Updates from: 08/15/2023 01:45:33
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Nok Nok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-nok-nok.md
To get started, you need:
* If you don't have one, get an [Azure free account](https://azure.microsoft.com/free/)
* An Azure AD B2C tenant linked to the Azure subscription
* [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md)
-* Go to [noknok.com](https://noknok.com/products/strong-authentication-service/). On the top menu, select **Demo**.
+* Go to [noknok.com](https://noknok.com/). On the top menu, select **Demo**.
## Scenario description
active-directory Sap Successfactors Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md
# How Azure Active Directory provisioning integrates with SAP SuccessFactors
-[Azure Active Directory user provisioning service](../app-provisioning/user-provisioning.md) integrates with [SAP SuccessFactors Employee Central](https://www.successfactors.com/products-services/core-hr-payroll/employee-central.html) to manage the identity life cycle of users. Azure Active Directory offers three prebuilt integrations:
+[Azure Active Directory user provisioning service](../app-provisioning/user-provisioning.md) integrates with [SAP SuccessFactors Employee Central](https://www.sap.com/products/hcm/employee-central-payroll.html) to manage the identity life cycle of users. Azure Active Directory offers three prebuilt integrations:
* [SuccessFactors to on-premises Active Directory user provisioning](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md)
* [SuccessFactors to Azure Active Directory user provisioning](../saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md)
If you want to exclude processing of prehires in the Onboarding module, update y
1. Save the mapping and validate that the scoping filter works using provisioning on demand.
### Enabling OData API Audit logs in SuccessFactors
-The Azure AD SuccessFactors connector uses SuccessFactors OData API to retrieve changes and provision users. If you observe issues with the provisioning service and want to confirm what data was retrieved from SuccessFactors, you can enable OData API Audit logs in SuccessFactors. To enable audit logs, follow the steps documented in [SAP support note 2680837](https://userapps.support.sap.com/sap/support/knowledge/en/2680837). Retrieve the request payload sent by Azure AD from the audit logs. To troubleshoot, you can copy this request payload in a tool like [Postman](https://www.postman.com/downloads/), set it up to use the same API user that is used by the connector and see if it returns the desired changes from SuccessFactors.
+The Azure AD SuccessFactors connector uses SuccessFactors OData API to retrieve changes and provision users. If you observe issues with the provisioning service and want to confirm what data was retrieved from SuccessFactors, you can enable OData API Audit logs in SuccessFactors. Retrieve the request payload sent by Azure AD from the audit logs. To troubleshoot, you can copy this request payload in a tool like [Postman](https://www.postman.com/downloads/), set it up to use the same API user that is used by the connector and see if it returns the desired changes from SuccessFactors.
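If you prefer to script the check rather than use Postman, the following is a minimal sketch (not part of the official guidance) that replays a captured payload against the SuccessFactors OData API with Python's `requests`. The host name, API user, company ID, entity, and query options are placeholders; replace them with the values taken from the audit log entry you're troubleshooting.

```python
# Sketch only: replay a request payload captured from the SuccessFactors
# OData API audit log, using the same API user as the Azure AD connector.
# API_HOST, API_USER, and the query below are placeholders, not real values.
import requests

API_HOST = "https://api4.successfactors.com"   # hypothetical tenant API host
API_USER = "sfapi-user@COMPANY_ID"             # the connector's API user
API_PASSWORD = "********"

# Example query shape only; paste the exact entity, path, and $filter
# from the audit log payload you retrieved.
url = f"{API_HOST}/odata/v2/PerPerson"
params = {"$format": "json", "$top": "10"}

response = requests.get(url, params=params, auth=(API_USER, API_PASSWORD), timeout=30)
response.raise_for_status()
print(response.json())
```

If the response doesn't contain the expected changes, the issue is on the SuccessFactors side rather than in the provisioning service.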
## Writeback scenarios
This section covers different write-back scenarios. It recommends configuration approaches based on how email and phone number are set up in SuccessFactors.
active-directory Skip Out Of Scope Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/skip-out-of-scope-deletions.md
This article describes how to use the Microsoft Graph API and the Microsoft Grap
* If ***SkipOutOfScopeDeletions*** is set to 0 (false), accounts that go out of scope are disabled in the target.
* If ***SkipOutOfScopeDeletions*** is set to 1 (true), accounts that go out of scope aren't disabled in the target.

This flag is set at the *Provisioning App* level and can be configured using the Graph API.
-Because this configuration is widely used with the *Workday to Active Directory user provisioning* app, the following steps include screenshots of the Workday application. However, the configuration can also be used with *all other apps*, such as ServiceNow, Salesforce, and Dropbox and [cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-configure.md). To successfully complete this procedure, you must have first set up app provisioning for the app. Each app has its own configuration article. For example, to configure the Workday application, see [Tutorial: Configure Workday to Azure AD user provisioning](../saas-apps/workday-inbound-cloud-only-tutorial.md).
+Because this configuration is widely used with the *Workday to Active Directory user provisioning* app, the following steps include screenshots of the Workday application. However, the configuration can also be used with *all other apps*, such as ServiceNow, Salesforce, and Dropbox. To successfully complete this procedure, you must have first set up app provisioning for the app. Each app has its own configuration article. For example, to configure the Workday application, see [Tutorial: Configure Workday to Azure AD user provisioning](../saas-apps/workday-inbound-cloud-only-tutorial.md). SkipOutOfScopeDeletions does not work for cross-tenant synchronization.
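As a rough illustration only (the steps that follow are the authoritative procedure), a Graph call that sets this flag might look like the sketch below. The endpoint, payload shape, and token handling here are assumptions made for the example; confirm them against the remaining steps in the article before using.

```python
# Illustrative sketch: set the SkipOutOfScopeDeletions flag on a provisioning
# app through Microsoft Graph. The endpoint and payload are assumptions for
# illustration; the article's own steps are the source of truth.
import requests

ACCESS_TOKEN = "<graph-access-token>"                       # token with the required Graph permissions
SERVICE_PRINCIPAL_ID = "<provisioning-app-service-principal-object-id>"  # from Step 1 below

url = (
    "https://graph.microsoft.com/beta/servicePrincipals/"
    f"{SERVICE_PRINCIPAL_ID}/synchronization/secrets"
)
payload = {"value": [{"key": "SkipOutOfScopeDeletions", "value": "True"}]}

resp = requests.put(
    url,
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
```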
## Step 1: Retrieve your Provisioning App Service Principal ID (Object ID)
active-directory Application Proxy Add On Premises Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-add-on-premises-application.md
To use Application Proxy, you need a Windows server running Windows Server 2012
For high availability in your production environment, we recommend having more than one Windows server. For this tutorial, one Windows server is sufficient.
> [!IMPORTANT]
-> If you are installing the connector on Windows Server 2019, you must disable HTTP2 protocol support in the WinHttp component for Kerberos Constrained Delegation to properly work. This is disabled by default in earlier versions of supported operating systems. Adding the following registry key and restarting the server disables it on Windows Server 2019. Note that this is a machine-wide registry key.
+> **.NET Framework**
+>
+> You must have .NET version 4.7.1 or higher to install, or upgrade, Application Proxy version 1.5.3437.0 or later. Windows Server 2012 R2 and Windows Server 2016 may not have this by default.
+>
+> See [How to: Determine which .NET Framework versions are installed](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed) for more information.
+>
+> **HTTP 2.0**
+>
+> If you are installing the connector on Windows Server 2019, you must disable HTTP2 protocol support in the WinHttp component for Kerberos Constrained Delegation to properly work. This is disabled by default in earlier versions of supported operating systems. Adding the following registry key and restarting the server disables it on Windows Server 2019. Note that this is a machine-wide registry key.
>
> ```
> Windows Registry Editor Version 5.00
active-directory Application Proxy Configure Complex Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-complex-application.md
This article provides you with the information you need to configure wildcard ap
## Pre-requisites
Before you get started with Application Proxy Complex application scenario apps, make sure your environment is ready with the following settings and configurations:
-- You need to enable Application Proxy and install a connector that has line of site to your applications. See the tutorial [Add an on-premises application for remote access through Application Proxy](application-proxy-add-on-premises-application.md#add-an-on-premises-app-to-azure-ad) to learn how to prepare your on-premises environment, install and register a connector, and test the connector.
+- You need to enable Application Proxy and install a connector that has line of sight to your applications. See the tutorial [Add an on-premises application for remote access through Application Proxy](application-proxy-add-on-premises-application.md#add-an-on-premises-app-to-azure-ad) to learn how to prepare your on-premises environment, install and register a connector, and test the connector.
## Configure application segment(s) for complex application.
active-directory Application Proxy Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connectors.md
Previously updated : 11/17/2022 Last updated : 08/09/2023
To deploy Application Proxy successfully, you need at least one connector, but w
### Windows Server You need a server running Windows Server 2012 R2 or later on which you can install the Application Proxy connector. The server needs to connect to the Application Proxy services in Azure, and the on-premises applications that you're publishing.
+Starting from version 1.5.3437.0, .NET version 4.7.1 or greater is required for a successful installation or upgrade.
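As an illustration only, a quick pre-install check for that requirement might look like the following sketch. It relies on the documented .NET Framework `Release` registry value (461308 or higher indicates 4.7.1 or later); the script itself is not part of the article.

```python
# Sketch of a pre-install check (assumes Windows and Python's built-in winreg):
# read the .NET Framework "Release" value; 461308 or higher corresponds to
# .NET Framework 4.7.1 or later.
import winreg

NET_KEY = r"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, NET_KEY) as key:
    release, _ = winreg.QueryValueEx(key, "Release")

print(f"Release = {release}")
print(".NET Framework 4.7.1 or later" if release >= 461308 else "Older than 4.7.1 - upgrade required")
```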
+ The server needs to have TLS 1.2 enabled before you install the Application Proxy connector. To enable TLS 1.2 on the server: 1. Set the following registry keys:
The server needs to have TLS 1.2 enabled before you install the Application Prox
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001
- [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319] "SchUseStrongCrypto"=dword:00000001
+ [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.8.4250.0] "SchUseStrongCrypto"=dword:00000001
``` A `regedit` file you can use to set these values follows:
The server needs to have TLS 1.2 enabled before you install the Application Prox
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001
- [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
+ [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.8.4250.0]
"SchUseStrongCrypto"=dword:00000001 ```
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-passwordless.md
The following providers offer FIDO2 security keys of different form factors that
| [IDmelon Technologies Inc.](https://www.idmelon.com/#idmelon) | ![y] | ![y]| ![y]| ![y]| ![n] |
| [Kensington](https://www.kensington.com/solutions/product-category/why-biometrics/) | ![y] | ![y]| ![n]| ![n]| ![n] |
| [KONA I](https://konai.com/business/security/fido) | ![y] | ![n]| ![y]| ![y]| ![n] |
-| [Movenda](https://www.movenda.com/en/authentication/fido2/overview) | ![y] | ![n]| ![y]| ![y]| ![n] |
| [NeoWave](https://neowave.fr/en/products/fido-range/) | ![n] | ![y]| ![y]| ![n]| ![n] |
| [Nymi](https://www.nymi.com/nymi-band) | ![y] | ![n]| ![y]| ![n]| ![n] |
| [Octatco](https://octatco.com/) | ![y] | ![y]| ![n]| ![n]| ![n] |
active-directory Concept Fido2 Hardware Vendor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-fido2-hardware-vendor.md
The following table lists partners who are Microsoft-compatible FIDO2 security k
| [IDmelon Technologies Inc.](https://www.idmelon.com/#idmelon) | ![y] | ![y]| ![y]| ![y]| ![n] |
| [Kensington](https://www.kensington.com/solutions/product-category/why-biometrics/) | ![y] | ![y]| ![n]| ![n]| ![n] |
| [KONA I](https://konai.com/business/security/fido) | ![y] | ![n]| ![y]| ![y]| ![n] |
-| [Movenda](https://www.movenda.com/en/authentication/fido2/overview) | ![y] | ![n]| ![y]| ![y]| ![n] |
| [NeoWave](https://neowave.fr/en/products/fido-range/) | ![n] | ![y]| ![y]| ![n]| ![n] |
| [Nymi](https://www.nymi.com/nymi-band) | ![y] | ![n]| ![y]| ![n]| ![n] |
| [Octatco](https://octatco.com/) | ![y] | ![y]| ![n]| ![n]| ![n] |
active-directory Tutorial Enable Sspr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-sspr.md
In this tutorial you learn how to:
> * Set up authentication methods and registration options
> * Test the SSPR process as a user
+> [!IMPORTANT]
+> In March 2023, we announced the deprecation of managing authentication methods in the legacy multifactor authentication (MFA) and self-service password reset (SSPR) policies. Beginning September 30, 2024, authentication methods can't be managed in these legacy MFA and SSPR policies. We recommend customers use the manual migration control to migrate to the Authentication methods policy by the deprecation date.
## Video tutorial
You can also follow along in a related video: [How to enable and configure SSPR in Azure AD](https://www.youtube.com/embed/rA8TvhNcCvQ?azure-portal=true).
active-directory Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md
Previously updated : 06/16/2023 Last updated : 08/09/2023 # Onboard a Google Cloud Platform (GCP) project
-This article describes how to onboard a Google Cloud Platform (GCP) project on Permissions Management.
+This article describes how to onboard a Google Cloud Platform (GCP) project in Microsoft Entra Permissions Management.
> [!NOTE]
> A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md).

## Explanation
-For GCP, permissions management is scoped to a *GCP project*. A GCP project is a logical collection of your resources in GCP, like a subscription in Azure, albeit with further configurations you can perform such as application registrations and OIDC configurations.
+For GCP, Permissions Management is scoped to a *GCP project*. A GCP project is a logical collection of your resources in GCP, like a subscription in Azure, but with further configurations you can perform such as application registrations and OIDC configurations.
-There are several moving parts across GCP and Azure, which are required to be configured before onboarding.
+There are several moving parts across GCP and Azure, which should be configured before onboarding.
* An Azure AD OIDC App
* A Workload Identity in GCP
There are several moving parts across GCP and Azure, which are required to be co
- In the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
-1. On the **Data Collectors** tab, select **GCP**, and then select **Create Configuration**.
+1. On the **Data Collectors** tab, select **GCP**, then select **Create Configuration**.
### 1. Create an Azure AD OIDC app.
There are several moving parts across GCP and Azure, which are required to be co
1. To create the app registration, copy the script and run it in your command-line app. > [!NOTE]
- > 1. To confirm that the app was created, open **App registrations** in Azure and, on the **All applications** tab, locate your app.
+ > 1. To confirm the app was created, open **App registrations** in Azure and, on the **All applications** tab, locate your app.
> 1. Select the app name to open the **Expose an API** page. The **Application ID URI** displayed in the **Overview** page is the *audience value* used while making an OIDC connection with your GCP account. > 1. Return to the Permissions Management window, and in the **Permissions Management Onboarding - Azure AD OIDC App Creation**, select **Next**.
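If you'd rather confirm the app registration and its Application ID URI (the audience value) with a script instead of the portal, a sketch like the following could work; the display name is a placeholder and the Graph token is assumed to carry Application.Read.All, so treat this as an illustration rather than part of the onboarding procedure.

```python
# Illustrative sketch: look up the app registration created by the onboarding
# script and print its Application ID URI, the audience used for the GCP OIDC
# connection. APP_DISPLAY_NAME is a placeholder, not a value from the article.
import requests

ACCESS_TOKEN = "<graph-access-token>"
APP_DISPLAY_NAME = "mciem-gcp-oidc-app"   # placeholder - use the name the script created

resp = requests.get(
    "https://graph.microsoft.com/v1.0/applications",
    params={
        "$filter": f"displayName eq '{APP_DISPLAY_NAME}'",
        "$select": "appId,displayName,identifierUris",
    },
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for app in resp.json()["value"]:
    print(app["displayName"], app["appId"], app["identifierUris"])
```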
Choose from three options to manage GCP projects.
#### Option 1: Automatically manage
-The automatically manage option allows projects to be automatically detected and monitored without extra configuration. Steps to detect list of projects and onboard for collection:
+The automatically manage option allows you to automatically detect and monitor projects without extra configuration. Steps to detect a list of projects and onboard for collection:
-1. Firstly, grant **Viewer** and **Security Reviewer** role to service account created in previous step at organization, folder or project scope.
+1. Grant **Viewer** and **Security Reviewer** roles to a service account created in the previous step at a project, folder or organization level.
-To enable controller mode 'On' for any projects, add following roles to the specific projects:
+To enable Controller mode **On** for any projects, add these roles to the specific projects:
- Role Administrators
- Security Admin
-2. Once done, the steps are listed in the screen, which shows how to further configure in the GPC console, or programmatically with the gCloud CLI.
+The required commands to run in Google Cloud Shell are listed in the Manage Authorization screen for each scope of a project, folder or organization. This can also be configured in the GCP console.
3. Select **Next**.
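For illustration, the general shape of the role grants described in the steps above is sketched below, driven from Python to keep the examples in one language. The project ID and service account email are placeholders, only project scope is shown, and the commands listed on the Manage Authorization screen remain the authoritative source.

```python
# Sketch only: grant Viewer and Security Reviewer to the collector service
# account at project scope using the gcloud CLI via subprocess. PROJECT_ID and
# SERVICE_ACCOUNT are placeholders; folder- and organization-scope variants are
# shown on the Manage Authorization screen.
import subprocess

PROJECT_ID = "my-gcp-project"                                                # placeholder
SERVICE_ACCOUNT = "mciem-collector@my-gcp-project.iam.gserviceaccount.com"   # placeholder

for role in ("roles/viewer", "roles/iam.securityReviewer"):
    subprocess.run(
        [
            "gcloud", "projects", "add-iam-policy-binding", PROJECT_ID,
            f"--member=serviceAccount:{SERVICE_ACCOUNT}",
            f"--role={role}",
        ],
        check=True,
    )
```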
You have the ability to specify only certain GCP member projects to manage and m
2. You can choose to download and run the script at this point, or you can do it via Google Cloud Shell.
- To enable controller mode 'On' for any projects, add following roles to the specific projects:
+ To enable controller mode 'On' for any projects, add these roles to the specific projects:
 - Role Administrators
 - Security Admin
You have the ability to specify only certain GCP member projects to manage and m
#### Option 3: Select authorization systems
-This option detects all projects that are accessible by the Cloud Infrastructure Entitlement Management application.
+This option detects all projects accessible by the Cloud Infrastructure Entitlement Management application.
-1. Firstly, grant Viewer and Security Reviewer role to service account created in previous step at organization, folder or project scope
+1. Grant **Viewer** and **Security Reviewer** roles to a service account created in the previous step at a project, folder or organization level.
+
+To enable Controller mode **On** for any projects, add these roles to the specific projects:
+- Role Administrators
+- Security Admin
+
+The required commands to run in Google Cloud Shell are listed in the Manage Authorization screen for each scope of a project, folder or organization. This can also be configured in the GCP console.
- To enable controller mode 'On' for any projects, add following roles to the specific projects:
- - Role Administrators
- - Security Admin
-2. Once done, the steps are listed in the screen to do configure manually in the GPC console, or programmatically with the gCloud CLI
3. Select **Next**.
This option detects all projects that are accessible by the Cloud Infrastructure
- In the **Permissions Management Onboarding – Summary** page, review the information you've added, and then select **Verify Now & Save**.
- The following message appears: **Successfully Created Configuration.**
+ The following message appears: **Successfully Created Configuration**.
On the **Data Collectors** tab, the **Recently Uploaded On** column displays **Collecting**. The **Recently Transformed On** column displays **Processing.**
- You have now completed onboarding GCP, and Permissions Management has started collecting and processing your data.
+ You've completed onboarding GCP, and Permissions Management has started collecting and processing your data.
### 4. View the data.
active-directory Concept Token Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-token-protection.md
description: Learn how to use token protection in Conditional Access policies.
Previously updated : 07/18/2023 Last updated : 08/14/2023
Token protection (sometimes referred to as token binding in the industry) attempts to reduce attacks using token theft by ensuring a token is usable only from the intended device. When an attacker is able to steal a token, by hijacking or replay, they can impersonate their victim until the token expires or is revoked. Token theft is thought to be a relatively rare event, but the damage from it can be significant.
-Token protection creates a cryptographically secure tie between the token and the device (client secret) it's issued to. Without the client secret, the bound token is useless. When a user registers a Windows 10 or newer device in Azure AD, their primary identity is [bound to the device](../devices/concept-primary-refresh-token.md#how-is-the-prt-protected). What this means is that a policy can ensure that only bound sign-in session (or refresh) tokens, otherwise known as Primary Refresh Tokens (PRTs) are used by applications when requesting access to a resource.
+Token protection creates a cryptographically secure tie between the token and the device (client secret) it's issued to. Without the client secret, the bound token is useless. When a user registers a Windows 10 or newer device in Azure AD, their primary identity is [bound to the device](../devices/concept-primary-refresh-token.md#how-is-the-prt-protected). What this means: A policy can ensure that only bound sign-in session (or refresh) tokens, otherwise known as Primary Refresh Tokens (PRTs) are used by applications when requesting access to a resource.
> [!IMPORTANT] > Token protection is currently in public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
With this preview, we're giving you the ability to create a Conditional Access p
## Requirements
-This preview supports the following configurations:
+This preview supports the following configurations for access to resources with Token Protection conditional access policies applied:
* Windows 10 or newer devices that are Azure AD joined, hybrid Azure AD joined, or Azure AD registered.
* OneDrive sync client version 22.217 or later
* Teams native client version 1.6.00.1331 or later
+* Power BI desktop version 2.117.841.0 (May 2023) or later
+* Visual Studio 2022 or later when using the 'Windows authentication broker' Sign-in option
* Office Perpetual clients aren't supported

### Known limitations

- External users (Azure AD B2B) aren't supported and shouldn't be included in your Conditional Access policy.
- The following applications don't support signing in using protected token flows and users are blocked when accessing Exchange and SharePoint:
- - Power BI Desktop client
  - PowerShell modules accessing Exchange, SharePoint, or Microsoft Graph scopes that are served by Exchange or SharePoint
  - PowerQuery extension for Excel
  - Extensions to Visual Studio Code which access Exchange or SharePoint
- - Visual Studio
- - The new Teams 2.1 preview client gets blocked after sign out due to a bug. This bug should be fixed in an August release.
+ - The new Teams 2.1 preview client gets blocked after sign out due to a bug. This bug should be fixed in a future service update.
- The following Windows client devices aren't supported:
  - Windows Server
  - Surface Hub
  - Windows-based Microsoft Teams Rooms (MTR) systems
+## Licensing requirements
+> [!NOTE]
+> Token Protection enforcement is part of Microsoft Entra ID Protection and will be part of the P2 license at general availability.
## Deployment
For users, the deployment of a Conditional Access policy to enforce token protection should be invisible when using compatible client platforms on registered devices and compatible applications.
You can also use [Log Analytics](../reports-monitoring/tutorial-log-analytics-wi
Here's a sample Log Analytics query searching the non-interactive sign-in logs for the last seven days, highlighting **Blocked** versus **Allowed** requests by **Application**. These queries are only samples and are subject to change. > [!NOTE]
-> **Sign In logs output:** The value of the string used in "enforcedSessionControls" and "sessionControlsNotSatisfied" changed from "Binding" to "SignInTokenProtection" in late June 2023. Queries on Sign In Log data should be updated to reflect this change.
+> **Sign In logs output:** The value of the string used in "enforcedSessionControls" and "sessionControlsNotSatisfied" changed from "Binding" to "SignInTokenProtection" in late June 2023. Queries on Sign In Log data should be updated to reflect this change. The examples cover both values to include historical data.
```kusto //Per Apps query
AADNonInteractiveUserSignInLogs
//Add userPrinicpalName if you want to filter // | where UserPrincipalName =="<user_principal_Name>" | mv-expand todynamic(ConditionalAccessPolicies)
-| where ConditionalAccessPolicies ["enforcedSessionControls"] contains '["SignInTokenProtection"]'
+| where ConditionalAccessPolicies ["enforcedSessionControls"] contains '["Binding"]' or ConditionalAccessPolicies ["enforcedSessionControls"] contains '["SignInTokenProtection"]'
| where ConditionalAccessPolicies.result !="reportOnlyNotApplied" and ConditionalAccessPolicies.result !="notApplied" | extend SessionNotSatisfyResult = ConditionalAccessPolicies["sessionControlsNotSatisfied"]
-| extend Result = case (SessionNotSatisfyResult contains 'SignInTokenProtection', 'Block','Allow')
+| extend Result = case (SessionNotSatisfyResult contains 'Binding' or SessionNotSatisfyResult contains 'SignInTokenProtection', 'Block','Allow')
| summarize by Id,UserPrincipalName, AppDisplayName, Result | summarize Requests = count(), Users = dcount(UserPrincipalName), Block = countif(Result == "Block"), Allow = countif(Result == "Allow"), BlockedUsers = dcountif(UserPrincipalName, Result == "Block") by AppDisplayName | extend PctAllowed = round(100.0 * Allow/(Allow+Block), 2)
AADNonInteractiveUserSignInLogs
//Add userPrincipalName if you want to filter // | where UserPrincipalName =="<user_principal_Name>" | mv-expand todynamic(ConditionalAccessPolicies)
-| where ConditionalAccessPolicies.enforcedSessionControls contains '["SignInTokenProtection"]'
+| where ConditionalAccessPolicies ["enforcedSessionControls"] contains '["Binding"]' or ConditionalAccessPolicies ["enforcedSessionControls"] contains '["SignInTokenProtection"]'
| where ConditionalAccessPolicies.result !="reportOnlyNotApplied" and ConditionalAccessPolicies.result !="notApplied" | extend SessionNotSatisfyResult = ConditionalAccessPolicies.sessionControlsNotSatisfied
-| extend Result = case (SessionNotSatisfyResult contains 'SignInTokenProtection', 'Block','Allow')
+| extend Result = case (SessionNotSatisfyResult contains 'Binding' or SessionNotSatisfyResult contains 'SignInTokenProtection', 'Block','Allow')
| summarize by Id, UserPrincipalName, AppDisplayName, ResourceDisplayName,Result | summarize Requests = count(),Block = countif(Result == "Block"), Allow = countif(Result == "Allow") by UserPrincipalName, AppDisplayName,ResourceDisplayName | extend PctAllowed = round(100.0 * Allow/(Allow+Block), 2)
active-directory Howto Conditional Access Policy Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-registration.md
After administrators confirm the settings using [report-only mode](howto-conditi
Administrators will now have to issue Temporary Access Pass credentials to new users so they can satisfy the requirements for multifactor authentication to register. Steps to accomplish this task are found in the section [Create a Temporary Access Pass in the Azure AD Portal](../authentication/howto-authentication-temporary-access-pass.md#create-a-temporary-access-pass).
-Organizations may choose to require other grant controls with or in place of **Require multifactor authentication** at step 7a. When selecting multiple controls, be sure to select the appropriate radio button toggle to require **all** or **one** of the selected controls when making this change.
+Organizations may choose to require other grant controls with or in place of **Require multifactor authentication** at step 8a. When selecting multiple controls, be sure to select the appropriate radio button toggle to require **all** or **one** of the selected controls when making this change.
### Guest user registration
active-directory App Sign In Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-sign-in-flow.md
Previously updated : 02/17/2023 Last updated : 08/11/2023
active-directory Application Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/application-model.md
Previously updated : 02/17/2023 Last updated : 08/17/2023
active-directory Howto Call A Web Api With Curl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-call-a-web-api-with-curl.md
Title: Call an ASP.NET Core web API with cURL description: Learn how to call a protected ASP.NET Core Web API using the Microsoft identity platform with cURL. Last updated 03/14/2023 zone_pivot_groups: web-api-howto-prereq
active-directory Howto Call A Web Api With Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-call-a-web-api-with-postman.md
Title: Call an ASP.NET Core web API with Postman description: Learn how to call a protected ASP.NET Core Web API using the Microsoft identity platform and Postman. Last updated 05/25/2023 zone_pivot_groups: web-api-howto-prereq
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md
Previously updated : 04/27/2023 Last updated : 08/11/2023
active-directory Quickstart Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-create-new-tenant.md
Previously updated : 04/19/2023 Last updated : 07/11/2023
If you don't have a tenant associated with your account, you'll see a GUID under
### Create a new Azure AD tenant
-If you don't already have an Azure AD tenant or if you want to create a new one for development, see [Create a new tenant in Azure AD](../fundamentals/active-directory-access-create-new-tenant.md). Or use the [directory creation experience](https://portal.azure.com/#create/Microsoft.AzureActiveDirectory) in the Azure portal.
+If you don't already have an Azure AD tenant or if you want to create a new one for development, see [Create a new tenant in Azure AD](../fundamentals/active-directory-access-create-new-tenant.md) or use the [directory creation experience](https://portal.azure.com/#create/Microsoft.AzureActiveDirectory) in the Azure portal. If you want to create a tenant for app testing, see [build a test environment](test-setup-environment.md).
You'll provide the following information to create your new tenant:
active-directory Single Page App Tutorial 01 Register App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-page-app-tutorial-01-register-app.md
Title: "Tutorial: Register a Single-page application with the Microsoft identity platform" description: Register an application in an Azure Active Directory tenant. Last updated 02/27/2023 #Customer intent: As a React developer, I want to know how to register my application with the Microsoft identity platform so that the security token service can issue access tokens to client applications that request them.
active-directory Single Page App Tutorial 02 Prepare Spa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-page-app-tutorial-02-prepare-spa.md
Title: "Tutorial: Prepare an application for authentication" description: Register a tenant application and configure it for a React SPA. Last updated 02/27/2023 #Customer intent: As a React developer, I want to know how to create a new React project in an IDE and add authentication.
active-directory Single Page App Tutorial 03 Sign In Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-page-app-tutorial-03-sign-in-users.md
Title: "Tutorial: Create components for sign in and sign out in a React single-page app" description: Add sign in and sign out components to your React single-page app. Last updated 02/28/2023
active-directory Single Page App Tutorial 04 Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-page-app-tutorial-04-call-api.md
Title: "Tutorial: Call an API from a React single-page app" description: Call an API from a React single-page app. Last updated 11/28/2022 #Customer intent: As a React developer, I want to know how to create a user interface and access the Microsoft Graph API
active-directory Troubleshoot Publisher Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/troubleshoot-publisher-verification.md
Previously updated : 03/16/2023 Last updated : 08/11/2023
active-directory Web Api Tutorial 01 Register App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-tutorial-01-register-app.md
Title: "Tutorial: Register a web API with the Microsoft identity platform" description: In this tutorial, you learn how to register a web API with the Microsoft identity platform. Last updated 11/1/2022 #Customer intent: As an application developer, I want to know how to register my application with the Microsoft identity platform so that the security token service can issue access tokens to client applications that request them.
active-directory Web Api Tutorial 02 Prepare Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-tutorial-02-prepare-api.md
Title: "Tutorial: Create and configure an ASP.NET Core project for authentication" description: "Create and configure the API in an IDE, add configuration for authentication and install required packages" Last updated 11/1/2022 #Customer intent: As an application developer, I want to create an ASP.NET Core project in an IDE, then configure it in such a way that I can add authentication with Azure AD.
active-directory Web Api Tutorial 03 Protect Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-tutorial-03-protect-endpoint.md
Title: "Tutorial: Implement a protected endpoint to your API" description: Protect the endpoint of an API, then run it to ensure it's listening for HTTP requests. Last updated 11/1/2022 #Customer intent: As an application developer I want to protect the endpoint of my API and run it to ensure it is listening for HTTP requests
active-directory Web App Tutorial 01 Register Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-tutorial-01-register-application.md
Title: "Tutorial: Register an application with the Microsoft identity platform" description: In this tutorial, you learn how to register a web application with the Microsoft identity platform. Last updated 02/09/2023 #Customer intent: As an application developer, I want to know how to register my application with the Microsoft identity platform so that the security token service can issue access tokens to client applications that request them.
active-directory Web App Tutorial 02 Prepare Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-tutorial-02-prepare-application.md
Title: "Tutorial: Prepare a web application for authentication" description: Prepare an ASP.NET Core application for authentication using Visual Studio. Last updated 02/09/2023 #Customer intent: As an application developer, I want to use an IDE to set up an ASP.NET Core project, set up and upload a self signed certificate to the Azure portal and configure the application for authentication.
active-directory Web App Tutorial 03 Sign In Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-tutorial-03-sign-in-users.md
Title: "Tutorial: Add sign in to an application" description: Add sign in to an ASP.NET Core application using Visual Studio. Last updated 02/09/2023 #Customer intent: As an application developer, I want to install the NuGet packages necessary for authentication in my IDE, and implement authentication in my web app.
active-directory Web App Tutorial 04 Call Web Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-tutorial-04-call-web-api.md
Title: "Tutorial: Call an API and display the results" description: Call an API and display the results. Last updated 02/09/2023 #Customer intent: As an application developer, I want to use my app to call a web API, in this case Microsoft Graph. I need to know how to modify my code so the API can be called successfully.
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
The AADLoginForWindows extension must be installed successfully for the VM to co
| Command to run | Expected output |
| --- | --- |
- | `curl -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01"` | Correct information about the Azure VM |
- | `curl -H Metadata:true "http://169.254.169.254/metadata/identity/info?api-version=2018-02-01"` | Valid tenant ID associated with the Azure subscription |
- | `curl -H Metadata:true "http://169.254.169.254/metadata/identity/oauth2/token?resource=urn:ms-drs:enterpriseregistration.windows.net&api-version=2018-02-01"` | Valid access token issued by Azure Active Directory for the managed identity that is assigned to this VM |
+ | `curl.exe -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01"` | Correct information about the Azure VM |
+ | `curl.exe -H Metadata:true "http://169.254.169.254/metadata/identity/info?api-version=2018-02-01"` | Valid tenant ID associated with the Azure subscription |
+ | `curl.exe -H Metadata:true "http://169.254.169.254/metadata/identity/oauth2/token?resource=urn:ms-drs:enterpriseregistration.windows.net&api-version=2018-02-01"` | Valid access token issued by Azure Active Directory for the managed identity that is assigned to this VM |
> [!NOTE]
> You can decode the access token by using a tool like [calebb.net](http://calebb.net/). Verify that the `oid` value in the access token matches the managed identity that's assigned to the VM.
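If you'd rather script these checks than run `curl.exe` by hand, here's an equivalent sketch using Python's `requests` against the same Instance Metadata Service endpoints shown in the table above; run it on the VM itself, since IMDS is only reachable from inside the VM.

```python
# Sketch: query the Azure Instance Metadata Service endpoints listed above.
# The "Metadata: true" header is required; run this on the VM itself.
import requests

IMDS = "http://169.254.169.254/metadata"
HEADERS = {"Metadata": "true"}

instance = requests.get(f"{IMDS}/instance?api-version=2017-08-01", headers=HEADERS, timeout=5)
identity = requests.get(f"{IMDS}/identity/info?api-version=2018-02-01", headers=HEADERS, timeout=5)
token = requests.get(
    f"{IMDS}/identity/oauth2/token"
    "?resource=urn:ms-drs:enterpriseregistration.windows.net&api-version=2018-02-01",
    headers=HEADERS,
    timeout=5,
)
for resp in (instance, identity, token):
    resp.raise_for_status()

print(instance.json())                              # VM metadata
print(identity.json())                              # tenant ID info
print(token.json()["access_token"][:40], "...")     # truncated access token
```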
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 07/28/2023 Last updated : 08/14/2023
When [managing licenses in the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]
->This information last updated on July 28th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on August 14th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When [managing licenses in the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft 365 Domestic Calling Plan (120 Minutes) | MCOPSTN_5 | 11dee6af-eca8-419f-8061-6864517c1875 | MCOPSTN5 (54a152dc-90de-4996-93d2-bc47e670fc06) | MICROSOFT 365 DOMESTIC CALLING PLAN (120 min) (54a152dc-90de-4996-93d2-bc47e670fc06) | | Microsoft 365 Domestic Calling Plan for GCC | MCOPSTN_1_GOV | 923f58ab-fca1-46a1-92f9-89fda21238a8 | MCOPSTN1_GOV (3c8a8792-7866-409b-bb61-1b20ace0368b)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8) | Domestic Calling for Government (3c8a8792-7866-409b-bb61-1b20ace0368b)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8) | | Microsoft 365 E3 | SPE_E3 | 05e9a617-0261-4cee-bb44-138d3ef5d965 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>FORMS_PLAN_E3 (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>VIVAENGAGE_CORE (a82fbf69-b4d7-49f4-83a6-915b2cf354f4)<br/>Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection and Governance Analytics - Standard 
(2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>Microsoft Forms (Plan E3) (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>Microsoft Kaizala Pro (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>Viva Engage Core (a82fbf69-b4d7-49f4-83a6-915b2cf354f4)<br/>Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) |
+| Microsoft 365 E3 Extra Features | Microsoft_365_E3_Extra_Features | f5b15d67-b99e-406b-90f1-308452f94de6 | Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) |
| Microsoft 365 E3 - Unattended License | SPE_E3_RPA1 | c2ac2ee4-9bb1-47e4-8541-d689c7e83371 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION_unattended (8d77e2d9-9e28-4450-8431-0def64078fc5)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>FORMS_PLAN_E3 (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/> Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee) | Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Enterprise (Unattended) (8d77e2d9-9e28-4450-8431-0def64078fc5)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>Microsoft Forms (Plan E3) (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>Microsoft Kaizala Pro (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search 
(94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/> Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 (041fe683-03e4-45b6-b1af-c0cdc516daee) | | Microsoft 365 E3 (500 seats min) HUB | Microsoft_365_E3 | 0c21030a-7e60-4ec7-9a0f-0042e0e0211a | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>FORMS_PLAN_E3 (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 
(c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/> Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee) | Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection and Governance Analytics ΓÇô Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>Microsoft Forms (Plan E3) (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>Microsoft Kaizala Pro (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/> Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service 
(4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 (041fe683-03e4-45b6-b1af-c0cdc516daee) | | Microsoft 365 E3_USGOV_DOD | SPE_E3_USGOV_DOD | d61d61cc-f992-433f-a577-5bd016037eeb | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS_AR_DOD (fd500458-c24c-478e-856c-a6067a8376cd)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams for DOD (AR) (fd500458-c24c-478e-856c-a6067a8376cd)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office Online (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SharePoint Online (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) |
When [managing licenses in the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Defender Vulnerability Management | TVM_Premium_Standalone | 1925967e-8013-495f-9644-c99f8b463748 | TVM_PREMIUM_1 (36810a13-b903-490a-aa45-afbeb7540832) | Microsoft Defender Vulnerability Management (36810a13-b903-490a-aa45-afbeb7540832) | | Microsoft Defender Vulnerability Management Add-on | TVM_Premium_Add_on | ad7a56e0-6903-4d13-94f3-5ad491e78960 | TVM_PREMIUM_1 (36810a13-b903-490a-aa45-afbeb7540832) | Microsoft Defender Vulnerability Management (36810a13-b903-490a-aa45-afbeb7540832) | | Microsoft Dynamics CRM Online | CRMSTANDARD | d17b27af-3f49-4822-99f9-56a661538792 | CRMSTANDARD (f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MDM_SALES_COLLABORATION (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>NBPROFESSIONALFORCRM (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | MICROSOFT DYNAMICS CRM ONLINE PROFESSIONAL(f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS MARKETING SALES COLLABORATION - ELIGIBILITY CRITERIA APPLY (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>MICROSOFT SOCIAL ENGAGEMENT PROFESSIONAL - ELIGIBILITY CRITERIA APPLY (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
+| Microsoft Fabric (Free) | POWER_BI_STANDARD | a403ebcc-fae0-4ca2-8c8c-7a907fd6c235 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P0 (2049e525-b859-401b-b2a0-e0a31c4b1fe4) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI (free) (2049e525-b859-401b-b2a0-e0a31c4b1fe4) |
+| Microsoft Fabric (Free) for faculty | POWER_BI_STANDARD_FACULTY | ade29b5f-397e-4eb9-a287-0344bd46c68d | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P0 (2049e525-b859-401b-b2a0-e0a31c4b1fe4) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI (free) (2049e525-b859-401b-b2a0-e0a31c4b1fe4) |
+|Microsoft Fabric (Free) for student | POWER_BI_STANDARD_STUDENT | bdcaf6aa-04c1-4b8f-b64e-6e3bd505ac64 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P0 (2049e525-b859-401b-b2a0-e0a31c4b1fe4) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI (free) (2049e525-b859-401b-b2a0-e0a31c4b1fe4) |
| Microsoft Imagine Academy | IT_ACADEMY_AD | ba9a34de-4489-469d-879c-0f0f145321cd | IT_ACADEMY_AD (d736def0-1fde-43f0-a5be-e3f8b2de6e41) | MS IMAGINE ACADEMY (d736def0-1fde-43f0-a5be-e3f8b2de6e41) | | Microsoft Intune Device | INTUNE_A_D | 2b317a4a-77a6-4188-9437-b68a77b4e2c6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | | Microsoft Intune Device for Government | INTUNE_A_D_GOV | 2c21e77a-e0d6-4570-b38a-7ff2dc17d2ca | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
When [managing licenses in the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Teams Rooms Basic for EDU | Microsoft_Teams_Rooms_Basic_FAC | a4e376bd-c61e-4618-9901-3fc0cb1b88bb | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Teams_Room_Basic (8081ca9c-188c-4b49-a8e5-c23b5e9463a8)<br/>Teams_Room_Pro (ec17f317-f4bc-451e-b2da-0167e5c260f9)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Teams Rooms Basic (8081ca9c-188c-4b49-a8e5-c23b5e9463a8)<br/>Teams Rooms Pro (ec17f317-f4bc-451e-b2da-0167e5c260f9)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) | | Microsoft Teams Rooms Basic without Audio Conferencing | Microsoft_Teams_Rooms_Basic_without_Audio_Conferencing | 50509a35-f0bd-4c5e-89ac-22f0e16a00f8 | TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) | | Microsoft Teams Rooms Pro | Microsoft_Teams_Rooms_Pro | 4cde982a-ede4-4409-9ae6-b003453c8ea6 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
+| Microsoft Teams Rooms Pro for EDU | Microsoft_Teams_Rooms_Pro_FAC | c25e2b36-e161-4946-bef2-69239729f690 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MTRProManagement (ecc74eae-eeb7-4ad5-9c88-e8b2bfca75b8)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Teams_Room_Basic (8081ca9c-188c-4b49-a8e5-c23b5e9463a8)<br/>Teams_Room_Pro (ec17f317-f4bc-451e-b2da-0167e5c260f9)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Microsoft Teams Rooms Pro Management (ecc74eae-eeb7-4ad5-9c88-e8b2bfca75b8)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Teams Rooms Test 1 (8081ca9c-188c-4b49-a8e5-c23b5e9463a8)<br/>Teams Rooms Test 2 (ec17f317-f4bc-451e-b2da-0167e5c260f9)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Microsoft Intune Plan 1 (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
| Microsoft Teams Rooms Pro without Audio Conferencing | Microsoft_Teams_Rooms_Pro_without_Audio_Conferencing | 21943e3a-2429-4f83-84c1-02735cd49e78 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) | | Microsoft Teams Rooms Standard | MEETING_ROOM | 6070a4c8-34c6-4937-8dfb-39bbc6397a60 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Teams_Room_Standard (92c6b761-01de-457a-9dd9-793a975238f7)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Azure Active Directory Premium Plan 1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Teams Room Standard (92c6b761-01de-457a-9dd9-793a975238f7)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Microsoft Intune Plan 1 (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | | Microsoft Teams Shared Devices | MCOCAP | 295a8eb0-f78d-45c7-8b5b-1eed5ed02dff | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0) |
active-directory Authentication Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/authentication-conditional-access.md
# Authentication and Conditional Access for External Identities
+> [!TIP]
+> This article applies to B2B collaboration and B2B direct connect. If your tenant is configured for customer identity and access management, see [Security and governance in Azure AD for customers](customers/concept-security-customers.md).
+ When an external user accesses resources in your organization, the authentication flow is determined by the collaboration method (B2B collaboration or B2B direct connect), user's identity provider (an external Azure AD tenant, social identity provider, etc.), Conditional Access policies, and the [cross-tenant access settings](cross-tenant-access-overview.md) configured both in the user's home tenant and the tenant hosting resources. This article describes the authentication flow for external users who are accessing resources in your organization. Organizations can enforce multiple Conditional Access policies for their external users, which can be enforced at the tenant, app, or individual user level in the same way that they're enabled for full-time employees and members of the organization.
active-directory Tutorial Desktop App Maui Sign In Sign Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-desktop-app-maui-sign-in-sign-out.md
The next steps will organize our code so that the `main view` is defined.
1. Select **Add**. 1. The _MainView.xaml_ file will open in a new document tab, displaying all of the XAML markup that represents the UI of the page. Replace the XAML markup with the following markup: - :::code language="xaml" source="~/ms-identity-ciam-dotnet-tutorial/1-Authentication/2-sign-in-maui/Views/MainView.xaml" ::: 1. Save the file.
The next step is to add the code for the button's `Clicked` event.
:::code language="csharp" source="~/ms-identity-ciam-dotnet-tutorial/1-Authentication/2-sign-in-maui/Views/MainView.xaml.cs" :::
-The `MainView` class is a content page responsible for displaying the main view of the app. In the constructor, it retrieves the cached user account using the `MSALClientHelper` from the `PublicClientSingleton` instance and enables the sign-in button, if no cached user account is found.
+The `MainView` class is a content page responsible for displaying the main view of the app. In the constructor, it retrieves the cached user account using the `MSALClientHelper` from the `PublicClientSingleton` instance and enables the sign-in button if no cached user account is found.
When the sign-in button is clicked, it calls the `AcquireTokenSilentAsync` method to acquire a token silently and navigates to the `claimsview` page using the `Shell.Current.GoToAsync` method. Additionally, the `OnBackButtonPressed` method is overridden to return true, indicating that the back button is disabled for this view.
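As a rough sketch of what that code-behind might look like based on the description above: the `PublicClientSingleton`, `MSALClientHelper`, `AcquireTokenSilentAsync`, and `claimsview` route names come from the paragraphs above, while the `SignInButton` control name and the `FetchSignedInUserFromCache` helper are assumptions, so the file in the sample repository may differ.

```csharp
using Microsoft.Identity.Client;

public partial class MainView : ContentPage
{
    public MainView()
    {
        InitializeComponent();

        // Look in the MSAL token cache for a previously signed-in account.
        // (FetchSignedInUserFromCache is an assumed helper name on MSALClientHelper.)
        IAccount cachedAccount = Task.Run(
            () => PublicClientSingleton.Instance.MSALClientHelper.FetchSignedInUserFromCache()).Result;

        // Enable the sign-in button only when no cached account is found.
        SignInButton.IsEnabled = cachedAccount == null;
    }

    private async void SignInButton_Clicked(object sender, EventArgs e)
    {
        // Acquire a token silently, then navigate to the claims view.
        await PublicClientSingleton.Instance.AcquireTokenSilentAsync();
        await Shell.Current.GoToAsync("claimsview");
    }

    // Returning true suppresses back navigation on this page.
    protected override bool OnBackButtonPressed() => true;
}
```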
The next steps will organize the code so that `ClaimsView` page is defined. The
1. Select **Add**. 1. The _ClaimsView.xaml_ file will open in a new document tab, displaying all of the XAML markup that represents the UI of the page. Replace the XAML markup with the following markup: - :::code language="xaml" source="~/ms-identity-ciam-dotnet-tutorial/1-Authentication/2-sign-in-maui/Views/ClaimsView.xaml" :::
- This XAML markup code represents the UI layout for a claim view in a .NET MAUI app. It starts by defining the `ContentPage` with a title and disabling the back button behavior.
-
- Inside a `VerticalStackLayout`, there are several `Label` elements displaying static text, followed by a `ListView` named `Claims` that binds to a collection called `IdTokenClaims` to display the claims found in the ID token. Each claim is rendered within a `ViewCell` using a `DataTemplate` and displayed as a centered `Label` within a Grid.
-
- Lastly, there's a `Sign Out` button centered at the bottom of the layout, which triggers the `SignOutButton_Clicked` event handler when clicked.
+ This XAML markup code represents the UI layout for a claim view in a .NET MAUI app. It starts by defining the `ContentPage` with a title and disabling the back button behavior.
+
+ Inside a `VerticalStackLayout`, there are several `Label` elements displaying static text, followed by a `ListView` named `Claims` that binds to a collection called `IdTokenClaims` to display the claims found in the ID token. Each claim is rendered within a `ViewCell` using a `DataTemplate` and displayed as a centered `Label` within a Grid.
+
+ Lastly, there's a `Sign Out` button centered at the bottom of the layout, which triggers the `SignOutButton_Clicked` event handler when clicked.
#### Handle the ClaimsView data
The next step is to add the code to handle `ClaimsView` data.
:::code language="csharp" source="~/ms-identity-ciam-dotnet-tutorial/1-Authentication/2-sign-in-maui/Views/ClaimsView.xaml.cs" :::
- The _ClaimsView.xaml.cs_ code represents the code-behind for a claim view in a .NET MAUI app. It starts by importing the necessary namespaces and defining the `ClaimsView` class, which extends `ContentPage`. The `IdTokenClaims` property is an enumerable of strings, initially set to a single string indicating no claims found.
+ The _ClaimsView.xaml.cs_ code represents the code-behind for a claim view in a .NET MAUI app. It starts by importing the necessary namespaces and defining the `ClaimsView` class, which extends `ContentPage`. The `IdTokenClaims` property is an enumerable of strings, initially set to a single string indicating no claims found.
The `ClaimsView` constructor sets the binding context to the current instance, initializes the view components, and calls the `SetViewDataAsync` method asynchronously. The `SetViewDataAsync` method attempts to acquire a token silently, retrieves the claims from the authentication result, and sets the `IdTokenClaims` property to display them in the `ListView` named `Claims`. If a `MsalUiRequiredException` occurs, indicating that user interaction is needed for authentication, the app navigates to the claims view.
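The following is a minimal sketch of that flow. It assumes the same `PublicClientSingleton` helper as before, that the MSAL `AuthenticationResult` exposed as `AuthResult` provides the ID token claims through `ClaimsPrincipal`, and that the project's implicit usings (LINQ, tasks, collections) are available; the sample file may be structured differently.

```csharp
using Microsoft.Identity.Client;

public partial class ClaimsView : ContentPage
{
    // Shown in the ListView named Claims; defaults to a "no claims" message.
    public IEnumerable<string> IdTokenClaims { get; set; } = new[] { "No claims found in ID token" };

    public ClaimsView()
    {
        BindingContext = this;
        InitializeComponent();
        _ = SetViewDataAsync();
    }

    private async Task SetViewDataAsync()
    {
        try
        {
            // Try to get a token without user interaction, then surface the ID token claims.
            await PublicClientSingleton.Instance.AcquireTokenSilentAsync();

            IdTokenClaims = PublicClientSingleton.Instance.MSALClientHelper.AuthResult.ClaimsPrincipal.Claims
                .Select(c => c.Value)
                .ToList();
            Claims.ItemsSource = IdTokenClaims;
        }
        catch (MsalUiRequiredException)
        {
            // User interaction is required; route the user back through the claims view as described above.
            await Shell.Current.GoToAsync("claimsview");
        }
    }
}
```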
To create `appsettings.json`, follow these steps:
Set the **Debug Target** in the Visual Studio toolbar to the device you want to debug and test with. The following steps demonstrate setting the **Debug Target** to _Windows_: 1. Select **Debug Target** drop-down.
-1. Select **Framework**
+1. Select **Framework**
1. Select **net7.0-windows...** Run the app by pressing _F5_ or select the _play button_ at the top of Visual Studio.
Run the app by pressing _F5_ or select the _play button_ at the top of Visual St
## Next Steps -- [Customize the default branding](how-to-customize-branding-customers.md).-- [Configure sign-in with Google](how-to-google-federation-customers.md).
+> [!div class="nextstepaction"]
+> [Tutorial: Add app roles to .NET MAUI app and receive them in the ID token](tutorial-desktop-maui-role-based-access-control.md)
active-directory Tutorial Desktop Maui Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-desktop-maui-role-based-access-control.md
+
+ Title: "Tutorial: Use role-based access control in your .NET MAUI"
+description: This tutorial demonstrates how to add app roles to .NET Multi-platform App UI (.NET MAUI) shell and receive them in the ID token.
+ Last updated: 07/17/2023
+# Tutorial: Use role-based access control in your .NET MAUI app
+
+This tutorial demonstrates how to add app roles to .NET Multi-platform App UI (.NET MAUI) and receive them in the ID token.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> - Access the roles in the ID token.
+
+## Prerequisites
+
+- [Tutorial: Sign in users in .NET MAUI shell app](tutorial-desktop-app-maui-sign-in-sign-out.md)
+- [Using role-based access control for applications](how-to-use-app-roles-customers.md)
+
+## Receive groups and roles claims in .NET MAUI
+
+Once you configure your customer's tenant, you can retrieve your roles and groups claims in your client app. The roles and groups claims are both present in the ID token and the access token. Access tokens are only validated in the web APIs for which they were acquired by a client. The client shouldn't validate access tokens.
+
+The .NET MAUI app needs to check the app roles claims in the ID token to implement authorization on the client side.
+
+In this tutorial series, you created a .NET MAUI app where you developed the [_ClaimsView.xaml.cs_](tutorial-desktop-app-maui-sign-in-sign-out.md#handle-the-claimsview-data) to handle `ClaimsView` data. In this file, we inspect the contents of ID tokens and check the value of the roles claim.
+
+To access the role claim, you can modify the code snippet as follows:
+
+```csharp
+var idToken = PublicClientSingleton.Instance.MSALClientHelper.AuthResult.IdToken;
+var handler = new JwtSecurityTokenHandler();
+var token = handler.ReadJwtToken(idToken);
+// Get the role claim value
+var roleClaim = token.Claims.FirstOrDefault(c => c.Type == "roles")?.Value;
+
+if (!string.IsNullOrEmpty(roleClaim))
+{
+ // If the role claim exists, add it to the IdTokenClaims
+ IdTokenClaims = new List<string> { roleClaim };
+}
+else
+{
+ // If the role claim doesn't exist, add a message indicating that no role claim was found
+ IdTokenClaims = new List<string> { "No role claim found in ID token" };
+}
+
+Claims.ItemsSource = IdTokenClaims;
+```
+
+> [!NOTE]
+> To read the ID token, you must install the `System.IdentityModel.Tokens.Jwt` package.
+
+If you assign a user to multiple roles, the roles string contains all roles separated by a comma, such as `Orders.Manager, Store.Manager,...`. Make sure you build your application to handle the following conditions:
+
+- Absence of roles claims in the token
+- User hasn't been assigned to any role
+- Multiple values in the roles claim when you assign a user to multiple roles
+
+When you define app roles for your app, it is your responsibility to implement authorization logic for those roles.
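As one hedged illustration, the earlier snippet could be adapted along these lines to cover the three conditions above and perform a simple client-side role check. It reuses the `token`, `IdTokenClaims`, and `Claims` names from the snippet above and assumes the comma-separated roles format described here; the `Orders.Manager` check is purely illustrative.

```csharp
// Collect every "roles" claim and split any comma-separated values, so that
// zero, one, or many assigned roles are all handled.
var roles = token.Claims
    .Where(c => c.Type == "roles")
    .SelectMany(c => c.Value.Split(',', StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries))
    .ToList();

IdTokenClaims = roles.Count > 0
    ? roles
    : new List<string> { "No role claim found in ID token" };

// Example client-side authorization check for a specific app role (role name is illustrative).
bool isOrdersManager = roles.Contains("Orders.Manager");

Claims.ItemsSource = IdTokenClaims;
```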
+
+## Next steps
+
+For more information about group claims and making informed decisions regarding the usage of app roles or groups, see:
+
+- [Configuring group claims and app roles in tokens](/security/zero-trust/develop/configure-tokens-group-claims-app-roles)
+- [Choose an approach](../../develop/custom-rbac-for-developers.md#choose-an-approach)
active-directory Tutorial Mobile App Maui Sign In Sign Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-mobile-app-maui-sign-in-sign-out.md
The next step is to add the code for the button's `Clicked` event.
:::code language="csharp" source="~/ms-identity-ciam-dotnet-tutorial/1-Authentication/2-sign-in-maui/Views/MainView.xaml.cs" :::
- The `MainView` class is a content page responsible for displaying the main view of the app. In the constructor, it retrieves the cached user account using the `MSALClientHelper` from the `PublicClientSingleton` instance and enables the sign-in button, if no cached user account is found.
-
+ The `MainView` class is a content page responsible for displaying the main view of the app. In the constructor, it retrieves the cached user account using the `MSALClientHelper` from the `PublicClientSingleton` instance and enables the sign-in button if no cached user account is found.
+ When the sign-in button is clicked, it calls the `AcquireTokenSilentAsync` method to acquire a token silently and navigates to the `claimsview` page using the `Shell.Current.GoToAsync` method. Additionally, the `OnBackButtonPressed` method is overridden to return true, indicating that the back button is disabled for this view. ### Add claims view page
The next steps will organize the code so that `ClaimsView` page is defined. The
:::code language="xaml" source="~/ms-identity-ciam-dotnet-tutorial/1-Authentication/2-sign-in-maui/Views/ClaimsView.xaml" :::
- This XAML markup code represents the UI layout for a claim view in a .NET MAUI app. It starts by defining the `ContentPage` with a title and disabling the back button behavior.
-
- Inside a `VerticalStackLayout`, there are several `Label` elements displaying static text, followed by a `ListView` named `Claims` that binds to a collection called `IdTokenClaims` to display the claims found in the ID token. Each claim is rendered within a `ViewCell` using a `DataTemplate` and displayed as a centered `Label` within a Grid.
-
- Lastly, there's a `Sign Out` button centered at the bottom of the layout, which triggers the `SignOutButton_Clicked` event handler when clicked.
+ This XAML markup code represents the UI layout for a claim view in a .NET MAUI app. It starts by defining the `ContentPage` with a title and disabling the back button behavior.
+
+ Inside a `VerticalStackLayout`, there are several `Label` elements displaying static text, followed by a `ListView` named `Claims` that binds to a collection called `IdTokenClaims` to display the claims found in the ID token. Each claim is rendered within a `ViewCell` using a `DataTemplate` and displayed as a centered `Label` within a Grid.
+
+ Lastly, there's a `Sign Out` button centered at the bottom of the layout, which triggers the `SignOutButton_Clicked` event handler when clicked.
#### Handle the ClaimsView data
The next step is to add the code to handle `ClaimsView` data.
:::code language="csharp" source="~/ms-identity-ciam-dotnet-tutorial/1-Authentication/2-sign-in-maui/Views/ClaimsView.xaml.cs" :::
- The _ClaimsView.xaml.cs_ code represents the code-behind for a claim view in a .NET MAUI app. It starts by importing the necessary namespaces and defining the `ClaimsView` class, which extends `ContentPage`. The `IdTokenClaims` property is an enumerable of strings, initially set to a single string indicating no claims found.
+ The _ClaimsView.xaml.cs_ code represents the code-behind for a claim view in a .NET MAUI app. It starts by importing the necessary namespaces and defining the `ClaimsView` class, which extends `ContentPage`. The `IdTokenClaims` property is an enumerable of strings, initially set to a single string indicating no claims found.
The `ClaimsView` constructor sets the binding context to the current instance, initializes the view components, and calls the `SetViewDataAsync` method asynchronously. The `SetViewDataAsync` method attempts to acquire a token silently, retrieves the claims from the authentication result, and sets the `IdTokenClaims` property to display them in the `ListView` named `Claims`. If a `MsalUiRequiredException` occurs, indicating that user interaction is needed for authentication, the app navigates to the claims view.
The `AppShell` class defines an app's visual hierarchy, the XAML markup used in
1. In the **Solution Explorer** pane of Visual Studio, expand the **AppShell.xaml** file to reveal its code-behind file **AppShell.xaml.cs**. Open the **AppShell.xaml.cs** and replace the content of the file with following code: :::code language="csharp" source="~/ms-identity-ciam-dotnet-tutorial/1-Authentication/2-sign-in-maui/AppShell.xaml.cs" :::
-
+ You update the `AppShell.xaml.cs` file to include the necessary route registrations for the `MainView` and `ClaimsView`. By calling the `InitializeComponent()` method, you ensure the initialization of the `AppShell` class. The `RegisterRoute()` method associates the `mainview` and `claimsview` routes with their respective view types, `MainView` and `ClaimsView`. ## Add platform-specific code
A .NET MAUI app project contains a Platforms folder, with each child folder repr
- Set **Minimum Android version** to _Android 5.0 (API level 21)_. 1. Double-click `Platforms/Android/MainActivity.cs` file in the **Solution Explorer** pane to open the csharp editor. Replace the content of the file with following code:
-
+ :::code language="csharp" source="~/ms-identity-ciam-dotnet-tutorial/1-Authentication/2-sign-in-maui/Platforms/Android/MainActivity.cs" :::
-
- Let's break down the key parts of the code you have added:
-
- - The necessary `using` statements are included at the top.
- - The `MainActivity` class is defined, inheriting from `MauiAppCompatActivity`, which is the base class for the Android platform in .NET MAUI.
- - The [Activity] attribute is applied to the `MainActivity` class, specifying various settings for the Android activity.
- - `Theme = "@style/Maui.SplashTheme"` sets the splash theme for the activity.
- - `MainLauncher = true` designates this activity as the main entry point of the application.
- - `ConfigurationChanges` specifies the configuration changes that the activity can handle, such as _screen size_, _orientation_, _UI mode_, _screen layout_, _smallest screen size_, and _density_.
- - `OnCreate` method is overridden to provide custom logic when the activity is being created.
- - `base.OnCreate(savedInstanceState)` calls the base implementation of the method.
- - `PlatformConfig.Instance.RedirectUri` is set to a dynamically generated value based on `PublicClientSingleton.Instance.MSALClientHelper.AzureAdConfig.ClientId`. It configures the redirect URI for the MSAL client.
- - `PlatformConfig.Instance.ParentWindow` is set to the current activity instance, which specifies the parent window for authentication-related operations.
- - `PublicClientSingleton.Instance.MSALClientHelper.InitializePublicClientAppAsync()` initializes the MSAL client app asynchronously using a helper method from a singleton instance called `MSALClientHelper`. The `Task.Run` is used to execute the initialization on a background thread, and `.Result` is used to synchronously wait for the task to complete.
- - `OnActivityResult` method is overridden to handle the result of an activity launched by the current activity.
- - `base.OnActivityResult(requestCode, resultCode, data)` calls the base implementation of the method.
- - `AuthenticationContinuationHelper.SetAuthenticationContinuationEventArgs(requestCode, resultCode, data)` sets the authentication continuation event arguments based on the received request code, result code, and intent data. This is used to continue the authentication flow after an external activity returns a result.
+
+ Let's break down the key parts of the code you have added; a consolidated sketch of how these pieces fit together follows the list:
+
+ - The necessary `using` statements are included at the top.
+ - The `MainActivity` class is defined, inheriting from `MauiAppCompatActivity`, which is the base class for the Android platform in .NET MAUI.
+ - The [Activity] attribute is applied to the `MainActivity` class, specifying various settings for the Android activity.
+ - `Theme = "@style/Maui.SplashTheme"` sets the splash theme for the activity.
+ - `MainLauncher = true` designates this activity as the main entry point of the application.
+ - `ConfigurationChanges` specifies the configuration changes that the activity can handle, such as _screen size_, _orientation_, _UI mode_, _screen layout_, _smallest screen size_, and _density_.
+ - `OnCreate` method is overridden to provide custom logic when the activity is being created.
+ - `base.OnCreate(savedInstanceState)` calls the base implementation of the method.
+ - `PlatformConfig.Instance.RedirectUri` is set to a dynamically generated value based on `PublicClientSingleton.Instance.MSALClientHelper.AzureAdConfig.ClientId`. It configures the redirect URI for the MSAL client.
+ - `PlatformConfig.Instance.ParentWindow` is set to the current activity instance, which specifies the parent window for authentication-related operations.
+ - `PublicClientSingleton.Instance.MSALClientHelper.InitializePublicClientAppAsync()` initializes the MSAL client app asynchronously using a helper method from a singleton instance called `MSALClientHelper`. The `Task.Run` is used to execute the initialization on a background thread, and `.Result` is used to synchronously wait for the task to complete.
+ - `OnActivityResult` method is overridden to handle the result of an activity launched by the current activity.
+ - `base.OnActivityResult(requestCode, resultCode, data)` calls the base implementation of the method.
+ - `AuthenticationContinuationHelper.SetAuthenticationContinuationEventArgs(requestCode, resultCode, data)` sets the authentication continuation event arguments based on the received request code, result code, and intent data. This is used to continue the authentication flow after an external activity returns a result.
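Pulling those bullets together, the activity has roughly the following shape. Treat this as a sketch rather than a drop-in replacement for the included file: the namespaces, the `msal{ClientId}://auth` redirect URI format, and the assumption that `InitializePublicClientAppAsync` returns the cached account are not spelled out in the list above and may differ in the sample.

```csharp
using Android.App;
using Android.Content;
using Android.Content.PM;
using Android.OS;
using Microsoft.Identity.Client;
using SignInMaui.MSALClient; // hypothetical namespace for PublicClientSingleton and PlatformConfig

namespace SignInMaui.Platforms.Android; // hypothetical namespace

[Activity(
    Theme = "@style/Maui.SplashTheme",
    MainLauncher = true,
    ConfigurationChanges = ConfigChanges.ScreenSize | ConfigChanges.Orientation | ConfigChanges.UiMode
        | ConfigChanges.ScreenLayout | ConfigChanges.SmallestScreenSize | ConfigChanges.Density)]
public class MainActivity : MauiAppCompatActivity
{
    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);

        // Derive the redirect URI from the client ID and point MSAL at this activity
        // as the parent window for authentication UI.
        PlatformConfig.Instance.RedirectUri =
            $"msal{PublicClientSingleton.Instance.MSALClientHelper.AzureAdConfig.ClientId}://auth";
        PlatformConfig.Instance.ParentWindow = this;

        // Initialize the MSAL public client app; .Result blocks until initialization completes.
        // existingAccount is not used directly here; the call ensures MSAL is ready before sign-in.
        IAccount existingAccount = Task.Run(
            () => PublicClientSingleton.Instance.MSALClientHelper.InitializePublicClientAppAsync()).Result;
    }

    protected override void OnActivityResult(int requestCode, Result resultCode, Intent? data)
    {
        base.OnActivityResult(requestCode, resultCode, data);

        // Hand the browser/broker result back to MSAL so the authentication flow can continue.
        AuthenticationContinuationHelper.SetAuthenticationContinuationEventArgs(requestCode, resultCode, data);
    }
}
```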
1. In the **Solution Explorer** pane of Visual Studio, select **Platforms**. 1. Right-click on the **Android** folder > **Add** > **New Item...**.
To create `appsettings.json`, follow these steps:
Set the **Debug Target** in the Visual Studio toolbar to the device you want to debug and test with. The following steps demonstrate setting the **Debug Target** to _Android_: 1. Select **Debug Target** drop-down.
-1. Select **Android Emulators**.
+1. Select **Android Emulators**.
1. Select emulator device. Run the app by pressing _F5_ or select the _play button_ at the top of Visual Studio.
active-directory Tutorial Mobile Maui Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-mobile-maui-role-based-access-control.md
+
+ Title: "Tutorial: Use role-based access control in your .NET MAUI app"
+description: This tutorial demonstrates how to add app roles to .NET Multi-platform App UI (.NET MAUI) and receive them in the ID token.
+ Last updated: 07/17/2023
+# Tutorial: Use role-based access control in your .NET MAUI app
+
+This tutorial demonstrates how to add app roles to .NET Multi-platform App UI (.NET MAUI) and receive them in the ID token.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> - Access the roles in the ID token.
+
+## Prerequisites
+
+- [Tutorial: Sign in users in .NET MAUI shell app](tutorial-mobile-app-maui-sign-in-sign-out.md)
+- [Using role-based access control for applications](how-to-use-app-roles-customers.md)
+
+## Receive groups and roles claims in .NET MAUI
+
+Once you configure your customer's tenant, you can retrieve your roles and groups claims in your client app. The roles and groups claims are both present in the ID token and the access token. Access tokens are only validated in the web APIs for which they were acquired by a client. The client shouldn't validate access tokens.
+
+The .NET MAUI app needs to check the app roles claims in the ID token to implement authorization on the client side.
+
+In this tutorial series, you created a .NET MAUI app where you developed the [_ClaimsView.xaml.cs_](tutorial-mobile-app-maui-sign-in-sign-out.md#handle-the-claimsview-data) to handle `ClaimsView` data. In this file, we inspect the contents of ID tokens.
+
+To access the role claim, you can modify the code snippet as follows:
+
+```csharp
+var idToken = PublicClientSingleton.Instance.MSALClientHelper.AuthResult.IdToken;
+var handler = new JwtSecurityTokenHandler();
+var token = handler.ReadJwtToken(idToken);
+// Get the role claim value
+var roleClaim = token.Claims.FirstOrDefault(c => c.Type == "roles")?.Value;
+
+if (!string.IsNullOrEmpty(roleClaim))
+{
+ // If the role claim exists, add it to the IdTokenClaims
+ IdTokenClaims = new List<string> { roleClaim };
+}
+else
+{
+ // If the role claim doesn't exist, add a message indicating that no role claim was found
+ IdTokenClaims = new List<string> { "No role claim found in ID token" };
+}
+
+Claims.ItemsSource = IdTokenClaims;
+```
+
+> [!NOTE]
+> To read the ID token, you must install the `System.IdentityModel.Tokens.Jwt` package.
+
+If you assign a user to multiple roles, the roles string contains all roles separated by a comma, such as `Orders.Manager, Store.Manager,...`. Make sure you build your application to handle the following conditions:
+
+- Absence of roles claims in the token
+- User hasn't been assigned to any role
+- Multiple values in the roles claim when you assign a user to multiple roles
+
+When you define app roles for your app, it is your responsibility to implement authorization logic for those roles.
+
+## Next steps
+
+For more information about group claims and making informed decisions regarding the usage of app roles or groups, see:
+
+- [Configuring group claims and app roles in tokens](/security/zero-trust/develop/configure-tokens-group-claims-app-roles)
+- [Choose an approach](../../develop/custom-rbac-for-developers.md#choose-an-approach)
active-directory Facebook Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/facebook-federation.md
# Add Facebook as an identity provider for External Identities
+> [!TIP]
+> This article describes adding Facebook as an identity provider for B2B collaboration. If your tenant is configured for customer identity and access management, see [Add Facebook as an identity provider](customers/how-to-facebook-federation-customers.md) for customers.
+ You can add Facebook to your self-service sign-up user flows so that users can sign in to your applications using their own Facebook accounts. To allow users to sign in using Facebook, you'll first need to [enable self-service sign-up](self-service-sign-up-user-flow.md) for your tenant. After you add Facebook as an identity provider, set up a user flow for the application and select Facebook as one of the sign-in options. After you've added Facebook as one of your application's sign-in options, on the **Sign in** page, a user can simply enter the email they use to sign in to Facebook, or they can select **Sign-in options** and choose **Sign in with Facebook**. In either case, they'll be redirected to the Facebook sign in page for authentication.
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/google-federation.md
# Add Google as an identity provider for B2B guest users
+> [!TIP]
+> This article describes adding Google as an identity provider for B2B collaboration. If your tenant is configured for customer identity and access management, see [Add Google as an identity provider](customers/how-to-google-federation-customers.md) for customers.
+ By setting up federation with Google, you can allow invited users to sign in to your shared apps and resources with their own Gmail accounts, without having to create Microsoft accounts. After you've added Google as one of your application's sign-in options, on the **Sign in** page, a user can simply enter the Gmail address they use to sign in to Google. ![Sign in options for Google users](media/google-federation/sign-in-with-google-overview.png)
active-directory Identity Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/identity-providers.md
# Identity Providers for External Identities
+> [!TIP]
+> This article applies to B2B collaboration identity providers. If your tenant is configured for customer identity and access management, see [Authentication methods and identity providers for customers](customers/concept-authentication-methods-customers.md).
+ An *identity provider* creates, maintains, and manages identity information while providing authentication services to applications. When sharing your apps and resources with external users, Azure AD is the default identity provider for sharing. This means when you invite external users who already have an Azure AD or Microsoft account, they can automatically sign in without further configuration on your part. External Identities offers a variety of identity providers.
active-directory Self Service Sign Up Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/self-service-sign-up-overview.md
Previously updated : 09/28/2022 Last updated : 08/14/2023
# Self-service sign-up
-When sharing an application with external users, you might not always know in advance who will need access to the application. As an alternative to sending invitations directly to individuals, you can allow external users to sign up for specific applications themselves by enabling [self-service sign-up user flow](self-service-sign-up-user-flow.md). You can create a personalized sign-up experience by customizing the self-service sign-up user flow. For example, you can provide options to sign up with Azure AD or social identity providers and collect information about the user during the sign-up process.
+Self-service sign-up is an essential feature for your External ID workforce and customer scenarios. It gives your partners, consumers, and other external users a frictionless way to sign up and get access to your apps without any intervention on your part.
+
+- In a B2B collaboration scenario, you might not always know in advance who will need access to an application you want to share. As an alternative to sending invitations directly to individuals, you can allow external users to sign up for specific applications themselves. Learn how to [create a self-service sign-up user flow for B2B collaboration](self-service-sign-up-user-flow.md).
+- In a customer identity and access management (CIAM) scenario, it's important to add a self-service sign-up experience to the apps you build for consumers. You can do so by configuring self-service sign-up user flows. Learn more about [planning the customer experience](customers/concept-planning-your-solution.md) or [creating a sign-up and sign-in user flow for customers](customers/how-to-user-flow-sign-up-sign-in-customers.md).
+
+In either scenario, you can create a personalized sign-up experience by customizing the look and feel, providing sign-in with social identity providers, and collecting information about the user during the sign-up process.
> [!NOTE] > You can associate user flows with apps built by your organization. User flows can't be used for Microsoft apps, like SharePoint or Teams. ## User flow for self-service sign-up
-A self-service sign-up user flow creates a sign-up experience for your external users through the application you want to share. The user flow can be associated with one or more of your applications. First you'll enable self-service sign-up for your tenant and federate with the identity providers you want to allow external users to use for sign-in. Then you'll create and customize the sign-up user flow and assign your applications to it.
-You can configure user flow settings to control how the user signs up for the application:
+A self-service sign-up user flow creates a sign-up experience for the application you're providing to external users. You can configure user flow settings to control how the user signs up for the application:
- Account types used for sign-in, such as social accounts like Facebook, or Azure AD accounts - Attributes to be collected from the user signing up, such as first name, postal code, or country/region of residency
-The user can sign in to your application, via the web, mobile, desktop, or single-page application (SPA). The application initiates an authorization request to the user flow-provided endpoint. The user flow defines and controls the user's experience. When the user completes the sign-up user flow, Azure AD generates a token and redirects the user back to your application. Upon completion of sign-up, a guest account is provisioned for the user in the directory. Multiple applications can use the same user flow.
+The user can sign in to your application, via the web, mobile, desktop, or single-page application (SPA). The application initiates an authorization request to the user flow-provided endpoint. The user flow defines and controls the user's experience. When the user completes the sign-up user flow, Azure AD generates a token and redirects the user back to your application. Upon completion of sign-up, an account is provisioned for the user in the directory. Multiple applications can use the same user flow.
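For instance, a client built with MSAL.NET might initiate that authorization request as in the following sketch; the client ID, authority, and scope are placeholder values you would replace with your own app registration details, and the snippet assumes the app has a self-service sign-up user flow associated with it.

```csharp
using Microsoft.Identity.Client;

// Placeholder values: substitute your own app registration and tenant details.
IPublicClientApplication app = PublicClientApplicationBuilder
    .Create("00000000-0000-0000-0000-000000000000")
    .WithAuthority("https://login.microsoftonline.com/contoso.onmicrosoft.com")
    .WithDefaultRedirectUri()
    .Build();

// The interactive request is the authorization request described above: the user is sent
// to the sign-in page, where the self-service sign-up option can appear for this app.
AuthenticationResult result = await app
    .AcquireTokenInteractive(new[] { "User.Read" })
    .ExecuteAsync();

// On completion, Azure AD returns tokens for the signed-up (or signed-in) user.
Console.WriteLine($"Signed in as {result.Account.Username}");
```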
## Example of self-service sign-up
-The following example illustrates how we're bringing social identity providers to Azure AD with self-service sign-up capabilities for guest users.
-A partner of Woodgrove opens the Woodgrove app. They decide they want to sign up for a supplier account, so they select Request your supplier account, which initiates the self-service sign-up flow.
+The following B2B collaboration example illustrates self-service sign-up capabilities for guest users. A partner of Woodgrove opens the Woodgrove app. They decide they want to sign up for a supplier account, so they select Request your supplier account, which initiates the self-service sign-up flow.
![Example of self-service sign-up starting page](media/self-service-sign-up-overview/example-start-sign-up-flow.png)
The user enters the information, continues the sign-up flow, and gets access to
## Next steps
- For details, see how to [add self-service sign-up to an app](self-service-sign-up-user-flow.md).
+User flows for B2B collaboration:
+
+- [Create a self-service sign-up user flow for B2B collaboration](self-service-sign-up-user-flow.md)
+
+User flows for customer identity and access management (CIAM):
+
+- [Plan a sign-up experience for customers or consumers](customers/concept-planning-your-solution.md)
+- [Create a sign-up and sign-in user flow for customers or consumers](customers/how-to-user-flow-sign-up-sign-in-customers.md).
active-directory Self Service Sign Up User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/self-service-sign-up-user-flow.md
# Add a self-service sign-up user flow to an app
+> [!TIP]
+> This article applies to B2B collaboration user flows. If your tenant is configured for customer identity and access management, see [Create a sign-up and sign-in user flow for customers](customers/how-to-user-flow-sign-up-sign-in-customers.md).
+ For applications you build, you can create user flows that allow a user to sign up for an app and create a new guest account. A self-service sign-up user flow defines the series of steps the user will follow during sign-up, the [identity providers](identity-providers.md) you'll allow them to use, and the user attributes you want to collect. You can associate one or more applications with a single user flow. > [!NOTE]
active-directory User Flow Add Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-flow-add-custom-attributes.md
# Define custom attributes for user flows
+> [!TIP]
+> This article applies to B2B collaboration user flows. If your tenant is configured for customer identity and access management, see [Collect user attributes during sign-up](customers/how-to-define-custom-attributes.md) for customers.
+ For each application, you might have different requirements for the information you want to collect during sign-up. Azure AD comes with a built-in set of information stored in attributes, such as Given Name, Surname, City, and Postal Code. With Azure AD, you can extend the set of attributes stored on a guest account when the external user signs up through a user flow. You can create custom attributes in the Azure portal and use them in your [self-service sign-up user flows](self-service-sign-up-user-flow.md). You can also read and write these attributes by using the [Microsoft Graph API](../../active-directory-b2c/microsoft-graph-operations.md). Microsoft Graph API supports creating and updating a user with extension attributes. Extension attributes in the Graph API are named by using the convention `extension_<extensions-app-id>_attributename`. For example:
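With a hypothetical extensions app ID of `831374b3-bd50-41bf-aa47-cd7ab7125ee1` (made up for illustration) and a custom attribute named `LoyaltyNumber`, the attribute would surface in Microsoft Graph as `extension_831374b3bd5041bfaa47cd7ab7125ee1_LoyaltyNumber`, with the app ID written without hyphens.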
active-directory User Flow Customize Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-flow-customize-language.md
# Language customization in Azure Active Directory
+> [!TIP]
+> This article applies to B2B collaboration user flows. If your tenant is configured for customer identity and access management, see [Customize the language of the authentication experience](customers/how-to-customize-languages-customers.md) for customers.
+ Language customization in Azure Active Directory (Azure AD) allows your user flow to accommodate different languages to suit your user's needs. Microsoft provides the translations for [36 languages](#supported-languages). In this article, you'll learn how to customize the attribute names on the [attribute collection page](self-service-sign-up-user-flow.md#select-the-layout-of-the-attribute-collection-form), even if your experience is provided for only a single language. ## How language customization works
active-directory Concept Support Access Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-support-access-requests.md
+
+ Title: Support access requests in Microsoft Entra ID
+description: Learn how Microsoft Support engineers can access identity diagnostic information in Microsoft Entra ID.
+ Last updated: 07/31/2023
+# About Microsoft Support access requests (preview)
+
+Microsoft Support requests are automatically assigned to a support engineer with expertise in solving similar problems. To expedite solution delivery, our support engineers use diagnostic tooling to read [identity diagnostic data](/troubleshoot/azure/active-directory/support-data-collection-diagnostic-logs) for your tenant.
+
+Microsoft Support's access to your identity diagnostic data is granted only with your approval, is read-only, and lasts only as long as we are actively working with you to solve your problem.
+
+For many support requests created in the Microsoft Entra admin center, you can manage access to your identity diagnostic data by enabling the "Allow collection of advanced diagnostic information" property. If this setting is set to "no," our support engineers must ask *you* to collect the data needed to solve your problem, which could slow down your problem resolution.
+
+## Microsoft Support access requests
+
+Sometimes support engineers need additional approval from you to access identity diagnostic data to solve your problem. For example, if a support engineer needs to access identity diagnostic data in a different Microsoft Entra tenant than the one in which you created the support request, the engineer must ask you to grant them access to that data.
+
+Microsoft Support access requests (preview) enable you to manage Microsoft Support's access to your identity diagnostic data for support requests where you cannot manage that access in the Microsoft Entra admin center's support request management experience.
+
+## Support access role permissions
+
+To manage Microsoft Support access requests, you must be assigned to a role that has full permission to manage Microsoft Entra support tickets for the tenant. This role permission is included in Azure Active Directory (Azure AD) built-in roles with the action `microsoft.azure.supportTickets/allEntities/allTasks`. You can see which Azure AD roles have this permission in the [Azure AD built-in roles](../roles/permissions-reference.md) article.
+
+Azure Active Directory is being renamed to Microsoft Entra ID. For more information see [New name for Azure Active Directory](../fundamentals/new-name.md).
+
+## Next steps
+
+- [Approve Microsoft Support access requests](how-to-approve-support-access-requests.md)
+- [Manage Microsoft Support access requests](how-to-manage-support-access-requests.md)
+- [View Microsoft Support access request logs](how-to-view-support-access-request-logs.md)
+- [Learn how Microsoft uses data for Azure support](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/)
active-directory Custom Security Attributes Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-manage.md
The following examples show how to assign a custom security attribute role to a
# [PowerShell](#tab/ms-powershell)
-[New-MgRoleManagementDirectoryRoleAssignment](/powershell/module/microsoft.graph.devicemanagement.enrolment/new-mgrolemanagementdirectoryroleassignment)
+[New-MgRoleManagementDirectoryRoleAssignment](/powershell/module/microsoft.graph.identity.governance/new-mgrolemanagementdirectoryroleassignment?view=graph-powershell-1.0)
```powershell $roleDefinitionId = "58a13ea3-c632-46ae-9ee0-9c0d43cd7f3d"
The following examples show how to assign a custom security attribute role to a
# [PowerShell](#tab/ms-powershell)
-[New-MgRoleManagementDirectoryRoleAssignment](/powershell/module/microsoft.graph.devicemanagement.enrolment/new-mgrolemanagementdirectoryroleassignment)
+[New-MgRoleManagementDirectoryRoleAssignment](/powershell/module/microsoft.graph.identity.governance/new-mgrolemanagementdirectoryroleassignment?view=graph-powershell-1.0)
```powershell $roleDefinitionId = "58a13ea3-c632-46ae-9ee0-9c0d43cd7f3d"
active-directory How To Approve Support Access Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-approve-support-access-requests.md
+
+ Title: Approve Microsoft Support access requests (preview)
+description: How to approve Microsoft Support access requests to Azure Active Directory identity data
+ Last updated: 08/10/2023
+# Approving Microsoft Support access requests (preview)
+
+In many situations, enabling the collection of **Advanced diagnostic information** during the creation of a support access request is sufficient for Microsoft Support to troubleshoot your issue. In some situations though, a separate approval may be needed to allow Microsoft Support to access your identity diagnostic data.
+
+Microsoft Support access requests (preview) enable you to [give Microsoft Support engineers access to diagnostic data](concept-support-access-requests.md) in your identity service to help solve support requests you submitted to Microsoft. You can use the Microsoft Entra admin center and the Azure Active Directory (Azure AD) portal to manage Microsoft Support access requests (preview).
+
+This article describes how the process works and how to approve Microsoft Support access requests.
+
+## Prerequisites
+
+Only authorized users in your tenant can view and manage Microsoft Support access requests. To view, approve, and reject Microsoft Support access requests, a role must have the permission `microsoft.azure.supportTickets/allEntities/allTasks`. To see which Azure AD roles have this permission, search the [Azure AD built-in roles](../roles/permissions-reference.md) for the required permission.
+
+## Scenarios and workflow
+
+A support access request may be needed when a support request is submitted to Microsoft Support from a tenant that is different from the tenant where the issue is occurring. This scenario is known as a *cross-tenant* scenario. The *resource tenant* is the tenant where the issue is occurring and the tenant where the support request was created is known as the *support request tenant*.
+
+Let's take a closer look at the workflow for this scenario:
+
+- A support request is submitted from a tenant that is different from the tenant where the issue is occurring.
+- A Microsoft Support engineer creates a support access request to access identity diagnostic data for the *resource tenant*.
+- An administrator of *both* tenants approves the Microsoft Support access request.
+- With approval, the support engineer has access to the data only in the approved *resource tenant*.
+- When the support engineer closes the support request, access to your identity data is automatically revoked.
+
+This cross-tenant scenario is the primary scenario where a support access request is necessary. In these scenarios, Microsoft approved access is visible only in the resource tenant. To preserve cross-tenant privacy, an administrator of the *support request tenant* is unable to see whether an administrator of the *resource tenant* has manually removed this approval.
+
+## View pending requests
+
+When you have a pending support access request, you can view and approve that request from a couple of places.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) and navigate to **Diagnose and solve problems**.
+
+1. Select the link from the banner message at the top of the page.
+
+ ![Screenshot of the Diagnose and solve problems page with the banner notification highlighted.](media/how-to-approve-support-access-requests/diagnose-solve-problems-banner.png)
+
+ Or scroll to the bottom of the page and select **Manage pending requests** from the **Microsoft Support Access Requests** section.
+
+ :::image type="content" source="media/how-to-approve-support-access-requests/diagnose-solve-problems-access-requests.png" alt-text="Screenshot of the Diagnose and solve problems page with the Manage pending requests link highlighted." lightbox="media/how-to-approve-support-access-requests/diagnose-solve-problems-access-requests-expanded.png":::
+
+1. Select either the **Support request ID** link or the **Review for approval** link for the request you need to approve.
+
+ ![Screenshot of the pending request with links to view details highlighted.](media/how-to-approve-support-access-requests/pending-request-view-details-links.png)
+
+## Approve or reject a support request
+
+When viewing the details of a pending support access request, you can approve or reject the request.
+
+- To approve the support access request, select the **Approve** button.
+ - Microsoft Support now has *read-only* access to your identity diagnostic data until your support request is completed.
+- To reject the support access request, select the **Reject** button.
+ - Microsoft Support does *not* have access to your identity diagnostic data.
+ - A message appears, indicating this choice may result in slower resolution of your support request.
+ - Your support engineer may ask you for the data needed to diagnose the issue, and you must collect and provide that information.
+
+![Screenshot of the Support Access requests details page with the Reject and Approve buttons highlighted.](media/how-to-approve-support-access-requests/pending-request-details.png)
+
+## Next steps
+
+- [How to create a support request](how-to-get-support.md)
+- [Manage Microsoft Support access requests](how-to-manage-support-access-requests.md)
+- [View Microsoft Support access request logs](how-to-view-support-access-request-logs.md)
+- [Learn how Microsoft uses data for Azure support](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/)
active-directory How To Manage Support Access Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-manage-support-access-requests.md
+
+ Title: Manage Microsoft Support access requests (preview)
+description: How to view and control support access requests to Azure Active Directory identity data
+ Last updated : 08/10/2023
+# Manage Microsoft Support access requests (preview)
+
+You can use the Microsoft Entra admin center and the Azure Active Directory (Azure AD) portal to manage Microsoft Support access requests (preview). Microsoft Support access requests enable you to [give Microsoft Support engineers access to identity diagnostic data](concept-support-access-requests.md) in your identity service to help solve support requests you submitted to Microsoft.
+
+## Prerequisites
+
+Only certain Azure AD roles are authorized to manage Microsoft Support access requests. The role must have the permission `microsoft.azure.supportTickets/allEntities/allTasks`. To see which Azure AD roles have this permission, search the [Azure AD built-in roles](../roles/permissions-reference.md) for that permission.
+
+## View support access requests
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) and navigate to **Diagnose and solve problems**.
+
+1. Scroll to the bottom of the page and select **Approved access** from the **Microsoft Support Access Requests** section.
+
+ :::image type="content" source="media/how-to-manage-support-access-requests/diagnose-solve-problems-access-requests.png" alt-text="Screenshot of the Diagnose and solve problems page with the Manage pending requests link highlighted." lightbox="media/how-to-manage-support-access-requests/diagnose-solve-problems-access-requests-expanded.png":::
+
+1. Select the **Support request ID** link for the request you want to view.
+
+ ![Screenshot of the pending request with links to view details highlighted.](media/how-to-manage-support-access-requests/approved-access.png)
+
+## Revoke access to an approved support access request
+
+Closing a support request automatically revokes the support engineer's access to your identity diagnostic data. You can manually revoke Microsoft Support's access to identity diagnostic data for the support request *before* your support request is closed.
+
+Select the **Remove access** button to revoke access to an approved support access request.
+
+![Screenshot of the Support access requests history with the Revoke button highlighted.](media/how-to-manage-support-access-requests/remove-approved-access.png)
+
+When your support request is closed, the status of an approved Microsoft Support access request is automatically set to **Completed**. Microsoft Support access requests remain in the **Approved access** list for 30 days.
+
+## Next steps
+
+- [Approve Microsoft Support access requests](how-to-approve-support-access-requests.md)
+- [View Microsoft Support access request logs](how-to-view-support-access-request-logs.md)
+- [Learn how Microsoft uses data for Azure support](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/)
active-directory How To View Support Access Request Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-view-support-access-request-logs.md
+
+ Title: View activity logs for Microsoft Support access requests (preview)
+description: How to view activity logs for Microsoft Support access requests.
+ Last updated : 08/10/2023
+# View activity logs for Microsoft Support access requests (preview)
+
+All activities related to Microsoft Support access requests are included in the Microsoft Entra ID audit logs. Activities can include requests from users in your tenant or an automated service. This article describes how to view the different types of activity logs.
+
+## Prerequisites
+
+To access the audit logs for a tenant, you must have one of the following roles:
+
+- Reports Reader
+- Security Reader
+- Security Administrator
+- Global Administrator
+
+## How to access the logs
+
+You can access a filtered view of audit logs for your tenant from the Microsoft Support access requests area. Select **Audit logs** from the side menu to view the audit logs with the category pre-selected.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) and navigate to **Diagnose and solve problems**.
+
+1. Scroll to the bottom of the page and select **Manage pending requests** from the **Microsoft Support Access Requests** section.
+
+1. Select **Audit logs** from the side menu.
+
+You can also access these logs from the Microsoft Entra ID Audit logs. Select **Core Directory** as the service and `MicrosoftSupportAccessManagement` as the category.
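If you prefer to pull the same filtered view programmatically, the entries are exposed through the Microsoft Graph `directoryAudits` endpoint. The sketch below is an assumption-laden example: it presumes a bearer token with the `AuditLog.Read.All` permission is already available in `token` (for example, acquired as in the earlier role-definition sketch) and that the `category` property is filterable, as it is for other audit log categories.

```python
# Sketch: query directory audit logs filtered to the support access category
# named above. Assumes `token` holds a Graph bearer token with AuditLog.Read.All.
import requests

GRAPH = "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits"
params = {
    "$filter": "category eq 'MicrosoftSupportAccessManagement'",
    "$orderby": "activityDateTime desc",
    "$top": "50",
}
resp = requests.get(GRAPH, headers={"Authorization": f"Bearer {token}"}, params=params)
resp.raise_for_status()
entries = resp.json().get("value", [])
for entry in entries:
    print(entry["activityDateTime"], entry["activityDisplayName"], entry["result"])
```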
+
+## Types of requests
+
+There are some details associated with support access request audit logs that are helpful to understand. Knowing the difference between the types of requests may help when exploring the logs.
+
+Activity logs for Microsoft Support access requests fall into two categories: user-initiated activities and automated activities.
+
+### User-initiated activities
+
+There are three user-initiated activities that you can see in your Azure AD audit logs. These are actions requested by administrators of your tenant.
+
+- Approval of a Microsoft Support access request
+- Rejection of a Microsoft Support access request
+- Manual removal of Microsoft Support access before your support request is closed
+
+### Automated requests
+
+There are three activities that can be associated with an automated or system-initiated Microsoft Support access request (a sketch for telling these apart from user-initiated entries in the raw audit data follows the list):
+
+- Creation of a Microsoft Support access *request* in the support request tenant
+- Creation of a Microsoft Support access *approval* in the resource tenant. This happens automatically after a Microsoft Support access request is approved by a user who is an administrator of both the support request tenant and the resource tenant.
+- Removal of Microsoft Support access upon closure of your support request
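When reading the raw audit entries, the `initiatedBy` property of each directory audit record is what separates the two categories described above: user-initiated activities carry a `user` initiator, while automated activities carry an app or system initiator. The small continuation below reuses the `entries` list from the earlier sketch and is illustrative only.

```python
# Sketch: separate user-initiated from automated (system-initiated) activities
# using the initiatedBy property. Assumes `entries` comes from the previous sketch.
user_initiated, automated = [], []
for entry in entries:
    initiator = entry.get("initiatedBy", {})
    if initiator.get("user"):
        user_initiated.append(entry)   # approvals, rejections, manual removals
    else:
        automated.append(entry)        # request creation, auto-removal on closure
print(f"User-initiated: {len(user_initiated)}, automated: {len(automated)}")
```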
+
+## Next steps
+
+- [Manage Microsoft Support access requests](how-to-manage-support-access-requests.md)
+- [Learn about audit logs](../../active-directory/reports-monitoring/concept-audit-logs.md)
active-directory How To Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-configure.md
Previously updated : 01/20/2023 Last updated : 08/14/2023
You can scope the agent to synchronize specific users and groups by using on-premises Active Directory groups or organizational units.
:::image type="content" source="media/how-to-configure/new-ux-configure-4.png" alt-text="Screenshot of scoping filters icon." lightbox="media/how-to-configure/new-ux-configure-4.png":::
-You can't configure groups and organizational units within a configuration.
+You can configure groups and organizational units within a configuration.
>[!NOTE]
> You cannot use nested groups with group scoping. Nested objects beyond the first level will not be included when scoping using security groups. Only use group scope filtering for pilot scenarios as there are limitations to syncing large groups.
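To see which users the first-level-only behavior would leave out of a pilot, you can compare a scoping group's direct members with its transitive members in Microsoft Graph. This is a hypothetical helper, not part of the cloud sync configuration itself; it assumes a bearer token with `GroupMember.Read.All` in `token`, and the group ID is a placeholder.

```python
# Sketch: compare direct vs. transitive user membership of a scoping group to see
# which nested users would be skipped by group scope filtering.
import requests

GROUP_ID = "22222222-2222-2222-2222-222222222222"   # placeholder group object ID
headers = {"Authorization": f"Bearer {token}"}

def user_ids(relation):
    url = f"https://graph.microsoft.com/v1.0/groups/{GROUP_ID}/{relation}?$select=id,userPrincipalName"
    ids = set()
    while url:
        page = requests.get(url, headers=headers).json()
        ids |= {m["id"] for m in page.get("value", []) if m.get("@odata.type") == "#microsoft.graph.user"}
        url = page.get("@odata.nextLink")
    return ids

direct = user_ids("members")
nested_only = user_ids("transitiveMembers") - direct
print(f"{len(nested_only)} users are reachable only through nested groups and would not be in scope")
```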
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization.md
To support temporary passwords in Azure AD for synchronized users, you can enable the *ForcePasswordChangeOnLogOn* feature.
> If the user has the option "Password never expires" set in Active Directory (AD), the force password change flag will not be set in Active Directory, so the user will not be prompted to change the password during the next sign-in.
>
> A new user created in Active Directory with the "User must change password at next logon" flag will always be provisioned in Azure AD with a password policy of "Force change password on next sign-in", irrespective of the *ForcePasswordChangeOnLogOn* feature being true or false. This is Azure AD internal logic, since the new user is provisioned without a password, whereas the *ForcePasswordChangeOnLogOn* feature only affects admin password reset scenarios.
+>
+> If a user was created in Active Directory with "User must change password at next logon" before the feature was enabled, the user will receive an error while signing in. To remediate this issue, un-check and re-check the field "User must change password at next logon" in Active Directory Users and Computers. After synchronizing the user object changes, the user will receive the expected prompt in Azure AD to update their password.
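If that remediation has to be applied to many accounts, the same un-check/re-check toggle can be scripted against the `pwdLastSet` attribute over LDAP. This is an alternative to the Active Directory Users and Computers steps described above, not the documented procedure; the server, credentials, and user DN are placeholders, and the sketch assumes the `ldap3` Python package.

```python
# Sketch: scripted equivalent of un-checking and re-checking "User must change
# password at next logon". Setting pwdLastSet to -1 clears the flag (password
# treated as set now); setting it back to 0 re-enables it.
from ldap3 import Server, Connection, MODIFY_REPLACE

server = Server("dc01.contoso.local")                       # placeholder domain controller
conn = Connection(server, user="CONTOSO\\admin", password="<password>", auto_bind=True)
user_dn = "CN=Jane Doe,OU=Staff,DC=contoso,DC=local"        # placeholder user

conn.modify(user_dn, {"pwdLastSet": [(MODIFY_REPLACE, ["-1"])]})   # un-check the flag
conn.modify(user_dn, {"pwdLastSet": [(MODIFY_REPLACE, ["0"])]})    # re-check the flag
print(conn.result)
```

As with the manual steps, the change still has to synchronize before the user sees the expected password-change prompt in Azure AD.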
> [!CAUTION]
> You should only use this feature when SSPR and Password Writeback are enabled on the tenant. This is so that if a user changes their password via SSPR, it will be synchronized to Active Directory.
active-directory Plan Hybrid Identity Design Considerations Accesscontrol Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-hybrid-identity-design-considerations-accesscontrol-requirements.md
- Title: Hybrid identity design access control requirements Azure
-description: Covers the pillars of identity, and identifying access requirements for resources for users in a hybrid environment.
- Previously updated : 01/26/2023
-# Determine access control requirements for your hybrid identity solution
-When an organization is designing its hybrid identity solution, it can also use the opportunity to review access requirements for the resources that it plans to make available to users. Data access crosses all four pillars of identity, which are:
-
-* Administration
-* Authentication
-* Authorization
-* Auditing
-
-The sections that follow cover authentication and authorization in more detail; administration and auditing are part of the hybrid identity lifecycle. Read [Determine hybrid identity management tasks](plan-hybrid-identity-design-considerations-hybrid-id-management-tasks.md) for more information about these capabilities.
-
-> [!NOTE]
-> Read [The Four Pillars of Identity - Identity Management in the Age of Hybrid IT](https://social.technet.microsoft.com/wiki/contents/articles/15530.the-four-pillars-of-identity-identity-management-in-the-age-of-hybrid-it.aspx) for more information about each one of those pillars.
->
->
-
-## Authentication and authorization
-There are different scenarios for authentication and authorization, and these scenarios have specific requirements that must be fulfilled by the hybrid identity solution that the company adopts. Scenarios involving business-to-business (B2B) communication can add an extra challenge for IT admins, since they need to ensure that the authentication and authorization method used by the organization can communicate with their business partners. During the design process for authentication and authorization requirements, ensure that the following questions are answered:
-
-* Will your organization authenticate and authorize only users located in its identity management system?
- * Are there any plans for B2B scenarios?
- * If yes, do you already know which protocols (SAML, OAuth, Kerberos, or Certificates) will be used to connect both businesses?
-* Does the hybrid identity solution that you are going to adopt support those protocols?
-
-Another important point to consider is where the authentication repository that will be used by users and partners will be located and the administrative model to be used. Consider the following two core options:
-
-* Centralized: in this model, the user's credentials, policies, and administration can be centralized on-premises or in the cloud.
-* Hybrid: in this model, the user's credentials, policies, and administration are centralized on-premises and replicated in the cloud.
-
-The model your organization adopts will vary according to its business requirements. Answer the following questions to identify where the identity management system will reside and the administrative model to use:
-
-* Does your organization currently have an identity management on-premises?
- * If yes, do they plan to keep it?
- * Are there any regulation or compliance requirements that your organization must follow that dictates where the identity management system should reside?
-* Does your organization use single sign-on for apps located on-premises or in the cloud?
- * If yes, does the adoption of a hybrid identity model affect this process?
-
-## Access Control
-While authentication and authorization are core elements that enable access to corporate data through user validation, it is also important to control the level of access that these users will have and the level of access administrators will have over the resources that they manage. Your hybrid identity solution must be able to provide granular access to resources, delegation, and role-based access control. Ensure that the following questions are answered regarding access control:
-
-* Does your company have more than one user with elevated privilege to manage your identity system?
- * If yes, does each user need the same access level?
-* Would your company need to delegate access to users to manage specific resources?
- * If yes, how frequently does this happen?
-* Would your company need to integrate access control capabilities between on-premises and cloud resources?
-* Would your company need to limit access to resources according to some conditions?
-* Would your company have any application that needs custom access control for some resources?
- * If yes, where are those apps located (on-premises or in the cloud)?
- * If yes, where are those target resources located (on-premises or in the cloud)?
-
-> [!NOTE]
-> Make sure to take notes of each answer and understand the rationale behind the answer. [Define Data Protection Strategy](plan-hybrid-identity-design-considerations-data-protection-strategy.md) will go over the options available and advantages/disadvantages of each option. By answering those questions you will select which option best suits your business needs.
->
->
-
-## Next steps
-[Determine incident response requirements](plan-hybrid-identity-design-considerations-incident-response-requirements.md)
-
-## See Also
-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
-
active-directory Plan Hybrid Identity Design Considerations Business Needs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-hybrid-identity-design-considerations-business-needs.md
- Title: Identity requirements for hybrid cloud identity design Azure
-description: Identify the company's business needs that will lead you to define the requirements for the hybrid identity design.
- Previously updated : 01/26/2023
-# Determine identity requirements for your hybrid identity solution
-The first step in designing a hybrid identity solution is to determine the requirements for the business organization that will be leveraging this solution. Hybrid identity starts as a supporting role (it supports all other cloud solutions by providing authentication) and goes on to provide new and interesting capabilities that unlock new workloads for users. These workloads or services that you wish to adopt for your users will dictate the requirements for the hybrid identity design. These services and workloads need to leverage hybrid identity both on-premises and in the cloud.
-
-You need to go over these key aspects of the business to understand what is a requirement now and what the company plans for the future. If you don't have visibility into the long-term strategy for hybrid identity design, chances are that your solution will not be scalable as the business needs grow and change. The diagram below shows an example of a hybrid identity architecture and the workloads that are being unlocked for users. This is just an example of all the new possibilities that can be unlocked and delivered with a solid hybrid identity strategy.
-
-Some components that are part of the hybrid identity architecture
-![hybrid identity architecture](./media/plan-hybrid-identity-design-considerations/hybrid-identity-architechture.png)
-
-## Determine business needs
-Each company will have different requirements; even if companies are part of the same industry, the real business requirements might vary. You can still leverage best practices from the industry, but ultimately it is the company's business needs that will lead you to define the requirements for the hybrid identity design.
-
-Make sure to answer the following questions to identify your business needs:
-
-* Is your company looking to cut IT operational cost?
-* Is your company looking to secure cloud assets (SaaS apps, infrastructure)?
-* Is your company looking to modernize your IT?
- * Are your users more mobile and demanding that IT create exceptions in your DMZ to allow different types of traffic to access different resources?
- * Does your company have legacy apps that need to be published to these modern users but are not easy to rewrite?
- * Does your company need to accomplish all these tasks and bring it under control at the same time?
-* Is your company looking to secure users' identities and reduce risk by bringing in new tools that leverage Microsoft's Azure security expertise on-premises?
-* Is your company trying to get rid of the dreaded "external" accounts on-premises and move them to the cloud where they are no longer a dormant threat inside your on-premises environment?
-
-## Analyze on-premises identity infrastructure
-Now that you have an idea regarding your company business requirements, you need to evaluate your on-premises identity infrastructure. This evaluation is important for defining the technical requirements to integrate your current identity solution to the cloud identity management system. Make sure to answer the following questions:
-
-* What authentication and authorization solution does your company use on-premises?
-* Does your company currently have any on-premises synchronization services?
-* Does your company use any third-party Identity Providers (IdP)?
-
-You also need to be aware of the cloud services that your company might have. Performing an assessment to understand the current integration with SaaS, IaaS or PaaS models in your environment is very important. Make sure to answer the following questions during this assessment:
-
-* Does your company have any integration with a cloud service provider?
-* If yes, which services are being used?
-* Is this integration currently in production or is it a pilot?
-
-> [!NOTE]
-> Cloud Discovery analyzes your traffic logs against the Microsoft Defender for Cloud Apps catalog of over 16,000 cloud apps that are ranked and scored based on more than 70 risk factors, to provide you with ongoing visibility into cloud use, Shadow IT, and the risk Shadow IT poses to your organization. To get started, see [Set up Cloud Discovery](/cloud-app-security/set-up-cloud-discovery).
->
->
-
-## Evaluate identity integration requirements
-Next, you need to evaluate the identity integration requirements. This evaluation is important to define the technical requirements for how users will authenticate, how the organization's presence will look in the cloud, how the organization will allow authorization, and what the user experience is going to be. Make sure to answer the following questions:
-
-* Will your organization be using federation, standard authentication or both?
-* Is federation a requirement? Because of the following:
- * Kerberos-based SSO
- * Your company has on-premises applications (either built in-house or third-party) that use SAML or similar federation capabilities.
- * MFA via smart cards, RSA SecurID, etc.
- * Client access rules that address the questions below:
- 1. Can I block all external access to Microsoft 365 based on the IP address of the client?
- 2. Can I block all external access to Microsoft 365, except Exchange ActiveSync?
- 3. Can I block all external access to Microsoft 365, except for browser-based apps (OWA, SPO)?
- 4. Can I block all external access to Microsoft 365 for members of designated AD groups?
-* Security/auditing concerns
-* Already existing investment in federated authentication
-* What name will our organization use for our domain in the cloud?
-* Does the organization have a custom domain?
- 1. Is that domain public and easily verifiable via DNS?
- 2. If it is not, then do you have a public domain that can be used to register an alternate UPN in AD?
-* Are the user identifiers consistent for cloud representation?
-* Does the organization have apps that require integration with cloud services?
-* Does the organization have multiple domains and will they all use standard or federated authentication?
-
-## Evaluate applications that run in your environment
-Now that you have an idea regarding your on-premises and cloud infrastructure, you need to evaluate the applications that run in these environments. This evaluation is important to define the technical requirements to integrate these applications to the cloud identity management system. Make sure to answer the following questions:
-
-* Where will our applications live?
-* Will users be accessing on-premises applications? In the cloud? Or both?
-* Are there plans to take the existing application workloads and move them to the cloud?
-* Are there plans to develop new applications that will reside either on-premises or in the cloud that will use cloud authentication?
-
-## Evaluate user requirements
-You also have to evaluate the user requirements. This evaluation is important to define the steps that will be needed for on-boarding and assisting users as they transition to the cloud. Make sure to answer the following questions:
-
-* Will users be accessing applications on-premises?
-* Will users be accessing applications in the cloud?
-* How do users typically login to their on-premises environment?
-* How will users sign-in to the cloud?
-
-> [!NOTE]
-> Make sure to take notes of each answer and understand the rationale behind the answer. [Determine incident response requirements](plan-hybrid-identity-design-considerations-incident-response-requirements.md) will go over the options available and pros/cons of each option. By having answered those questions you will select which option best suits your business needs.
->
->
-
-## Next steps
-[Determine directory synchronization requirements](plan-hybrid-identity-design-considerations-directory-sync-requirements.md)
-
-## See also
-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
active-directory Plan Hybrid Identity Design Considerations Contentmgt Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-hybrid-identity-design-considerations-contentmgt-requirements.md
- Title: Hybrid identity design - content management requirements Azure
-description: Provides insight into how to determine the content management requirements of your business. Usually when a user has their own device, they might also have multiple credentials that alternate according to the application that they use. It is important to differentiate what content was created using personal credentials versus content created using corporate credentials. Your identity solution should be able to interact with cloud services to provide a seamless experience to the end user while ensuring their privacy and increasing protection against data leakage.
- Previously updated : 01/26/2023
-# Determine content management requirements for your hybrid identity solution
-Understanding the content management requirements for your business may directly affect your decision on which hybrid identity solution to use. With the proliferation of multiple devices and the capability of users to bring their own devices ([BYOD](/mem/intune/fundamentals/byod-technology-decisions)), the company must protect its own data, but it also must keep users' privacy intact. Usually when a user has their own device, they might also have multiple credentials that alternate according to the application that they use. It is important to differentiate what content was created using personal credentials versus content created using corporate credentials. Your identity solution should be able to interact with cloud services to provide a seamless experience to the end user while ensuring their privacy and increasing protection against data leakage.
-
-Your identity solution will be leveraged by different technical controls in order to provide content management as shown in the figure below:
-
-![security controls](./media/plan-hybrid-identity-design-considerations/securitycontrols.png)
-
-**Security controls that will be leveraging your identity management system**
-
-In general, content management requirements will leverage your identity management system in the following areas:
-
-* Privacy: identifying the user that owns a resource and applying the appropriate controls to maintain integrity.
-* Data classification: identifying the user or group and the level of access to an object according to its classification.
-* Data leakage protection: security controls responsible for protecting data against leakage need to interact with the identity system to validate the user's identity. This is also important for audit trail purposes.
-
-> [!NOTE]
-> Read [data classification for cloud readiness](https://download.microsoft.com/download/0/A/3/0A3BE969-85C5-4DD2-83B6-366AA71D1FE3/Data-Classification-for-Cloud-Readiness.pdf) for more information about best practices and guidelines for data classification.
->
->
-
-When planning your hybrid identity solution, ensure that the following questions are answered according to your organization's requirements:
-
-* Does your company have security controls in place to enforce data privacy?
- * If yes, will the security controls be able to integrate with the hybrid identity solution that you are going to adopt?
-* Does your company use data classification?
- * If yes, is the current solution able to integrate with the hybrid identity solution that you are going to adopt?
-* Does your company currently have any solution for data leakage?
- * If yes, is the current solution able to integrate with the hybrid identity solution that you are going to adopt?
-* Does your company need to audit access to resources?
- * If yes, what type of resources?
- * If yes, what level of information is necessary?
- * If yes, where the audit log must reside? On-premises or in the cloud?
-* Does your company need to encrypt any emails that contain sensitive data (SSNs, credit card numbers, etc.)?
-* Does your company need to encrypt all documents/contents shared with external business partners?
-* Does your company need to enforce corporate policies on certain kinds of emails (do not reply all, do not forward)?
-
-> [!NOTE]
-> Make sure to take notes of each answer and understand the rationale behind the answer. [Define Data Protection Strategy](plan-hybrid-identity-design-considerations-data-protection-strategy.md) will go over the options available and advantages/disadvantages of each option. By having answered those questions you will select which option best suits your business needs.
->
->
-
-## Next steps
-[Determine access control requirements](plan-hybrid-identity-design-considerations-accesscontrol-requirements.md)
-
-## See Also
-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
active-directory Plan Hybrid Identity Design Considerations Data Protection Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-hybrid-identity-design-considerations-data-protection-strategy.md
- Title: Hybrid identity design - data protection strategy Azure
-description: You define the data protection strategy for your hybrid identity solution to meet the business requirements that you defined.
- Previously updated : 01/19/2023
-# Define data protection strategy for your hybrid identity solution
-In this task, you'll define the data protection strategy for your hybrid identity solution to meet the business requirements that you defined in:
-
-* [Determine data protection requirements](plan-hybrid-identity-design-considerations-dataprotection-requirements.md)
-* [Determine content management requirements](plan-hybrid-identity-design-considerations-contentmgt-requirements.md)
-* [Determine access control requirements](plan-hybrid-identity-design-considerations-accesscontrol-requirements.md)
-* [Determine incident response requirements](plan-hybrid-identity-design-considerations-incident-response-requirements.md)
-
-## Define data protection options
-As explained in [Determine directory synchronization requirements](plan-hybrid-identity-design-considerations-directory-sync-requirements.md), Microsoft Azure AD can synchronize with your on-premises Active Directory Domain Services (AD DS). This integration lets organizations use Azure AD to verify users' credentials when they are trying to access corporate resources. You can do this for both scenarios: data at rest on-premises and in the cloud. Access to data in Azure AD requires user authentication via a security token service (STS).
-
-Once authenticated, the user principal name (UPN) is read from the authentication token. Then, the authorization system determines the replicated partition and container corresponding to the user's domain. Information on the user's existence, enabled state, and role then helps the authorization system determine whether access to the target tenant is authorized for the user in that session. Certain authorized actions (specifically, create user and password reset) create an audit trail that a tenant administrator then uses to manage compliance efforts or investigations.
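For troubleshooting, the claims that drive this authorization step can be inspected directly from a token. The sketch below assumes the PyJWT package and a token string obtained elsewhere; it skips signature validation, so treat it purely as an inspection aid, never as an authorization check.

```python
# Sketch: inspect the UPN and tenant claims inside an Azure AD token.
# Assumes `raw_token` holds a token obtained elsewhere; signature is NOT verified.
import jwt

claims = jwt.decode(raw_token, options={"verify_signature": False})
print("UPN:      ", claims.get("upn") or claims.get("preferred_username"))
print("Object ID:", claims.get("oid"))
print("Tenant ID:", claims.get("tid"))
```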
-
-Moving data from your on-premises datacenter into Azure Storage over an Internet connection may not always be feasible due to data volume, bandwidth availability, or other considerations. The [Azure Storage Import/Export Service](../../../import-export/storage-import-export-service.md) provides a hardware-based option for placing/retrieving large volumes of data in blob storage. It allows you to send [BitLocker-encrypted](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn306081(v=ws.11)#BKMK_BL2012R2) hard disk drives directly to an Azure datacenter where cloud operators upload the contents to your storage account, or they can download your Azure data to your drives to return to you. Only encrypted disks are accepted for this process (using a BitLocker key generated by the service itself during the job setup). The BitLocker key is provided to Azure separately, thus providing out of band key sharing.
-
-Since data in transit can occur in different scenarios, it is also relevant to know that Microsoft Azure uses [virtual networking](../../../virtual-network/index.yml) to isolate tenants' traffic from one another, employing measures such as host- and guest-level firewalls, IP packet filtering, port blocking, and HTTPS endpoints. However, most of Azure's internal communications, including infrastructure-to-infrastructure and infrastructure-to-customer (on-premises), are also encrypted. Another important scenario is communications within Azure datacenters; Microsoft manages networks to assure that no VM can impersonate or eavesdrop on the IP address of another. TLS/SSL is used when accessing Azure Storage or SQL Databases, or when connecting to Cloud Services. In this case, the customer administrator is responsible for obtaining a TLS/SSL certificate and deploying it to their tenant infrastructure. Data traffic moving between virtual machines in the same deployment, or between tenants in a single deployment via Microsoft Azure Virtual Network, can be protected through encrypted communication protocols such as HTTPS, SSL/TLS, or others.
-
-Depending on how you answered the questions in [Determine data protection requirements](plan-hybrid-identity-design-considerations-dataprotection-requirements.md), you should be able to determine how you want to protect your data and how the hybrid identity solution can assist you with that process. The following table shows the options supported by Azure that are available for each data protection scenario.
-
-| Data protection options | At rest in the cloud | At rest on-premises | In transit |
-| | | | |
-| BitLocker Drive Encryption |X |X | |
-| SQL Server to encrypt databases |X |X | |
-| VM-to-VM Encryption | | |X |
-| SSL/TLS | | |X |
-| VPN | | |X |
-
-> [!NOTE]
-> Read [Compliance by Feature](https://azure.microsoft.com/support/trust-center/services/) at [Microsoft Azure Trust Center](https://azure.microsoft.com/support/trust-center/) to know more about the certifications that each Azure service is compliant with.
-> Since the options for data protection use a multilayer approach, comparison between those options are not applicable for this task. Ensure that you are leveraging all options available for each state of the data.
->
->
-
-## Define content management options
-
-One advantage of using Azure AD to manage a hybrid identity infrastructure is that the process is fully transparent from the end user's perspective. The user tries to access a shared resource, the resource requires authentication, and the user sends an authentication request to Azure AD to obtain the token and access the resource. This entire process happens in the background, without user interaction.
-
-Organizations that are concerned about data privacy usually require data classification for their solution. If their current on-premises infrastructure is already using data classification, it is possible to use Azure AD as the main repository for the user's identity. A common tool that is used on-premises for data classification is the [Data Classification Toolkit](/previous-versions/tn-archive/hh204743(v=technet.10)) for Windows Server 2012 R2. This tool can help to identify, classify, and protect data on file servers in your private cloud. It is also possible to use the [Automatic File Classification](/windows-server/identity/solution-guides/deploy-automatic-file-classification--demonstration-steps-) in Windows Server 2012 to accomplish this task.
-
-If your organization doesn't have data classification in place but needs to protect sensitive files without adding new servers on-premises, it can use Microsoft [Azure Rights Management Service](/azure/information-protection/what-is-azure-rms). Azure RMS uses encryption, identity, and authorization policies to help secure your files and email, and it works across multiple devices: phones, tablets, and PCs. Because Azure RMS is a cloud service, there's no need to explicitly configure trusts with other organizations before you can share protected content with them. If they already have a Microsoft 365 or an Azure AD directory, collaboration across organizations is automatically supported. You can also synchronize just the directory attributes that Azure RMS needs to support a common identity for your on-premises Active Directory accounts, by using Azure Active Directory Synchronization Services (Azure AD Sync) or Azure AD Connect.
-
-A vital part of content management is understanding who is accessing which resource; therefore, a rich logging capability is important for the identity management solution. Azure AD provides logs over 30 days, including:
-
-* Changes in role membership (ex: user added to Global Administrator role)
-* Credential updates (ex: password changes)
-* Domain management (ex: verifying a custom domain, removing a domain)
-* Adding or removing applications
-* User management (ex: adding, removing, updating a user)
-* Adding or removing licenses
-
-> [!NOTE]
-> Read [Microsoft Azure Security and Audit Log Management](https://download.microsoft.com/download/B/6/C/B6C0A98B-D34A-417C-826E-3EA28CDFC9DD/AzureSecurityandAuditLogManagement_11132014.pdf) to know more about logging capabilities in Azure.
-> Depending on how you answered the questions in [Determine content management requirements](plan-hybrid-identity-design-considerations-contentmgt-requirements.md), you should be able to determine how you want the content to be managed in your hybrid identity solution. While all options exposed in Table 6 are capable of integrating with Azure AD, it is important to define which is more appropriate for your business needs.
->
->
-
-| Content management options | Advantages | Disadvantages |
-| | | |
-| Centralized on-premises (Active Directory Rights Management Server) |Full control over the server infrastructure responsible for classifying the data <br> Built-in capability in Windows Server, no need for extra license or subscription <br> Can be integrated with Azure AD in a hybrid scenario <br> Supports information rights management (IRM) capabilities in Microsoft Online services such as Exchange Online and SharePoint Online, as well as Microsoft 365 <br> Supports on-premises Microsoft server products, such as Exchange Server, SharePoint Server, and file servers that run Windows Server and File Classification Infrastructure (FCI). |Higher maintenance (keep up with updates, configuration, and potential upgrades), since IT owns the server <br> Requires a server infrastructure on-premises <br> Doesn't leverage Azure capabilities natively |
-| Centralized in the cloud (Azure RMS) |Easier to manage compared to the on-premises solution <br> Can be integrated with AD DS in a hybrid scenario <br> Fully integrated with Azure AD <br> Doesn't require a server on-premises in order to deploy the service <br> Supports on-premises Microsoft server products such as Exchange Server, SharePoint Server, and file servers that run Windows Server and File Classification Infrastructure (FCI) <br> IT can have complete control over their tenant's key with BYOK capability. |Your organization must have a cloud subscription that supports RMS <br> Your organization must have an Azure AD directory to support user authentication for RMS |
-| Hybrid (Azure RMS integrated with On-Premises Active Directory Rights Management Server) |This scenario accumulates the advantages of both centralized on-premises and in the cloud. |Your organization must have a cloud subscription that supports RMS <br> Your organization must have an Azure AD directory to support user authentication for RMS <br> Requires a connection between the Azure cloud service and on-premises infrastructure |
-
-## Define access control options
-By leveraging the authentication, authorization and access control capabilities available in Azure AD you can enable your company to use a central identity repository while allowing users and partners to use single sign-on (SSO) as shown in the following figure:
-
-![centralized management](./media/plan-hybrid-identity-design-considerations/centralized-management.png)
-
-Centralized management and full integration with other directories
-
-Azure Active Directory provides single sign-on to thousands of SaaS applications and on-premises web applications. See the [Azure Active Directory federation compatibility list: third-party identity providers that can be used to implement single sign-on](how-to-connect-fed-compatibility.md) article for more details about the third-party SSO providers that were tested by Microsoft. This capability enables organizations to implement a variety of B2B scenarios while keeping control of identity and access management. However, during the B2B design process, it is important to understand the authentication method that is used by the partner and validate whether this method is supported by Azure. Currently, the following methods are supported by Azure AD:
-
-* Security Assertion Markup Language (SAML)
-* OAuth
-* Kerberos
-* Tokens
-* Certificates
-
-> [!NOTE]
-> Read [Azure Active Directory Authentication Protocols](/previous-versions/azure/dn151124(v=azure.100)) to learn more about each protocol and its capabilities in Azure.
->
->
-
-With Azure AD support, mobile business applications can use the same easy Mobile Services authentication experience to allow employees to sign in to their mobile applications with their corporate Active Directory credentials. With this feature, Azure AD is supported as an identity provider in Mobile Services alongside the other identity providers already supported (which include Microsoft Accounts, Facebook ID, Google ID, and Twitter ID). If the on-premises apps use the user's credentials located in the company's AD DS, access from partners and users coming from the cloud should be transparent. You can manage users' Conditional Access control to (cloud-based) web applications, web APIs, Microsoft cloud services, third-party SaaS applications, and native (mobile) client applications, and have the benefits of security, auditing, and reporting all in one place. However, it is recommended to validate the implementation in a non-production environment or with a limited number of users.
-
-> [!TIP]
-> It is important to mention that Azure AD does not have Group Policy as AD DS does. To enforce policy for devices, you need a mobile device management solution such as [Microsoft Intune](/mem/intune/).
->
->
-
-Once the user is authenticated using Azure AD, it is important to evaluate the level of access that the user has. The level of access that the user has over a resource can vary. While Azure AD can add an additional security layer by controlling access to some resources, keep in mind that the resource itself can also have its own access control list separately, such as the access control for files located in a File Server. The following figure summarizes the levels of access control that you can have in a hybrid scenario:
-
-![access control](./media/plan-hybrid-identity-design-considerations/accesscontrol.png)
-
-Each interaction in the diagram shown above represents one access control scenario that can be covered by Azure AD. Below is a description of each scenario:
-
-1. Conditional Access to applications that are hosted on-premises: You can use registered devices with access policies for applications that are configured to use AD FS with Windows Server 2012 R2.
-
-2. Access Control to the Azure portal: Azure also lets you control access to the portal by using Azure role-based access control (Azure RBAC). This method enables the company to restrict the number of operations that an individual can do in the Azure portal. By using Azure RBAC to control access to the portal, IT Admins can delegate access by using the following access management approaches:
-
- - Group-based role assignment: You can assign access to Azure AD groups that can be synced from your local Active Directory. This lets you leverage the existing investments that your organization has made in tooling and processes for managing groups. You can also use the delegated group management feature of Azure AD Premium.
- - Use built-in roles in Azure: You can use three roles (Owner, Contributor, and Reader) to ensure that users and groups have permission to do only the tasks they need to do their jobs.
- - Granular access to resources: You can assign roles to users and groups for a particular subscription, resource group, or an individual Azure resource such as a website or database. In this way, you can ensure that users have access to all the resources they need and no access to resources that they do not need to manage.
-
- > [!NOTE]
- > If you are building applications and want to customize the access control for them, it is also possible to use Azure AD Application Roles for authorization. Review this [WebApp-RoleClaims-DotNet example](https://github.com/AzureADSamples/WebApp-RoleClaims-DotNet) on how to build your app to use this capability.
--
-3. Conditional Access for Microsoft 365 applications with Microsoft Intune: IT admins can provision Conditional Access device policies to secure corporate resources, while at the same time allowing information workers on compliant devices to access the services.
-
-4. Conditional Access for SaaS apps: [This feature](https://cloudblogs.microsoft.com/enterprisemobility/2015/06/25/azure-ad-conditional-access-preview-update-more-apps-and-blocking-access-for-users-not-at-work/) allows you to configure per-application multi-factor authentication access rules and the ability to block access for users not on a trusted network. You can apply the multi-factor authentication rules to all users that are assigned to the application, or only to users within specified security groups. Users may be excluded from the multi-factor authentication requirement if they are accessing the application from an IP address that is inside the organization's network.
-
-Since the options for access control use a multilayer approach, comparison between those options is not applicable for this task. Ensure that you are leveraging all options available for each scenario that requires you to control access to your resources.
-
-## Define incident response options
-Azure AD can assist IT in identifying potential security risks in the environment by monitoring user activity. IT can use Azure AD Access and Usage reports to gain visibility into the integrity and security of your organization's directory. With this information, an IT admin can better determine where possible security risks may lie so that they can adequately plan to mitigate those risks. An [Azure AD Premium subscription](../../fundamentals/active-directory-get-started-premium.md) has a set of security reports that can enable IT to obtain this information. [Azure AD reports](../../reports-monitoring/overview-reports.md) are categorized as follows:
-
-* **Anomaly reports**: Contain sign-in events that were found to be anomalous. The goal is to make you aware of such activity and enable you to make a determination about whether an event is suspicious.
-* **Integrated Application report**: Provides insights into how cloud applications are being used in your organization. Azure Active Directory offers integration with thousands of cloud applications.
-* **Error reports**: Indicate errors that may occur when provisioning accounts to external applications.
-* **User-specific reports**: Display device/sign in activity data for a specific user.
-* **Activity logs**: Contain a record of all audited events within the last 24 hours, last 7 days, or last 30 days, as well as group activity changes, and password reset and registration activity.
-
-> [!TIP]
-> Another report that can also help the Incident Response team working on a case is the [user with leaked credentials](https://cloudblogs.microsoft.com/enterprisemobility/2015/06/15/azure-active-directory-premium-reporting-now-detects-leaked-credentials/) report. This report surfaces any matches between the leaked credentials list and your tenant.
->
--
-Other important built-in reports in Azure AD that can be used during an incident response investigation are:
-
-* **Password reset activity**: provides the admin with insights into how actively password reset is being used in the organization.
-* **Password reset registration activity**: provides insights into which users have registered their methods for password reset, and which methods they have selected.
-* **Group activity**: provides a history of changes to the group (ex: users added or removed) that were initiated in the Access Panel.
-
-In addition to the core reporting capability of Azure AD Premium that you can use during an Incident Response investigation process, IT can also take advantage of the Audit Report to obtain information such as:
-
-* Changes in role membership (for example, user added to Global Administrator role)
-* Credential updates (for example, password changes)
-* Domain management (for example, verifying a custom domain, removing a domain)
-* Adding or removing applications
-* User management (for example, adding, removing, updating a user)
-* Adding or removing licenses
-
-Since the options for incident response use a multilayer approach, comparison between those options is not applicable for this task. Ensure that you are leveraging all options available for each scenario that requires you to use Azure AD reporting capability as part of your company's incident response process.
-
-## Next steps
-[Determine hybrid identity management tasks](plan-hybrid-identity-design-considerations-hybrid-id-management-tasks.md)
-
-## See Also
-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
active-directory Plan Hybrid Identity Design Considerations Dataprotection Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-hybrid-identity-design-considerations-dataprotection-requirements.md
- Title: Hybrid identity design - data protection requirements Azure
-description: When planning your hybrid identity solution, identify the data protection requirements for your business and which options are available to best fulfill these requirements.
- Previously updated : 01/26/2023
-# Plan for enhancing data security through a strong identity solution
-The first step in protecting data is to identify who can access that data. Also, you need to have an identity solution that can integrate with your system to provide authentication and authorization capabilities. Authentication and authorization are often confused with each other and their roles misunderstood. In reality, they are different, as shown in the figure below:
-
-![mobile device lifecycle](./media/plan-hybrid-identity-design-considerations/mobile-devicemgt-lifecycle.png)
-
-**Mobile device management lifecycle stages**
-
-When planning your hybrid identity solution, you must understand the data protection requirements for your business and which options are available to best fulfil these requirements.
-
-> [!NOTE]
-> Once you finish planning for data security, review [Determine multi-factor authentication requirements](plan-hybrid-identity-design-considerations-multifactor-auth-requirements.md) to ensure that your selections regarding multi-factor authentication requirements were not affected by the decisions you made in this section.
->
->
-
-## Determine data protection requirements
-In the age of mobility, most companies have a common goal: enable their users to be productive on their mobile devices, while on-premises, or remotely from anywhere in order to increase productivity. Companies that have such requirements will also be concerned about the number of threats that must be mitigated in order to keep the company's data secure and maintain users' privacy. Each company might have different requirements in this regard; different compliance rules, which vary according to the industry in which the company operates, will lead to different design decisions.
-
-However, there are some security aspects that should be explored and validated, regardless of the industry.
-
-## Data protection paths
-![data protection paths](./media/plan-hybrid-identity-design-considerations/data-protection-paths.png)
-
-**Data protection paths**
-
-In the above diagram, the identity component will be the first one to be verified before data is accessed. However, this data can be in different states during the time it was accessed. Each number on this diagram represents a path in which data can be located at some point in time. These numbers are explained below:
-
-1. Data protection at the device level.
-2. Data protection while in transit.
-3. Data protection while at rest on-premises.
-4. Data protection while at rest in the cloud.
-
-The hybrid identity solution must be capable of leveraging both on-premises and cloud identity management resources to identify the user before it grants access to the data. When planning your hybrid identity solution, ensure that the following questions are answered according to your organization's requirements:
-
-## Data protection at rest
-Regardless of where the data is at rest (device, cloud or on-premises), it is important to perform an assessment to understand the organization needs in this regard. For this area, ensure that the following questions are asked:
-
-* Does your company need to protect data at rest?
- * If yes, is the hybrid identity solution able to integrate with your current on-premises infrastructure?
- * If yes, is the hybrid identity solution able to integrate with your workloads located in the cloud?
-* Is the cloud identity management able to protect the user's credentials and other data stored in the cloud?
-
-## Data protection in transit
-Data in transit between the device and the datacenter or between the device and the cloud must be protected. However, being in transit does not necessarily mean a communications process with a component outside of your cloud service; data also moves internally, such as between two virtual networks. For this area, ensure that the following questions are asked:
-
-* Does your company need to protect data in transit?
- * If yes, is the hybrid identity solution able to integrate with secure controls such as SSL/TLS?
-* Does the cloud identity management keep the traffic to and within the directory store (within and between datacenters) signed?
-
-## Compliance
-Regulations, laws, and regulatory compliance requirements will vary according to the industry to which your company belongs. Companies in highly regulated industries must address identity-management concerns related to compliance issues. Regulations such as Sarbanes-Oxley (SOX), the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA), and the Payment Card Industry Data Security Standard (PCI DSS) are strict regarding identity and access. The hybrid identity solution that your company adopts must have the core capabilities that will fulfill the requirements of one or more of these regulations. For this area, ensure that the following questions are asked:
-
-* Is the hybrid identity solution compliant with the regulatory requirements for your business?
-* Does the hybrid identity solution have built-in capabilities that will enable your company to be compliant with regulatory requirements?
-
-> [!NOTE]
-> Make sure to take notes of each answer and understand the rationale behind the answer. [Define Data Protection Strategy](plan-hybrid-identity-design-considerations-data-protection-strategy.md) will go over the options available and advantages/disadvantages of each option. By having answered those questions you will select which option best suits your business needs.
->
->
-
-## Next steps
- [Determine content management requirements](plan-hybrid-identity-design-considerations-contentmgt-requirements.md)
-
-## See Also
-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
-
active-directory Plan Hybrid Identity Design Considerations Directory Sync Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-hybrid-identity-design-considerations-directory-sync-requirements.md
- Title: Hybrid identity design - directory sync requirements Azure
-description: Identify the requirements needed for synchronizing all the users between on-premises and cloud for the enterprise.
-Previously updated: 01/19/2023
-# Determine directory synchronization requirements
-Synchronization is all about providing users an identity in the cloud based on their on-premises identity. Whether they use a synchronized account for authentication or federated authentication, users still need to have an identity in the cloud. This identity needs to be maintained and updated periodically. The updates can take many forms, from title changes to password changes.
-
-Start by evaluating the organization's on-premises identity solution and user requirements. This evaluation is important to define the technical requirements for how user identities will be created and maintained in the cloud. For most organizations, Active Directory is on-premises and is the directory that users will be synchronized from; however, in some cases this won't be the case.
-
-Make sure to answer the following questions:
-
-* Do you have one AD forest, multiple, or none?
-
- * How many Azure AD directories will you be synchronizing to?
-
- 1. Are you using filtering?
- 2. Do you have multiple Azure AD Connect servers planned?
-* Do you currently have a synchronization tool on-premises?
-
- * If yes, do your users have a virtual directory/integration of identities?
-* Do you have any other directory on-premises that you want to synchronize (e.g. LDAP Directory, HR database, etc)?
- * Are you going to be doing any GALSync?
- * What is the current state of UPNs in your organization?
- * Do you have a different directory that users authenticate against?
- * Does your company use Microsoft Exchange?
- * Do they plan on having a hybrid Exchange deployment?
-
-Now that you have an idea about your synchronization requirements, you need to determine which tool is the correct one to meet these requirements. Microsoft provides several tools to accomplish directory integration and synchronization. See the [Hybrid Identity directory integration tools comparison table](plan-hybrid-identity-design-considerations-tools-comparison.md) for more information.
-
-Now that you have your synchronization requirements and the tool that will accomplish this for your company, you need to evaluate the applications that use these directory services. This evaluation is important to define the technical requirements to integrate these applications to the cloud. Make sure to answer the following questions:
-
-* Will these applications be moved to the cloud and use the directory?
-* Are there special attributes that need to be synchronized to the cloud so these applications can use them successfully?
-* Will these applications need to be re-written to take advantage of cloud auth?
-* Will these applications continue to live on-premises while users access them using the cloud identity?
-
-You also need to determine the security requirements and constraints for directory synchronization. This evaluation is important to get a list of the requirements needed to create and maintain users' identities in the cloud. Make sure to answer the following questions:
-
-* Where will the synchronization server be located?
-* Will it be domain joined?
-* Will the server be located on a restricted network behind a firewall, such as a DMZ?
- * Will you be able to open the required firewall ports to support synchronization?
-* Do you have a disaster recovery plan for the synchronization server?
-* Do you have an account with the correct permissions for all forests you want to sync with?
- * If your company doesn't know the answer to this question, review the section "Permissions for password synchronization" in the article [Install the Azure Active Directory Sync Service](/previous-versions/azure/azure-services/dn757602(v=azure.100)#BKMK_CreateAnADAccountForTheSyncService) and determine if you already have an account with these permissions or if you need to create one.
-* If you have multi-forest sync, is the sync server able to reach each forest?
-
-> [!NOTE]
-> Make sure to take notes of each answer and understand the rationale behind it. [Determine incident response requirements](plan-hybrid-identity-design-considerations-incident-response-requirements.md) goes over the options available. Having answered these questions, you'll be able to select the option that best suits your business needs.
->
->
-
-## Next steps
-[Determine multi-factor authentication requirements](plan-hybrid-identity-design-considerations-multifactor-auth-requirements.md)
-
-## See also
-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
active-directory Plan Hybrid Identity Design Considerations Hybrid Id Management Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-hybrid-identity-design-considerations-hybrid-id-management-tasks.md
- Title: Hybrid identity design - management tasks Azure
-description: Azure AD checks the specific conditions you pick when authenticating the user and before allowing access to the application with Conditional Access control.
-Previously updated: 01/19/2023
-# Plan for Hybrid Identity Lifecycle
-Identity is one of the foundations of your enterprise mobility and application access strategy. Whether you are signing on to your mobile device or SaaS app, your identity is the key to gaining access to everything. At its highest level, an identity management solution encompasses unifying and syncing between your identity repositories, which includes automating and centralizing the process of provisioning resources. The identity solution should be a centralized identity across on-premises and cloud and also use some form of identity federation to maintain centralized authentication and securely share and collaborate with external users and businesses. Resources range from operating systems and applications to people in, or affiliated with, an organization. Organizational structure can be altered to accommodate the provisioning policies and procedures.
-
-It is also important to have an identity solution geared to empower your users by providing them with self-service experiences to keep them productive. Your identity solution is more robust if it enables single sign-on for users across all the resources they need access to. Administrators at all levels can use standardized procedures for managing user credentials. Some levels of administration can be reduced or eliminated, depending on the breadth of the provisioning management solution. Furthermore, you can securely distribute administration capabilities, manually or automatically, among various organizations. For example, a domain administrator can serve only the people and resources in that domain. This user can do administrative and provisioning tasks, but is not authorized to do configuration tasks, such as creating workflows.
-
-## Determine Hybrid Identity Management Tasks
-Distributing administrative tasks in your organization improves the accuracy and effectiveness of administration and improves the balance of the workload of an organization. Following are the pivots that define a robust identity management system.
-
- ![identity management considerations](./media/plan-hybrid-identity-design-considerations/Identity_management_considerations.png)
-
-To define hybrid identity management tasks, you must understand some essential characteristics of the organization that will be adopting hybrid identity. It is important to understand the current repositories being used as identity sources. By knowing those core elements, you have the foundational requirements; based on those, you need to ask more granular questions that lead you to a better design decision for your identity solution.
-
-While defining those requirements, ensure that at least the following questions are answered:
-
-* Provisioning options:
-
- * Does the hybrid identity solution support a robust account access management and provisioning system?
- * How are users, groups, and passwords going to be managed?
- * Is the identity lifecycle management responsive?
- * How long do password updates and account suspensions take?
-* License management:
-
- * Does the hybrid identity solution handle license management?
- * If yes, what capabilities are available?
- * Does the solution handle group-based license management?
-
- * If yes, is it possible to assign a security group to it?
- * If yes, will the cloud directory automatically assign licenses to all the members of the group?
- * If a user is subsequently added to or removed from the group, will a license be automatically assigned or removed as appropriate?
-* Integration with other third-party identity providers:
- * Can this hybrid solution be integrated with third-party identity providers to implement single sign-on?
- * Is it possible to unify all the different identity providers into a cohesive identity system?
- * If yes, how, which providers are they, and what capabilities are available?
-
-## Synchronization Management
-One of the goals of an identity manager is to bring together all the identity providers and keep them synchronized. You keep the data synchronized based on an authoritative master identity provider. In a hybrid identity scenario, with a synchronized management model, you manage all user and device identities in an on-premises server and synchronize the accounts and, optionally, passwords to the cloud. The user enters the same password on-premises as they do in the cloud, and at sign-in, the password is verified by the identity solution. This model uses a directory synchronization tool.
-
-![directory sync](./media/plan-hybrid-identity-design-considerations/Directory_synchronization.png)
-To properly design the synchronization of your hybrid identity solution, ensure that the following questions are answered:
-* What are the sync solutions available for the hybrid identity solution?
-* What are the single sign-on capabilities available?
-* What are the options for identity federation between B2B and B2C?
-
-## Next steps
-[Determine hybrid identity management adoption strategy](plan-hybrid-identity-design-considerations-lifecycle-adoption-strategy.md)
-
-## See Also
-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
-
active-directory Plan Hybrid Identity Design Considerations Identity Adoption Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-hybrid-identity-design-considerations-identity-adoption-strategy.md
- Title: Hybrid identity design - adoption strategy Azure
-description: With Conditional Access control, Azure AD checks the specific conditions you pick when authenticating the user and before allowing access to the application.
-Previously updated: 01/27/2023
-# Define a hybrid identity adoption strategy
-In this task, you define the hybrid identity adoption strategy for your hybrid identity solution to meet the business requirements that were discussed in:
-
-* [Determine business needs](plan-hybrid-identity-design-considerations-business-needs.md)
-* [Determine directory synchronization requirements](plan-hybrid-identity-design-considerations-directory-sync-requirements.md)
-* [Determine multi-factor authentication requirements](plan-hybrid-identity-design-considerations-multifactor-auth-requirements.md)
-
-## Define business needs strategy
-The first task addresses determining the organization's business needs. This task can be broad, and scope creep can occur if you aren't careful. In the beginning, keep it simple, but always remember to plan for a design that will accommodate and facilitate change in the future. Regardless of whether it's a simple design or a complex one, Azure Active Directory is the Microsoft Identity platform that supports Microsoft 365, Microsoft Online Services, and cloud-aware applications.
-
-## Define an integration strategy
-Microsoft has three main integration scenarios: cloud identities, synchronized identities, and federated identities. You should plan on adopting one of these integration strategies. The strategy you choose can vary; decisions in choosing one may include what type of user experience you want to provide, whether you have an existing infrastructure, and which option is the most cost effective.
-
-![integration scenarios](./media/plan-hybrid-identity-design-considerations/integration-scenarios.png)
-
-The scenarios defined in the above figure are:
-
-* **Cloud identities**: identities that exist solely in the cloud. For Azure AD, they would reside specifically in your Azure AD directory.
-* **Synchronized**: identities that exist on-premises and in the cloud. Using Azure AD Connect, users are either created or joined with existing Azure AD accounts. The user's password hash is synchronized from the on-premises environment to the cloud in what is called password hash synchronization. Remember that if a user is disabled in the on-premises environment, it can take up to three hours for that account status to show up in Azure AD. This behavior is due to the synchronization time interval.
-* **Federated**: identities exist both on-premises and in the cloud. Using Azure AD Connect, users are either created or joined with existing Azure AD accounts.
-
-> [!NOTE]
-> For more information about the Synchronization options, read [Integrating your on-premises identities with Azure Active Directory](../whatis-hybrid-identity.md).
->
->
-
-The following table helps in determining the advantages and disadvantages of each of the following strategies:
-
-| Strategy | Advantages | Disadvantages |
-| | | |
-| **Cloud identities** |Easier to manage for a small organization. <br> Nothing to install on-premises. No extra hardware needed<br>Easily disabled if the user leaves the company |Users will need to sign in when accessing workloads in the cloud <br> Passwords may or may not be the same for cloud and on-premises identities |
-| **Synchronized** |On-premises password authenticates both on-premises and cloud directories <br>Easier to manage for small, medium, or large organizations <br>Users can have single sign-on (SSO) for some resources <br> Microsoft preferred method for synchronization <br> Easier to manage |Some customers may be reluctant to synchronize their directories with the cloud due to specific company policies |
-| **Federated** |Users can have single sign-on (SSO) <br>If a user is terminated or leaves, the account can be immediately disabled and access revoked.<br> Supports advanced scenarios that can't be accomplished with synchronized |More steps to set up and configure <br> Higher maintenance <br> May require extra hardware for the STS infrastructure <br> May require extra hardware to install the federation server. Other software is required if AD FS is used <br> Requires extensive setup for SSO <br> Critical point of failure: if the federation server is down, users won't be able to authenticate |
-
-### Client experience
-The strategy that you use will dictate the user sign-in experience. The following tables provide you with information on what the users should expect their sign-in experience to be. Not all federated identity providers support SSO in all scenarios.
-
-**Domain-joined and private network applications**:
-
-| Application | Synchronized Identity | Federated Identity |
-| | | |
-| Web Browsers |Forms-based authentication |single sign-on, sometimes required to supply an organization ID |
-| Outlook |Prompt for credentials |Prompt for credentials |
-| Skype for Business (Lync) |Prompt for credentials |single sign-on for Lync, prompted credentials for Exchange |
-| OneDrive for Business |Prompt for credentials |single sign-on |
-| Office Pro Plus Subscription |Prompt for credentials |single sign-on |
-
-**External or untrusted sources**:
-
-| Application | Synchronized Identity | Federated Identity |
-| | | |
-| Web Browsers |Forms-based authentication |Forms-based authentication |
-| Outlook, Skype for Business (Lync), OneDrive for Business, Office subscription |Prompt for credentials |Prompt for credentials |
-| Exchange ActiveSync |Prompt for credentials |single sign-on for Lync, prompted credentials for Exchange |
-| Mobile apps |Prompt for credentials |Prompt for credentials |
-
-If you have a third-party IdP or are going to use one to provide federation with Azure AD, you need to be aware of the following supported capabilities:
-
-* Any SAML 2.0 provider that is compliant with the SP-Lite profile can support authentication to Azure AD and associated applications
-* Supports passive authentication, which facilitates authentication to OWA, SPO, etc.
-* Exchange Online clients can be supported via the SAML 2.0 Enhanced Client Profile (ECP)
-
-You must also be aware of what capabilities won't be available:
-
-* Without WS-Trust/Federation support, all other active clients break
- * That means no Lync client, OneDrive client, Office Subscription, Office Mobile prior to Office 2016
-* Transition of Office to passive authentication allows them to support pure SAML 2.0 IdPs, but support will still be on a client-by-client basis
-
-> [!NOTE]
-> For the most updated list read the article [Azure AD federation compatibility list](how-to-connect-fed-compatibility.md).
->
->
-
-## Define synchronization strategy
-This task defines the tools that will be used to synchronize the organization's on-premises data to the cloud and what topology you should use. Because most organizations use Active Directory, information on using Azure AD Connect to address the questions above is provided in some detail. For environments that don't have Active Directory, there's information about using FIM 2010 R2 or MIM 2016 to help plan this strategy. However, future releases of Azure AD Connect will support LDAP directories, so depending on your timeline, this information may be able to assist.
-
-### Synchronization tools
-Over the years, several synchronization tools have existed and been used for various scenarios. Currently Azure AD Connect is the go-to tool of choice for all supported scenarios. Azure AD Sync and DirSync are also still around and may even be present in your environment now.
-
-> [!NOTE]
-> For the latest information regarding the supported capabilities of each tool, read [Directory integration tools comparison](plan-hybrid-identity-design-considerations-tools-comparison.md) article.
->
->
-
-### Supported topologies
-When defining a synchronization strategy, the topology that is used must be determined. Depending on the information that was gathered in step 2, you can determine which topology is the proper one to use.
-The single forest, single Azure AD topology is the most common and consists of a single Active Directory forest and a single instance of Azure AD. This topology is used in most scenarios and is the expected topology when using the Azure AD Connect express installation, as shown in the figure below.
-
-![Supported topologies](./media/plan-hybrid-identity-design-considerations/single-forest.png)
-Single Forest Scenario
-It's common for large and even small organizations to have multiple forests, as shown in the following figure.
-
-> [!NOTE]
-> For more information about the different on-premises and Azure AD topologies with Azure AD Connect sync read the article [Topologies for Azure AD Connect](plan-connect-topologies.md).
->
->
-
-![multi-forest topology](./media/plan-hybrid-identity-design-considerations/multi-forest.png)
-
-Multi-Forest Scenario
-
-The multi-forest single Azure AD topology should be considered if the following items are true:
-
-* Users have only one identity across all forests; the uniquely identifying users section below describes this scenario in more detail.
-* The user authenticates to the forest in which their identity is located
-* UPN and Source Anchor (immutable ID) will come from this forest
-* All forests are accessible by Azure AD Connect, meaning the Azure AD Connect server does not need to be domain joined and can be placed in a DMZ.
-* Users have only one mailbox
-* The forest that hosts a user's mailbox has the best data quality for attributes visible in the Exchange Global Address List (GAL)
-* If there's no mailbox on the user, then any forest may be used to contribute values
-* If you have a linked mailbox, then there's also another account in a different forest used to sign in.
-
-> [!NOTE]
-> Objects that exist both on-premises and in the cloud are "connected" via a unique identifier. In the context of Directory Synchronization, this unique identifier is referred to as the SourceAnchor. In the context of Single Sign-On, this identifier is referred to as the ImmutableId. See [Design concepts for Azure AD Connect](plan-connect-design-concepts.md#sourceanchor) for more considerations regarding the use of SourceAnchor. A small conversion sketch follows this note.
->
->
-
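As a small illustration of that relationship, the sketch below converts an on-premises objectGUID into the base64 value that Azure AD stores as the ImmutableId, assuming the default objectGUID-based SourceAnchor; the GUID shown is a made-up example.

```python
import base64
import uuid

def immutable_id_from_object_guid(object_guid: str) -> str:
    """Return the base64-encoded form of an AD objectGUID, which is what
    Azure AD stores as the ImmutableId when objectGUID is the SourceAnchor."""
    # objectGUID is read from AD as raw bytes in the GUID's mixed-endian layout,
    # which uuid exposes as bytes_le.
    return base64.b64encode(uuid.UUID(object_guid).bytes_le).decode("ascii")

# Made-up objectGUID used only as an example.
print(immutable_id_from_object_guid("6c1f40e2-8f4a-4c1b-9d2e-0a1b2c3d4e5f"))
```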
-If the above criteria aren't all true and you have more than one active account or more than one mailbox, Azure AD Connect will pick one and ignore the other. If you have linked mailboxes but no other account, accounts won't be exported to Azure AD and that user won't be a member of any groups. This behavior is different from how it was in the past with DirSync and is intentional to better support multi-forest scenarios. A multi-forest scenario is shown in the figure below.
-
-![multiple Azure AD tenants](./media/plan-hybrid-identity-design-considerations/multiforest-multipleAzureAD.png)
-
-**Multi-forest multiple Azure AD scenario**
-
-It's recommended to have just a single directory in Azure AD for an organization. However, it's supported if a 1:1 relationship is kept between an Azure AD Connect sync server and an Azure AD directory. For each instance of Azure AD, you need an installation of Azure AD Connect. Also, Azure AD is isolated by design, and users in one instance of Azure AD won't be able to see users in another instance.
-
-It's possible and supported to connect one on-premises instance of Active Directory to multiple Azure AD directories as shown in the figure below:
-
-![single forest filtering](./media/plan-hybrid-identity-design-considerations/single-forest-flitering.png)
-
-**Single-forest filtering scenario**
-
-The following statements must be true:
-
-* Azure AD Connect sync servers must be configured for filtering so they each have a mutually exclusive set of objects. This is done, for example, by scoping each server to a particular domain or OU.
-* A DNS domain can only be registered in a single Azure AD directory so the UPNs of the users in the on-premises AD must use separate namespaces
-* Users in one instance of Azure AD will only be able to see users from their instance. They won't be able to see users in the other instances
-* Only one of the Azure AD directories can enable Exchange hybrid with the on-premises AD
-* Mutual exclusivity also applies to write-back. Thus, some write-back features aren't supported with this topology, since it assumes a single on-premises configuration:
- * Group write-back with default configuration
- * Device write-back
-
-The following items aren't supported and should not be chosen as an implementation:
-
-* It isn't supported to have multiple Azure AD Connect sync servers connecting to the same Azure AD directory, even if they're configured to synchronize mutually exclusive sets of objects
-* It's unsupported to sync the same user to multiple Azure AD directories.
-* It's also unsupported to make a configuration change that makes users in one Azure AD directory appear as contacts in another Azure AD directory.
-* It's also unsupported to modify Azure AD Connect sync to connect to multiple Azure AD directories.
-* Azure AD directories are by design isolated. It's unsupported to change the configuration of Azure AD Connect sync to read data from another Azure AD directory in an attempt to build a common and unified GAL between the directories. It's also unsupported to export users as contacts to another on-premises AD using Azure AD Connect sync.
-
-> [!NOTE]
-> If your organization restricts computers on your network from connecting to the Internet, this article lists the endpoints (FQDNs, IPv4, and IPv6 address ranges) that you should include in your outbound allow lists and Internet Explorer Trusted Sites Zone of client computers to ensure your computers can successfully use Microsoft 365. For more information read [Office 365 URLs and IP address ranges](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2?ui=en-US&rs=en-US&ad=US).
->
->
-
-## Define multi-factor authentication strategy
-In this task, you'll define the multi-factor authentication strategy to use. Azure AD Multi-Factor Authentication comes in two different versions. One is cloud-based and the other is on-premises-based, using the Azure MFA Server. Based on the evaluation you did above, you can determine which solution is the correct one for your strategy. Use the table below to determine which design option best fulfills your company's security requirements:
-
-Multi-factor design options:
-
-| Asset to secure | MFA in the cloud | MFA on-premises |
-| | | |
-| Microsoft apps |yes |yes |
-| SaaS apps in the app gallery |yes |yes |
-| IIS applications published through Azure AD App Proxy |yes |yes |
-| IIS applications not published through the Azure AD App Proxy |no |yes |
-| Remote access as VPN, RDG |no |yes |
-
-Even though you may have settled on a solution for your strategy, you still need to use the evaluation from above. This decision may cause the solution to change. Use the table below to assist you in determining this:
-
-| User location | Preferred design option |
-| | |
-| Azure Active Directory |Multi-Factor Authentication in the cloud |
-| Azure AD and on-premises AD using federation with AD FS |Both |
-| Azure AD and on-premises AD using Azure AD Connect no password sync |Both |
-| Azure AD and on-premises using Azure AD Connect with password sync |Both |
-| On-premises AD |Multi-Factor Authentication Server |
-
-> [!NOTE]
-> You should also ensure that the multi-factor authentication design option that you selected supports the features that are required for your design. For more information read [Choose the multi-factor security solution for you](../../authentication/concept-mfa-howitworks.md).
->
-
-## Multi-Factor Auth Provider
-Multi-factor authentication is available by default for Hybrid Identity Administrators who have an Azure Active Directory tenant. However, if you wish to extend multi-factor authentication to all of your users, and/or want your Hybrid Identity Administrators to be able to take advantage of features such as the management portal, custom greetings, and reports, then you must purchase and configure a Multi-Factor Authentication Provider.
-
-> [!NOTE]
-> You should also ensure that the multi-factor authentication design option that you selected supports the features that are required for your design.
->
->
-
-## Next steps
-[Determine data protection requirements](plan-hybrid-identity-design-considerations-dataprotection-requirements.md)
-
-## See also
-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
-
active-directory Plan Hybrid Identity Design Considerations Incident Response Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-hybrid-identity-design-considerations-incident-response-requirements.md
- Title: Hybrid identity design - incident response requirements Azure
-description: Determine monitoring and reporting capabilities for the hybrid identity solution that can be leveraged by IT to take actions to identify and mitigate a potential threat.
-Previously updated: 01/27/2023
-# Determine incident response requirements for your hybrid identity solution
-Large or medium organizations most likely have a [security incident response](/previous-versions/tn-archive/cc700825(v=technet.10)) plan in place to help IT take actions according to the level of the incident. The identity management system is an important component in the incident response process because it can be used to help identify who performed a specific action against the target. The hybrid identity solution must be able to provide monitoring and reporting capabilities that can be leveraged by IT to take actions to identify and mitigate a potential threat. In a typical incident response plan, you'll have the following phases as part of the plan:
-
-1. Initial assessment.
-2. Incident communication.
-3. Damage control and risk reduction.
-4. Identification of what was compromised and the severity.
-5. Evidence preservation.
-6. Notification to appropriate parties.
-7. System recovery.
-8. Documentation.
-9. Damage and cost assessment.
-10. Process and plan revision.
-
-During the identification of what was compromised and the severity phase, it's necessary to identify the systems that have been compromised and the files that have been accessed, and to determine the sensitivity of those files. Your hybrid identity system should be able to fulfill these requirements to assist you in identifying the user that made those changes.
-
-## Monitoring and reporting
-Many times the identity system can also help in the initial assessment phase, mainly if the system has built-in auditing and reporting capabilities. During the initial assessment, the IT admin must be able to identify a suspicious activity, or the system should be able to trigger it automatically based on a pre-configured task. Many activities could indicate a possible attack; however, in other cases, a badly configured system might lead to a number of false positives in an intrusion detection system.
-
-The identity management system should assist IT admins to identify and report those suspicious activities. Usually these technical requirements can be fulfilled by monitoring all systems and having a reporting capability that can highlight potential threats. Use the questions below to help you design your hybrid identity solution while taking into consideration incident response requirements:
-
-* Does your company have a security incident response in place?
- * If yes, is the current identity management system used as part of the process?
-* Does your company need to identify suspicious sign-on attempts from users across different devices?
-* Does your company need to detect potentially compromised user credentials?
-* Does your company need to audit users' access and actions?
-* Does your company need to know when a user resets their password?
-
-## Policy enforcement
-During the damage control and risk reduction phase, it is important to quickly reduce the actual and potential effects of an attack. The actions that you take at this point can make the difference between a minor and a major incident. The exact response will depend on your organization and the nature of the attack that you face. If the initial assessment concluded that an account was compromised, you'll need to enforce policy to block this account. That's just one example where the identity management system is leveraged. Use the questions below to help you design your hybrid identity solution while taking into consideration how policies will be enforced to react to an ongoing incident:
-
-* Does your company have policies in place to block users from accessing the network if necessary?
- * If yes, does the current solution integrate with the hybrid identity management system that you're going to adopt?
-* Does your company need to enforce Conditional Access for users that are in quarantine?
-
-> [!NOTE]
-> Make sure to take notes of each answer and understand the rationale behind it. [Define data protection strategy](plan-hybrid-identity-design-considerations-data-protection-strategy.md) goes over the options available and the advantages/disadvantages of each option. Having answered these questions, you'll be able to select the option that best suits your business needs.
->
->
-
-## Next steps
-[Define data protection strategy](plan-hybrid-identity-design-considerations-data-protection-strategy.md)
-
-## See Also
-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
active-directory Plan Hybrid Identity Design Considerations Lifecycle Adoption Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-hybrid-identity-design-considerations-lifecycle-adoption-strategy.md
- Title: Hybrid identity design - lifecycle adoption strategy Azure
-description: Helps define the hybrid identity management tasks according to the options available for each lifecycle phase.
-Previously updated: 01/19/2023
-# Determine hybrid identity lifecycle adoption strategy
-In this task, you'll define the identity management strategy for your hybrid identity solution to meet the business requirements that you defined in [Determine hybrid identity management tasks](plan-hybrid-identity-design-considerations-hybrid-id-management-tasks.md).
-
-To define the hybrid identity management tasks according to the end-to-end identity lifecycle presented earlier in this step, you will have to consider the options available for each lifecycle phase.
-
-## Access management and provisioning
-With a good account access management solution, your organization can track precisely who has access to what information across the organization.
-
-Access control is a critical function of a centralized, single-point provisioning system. Besides protecting sensitive information, access controls expose existing accounts that have unapproved authorizations or are no longer necessary. To control obsolete accounts, the provisioning system links together account information with authoritative information about the users who own the accounts. Authoritative user identity information is typically maintained in the databases and directories of human resources.
-
-Accounts in sophisticated IT enterprises include hundreds of parameters that define the authorities, and these details can be controlled by your provisioning system. New users can be identified with the data that you provide from the authoritative source. The access request approval capability initiates the processes that approve (or reject) resource provisioning for them.
-
-| Lifecycle management phase | On premises | Cloud | Hybrid |
-| | | | |
-| Account Management and Provisioning |By using the Active Directory® Domain Services (AD DS) server role, you can create a scalable, secure, and manageable infrastructure for user and resource management, and provide support for directory-enabled applications such as Microsoft® Exchange Server. <br><br> [You can provision groups in AD DS through an Identity manager](/previous-versions/mim/ff686261(v=ws.10)) <br>[You can provision users in AD DS](/previous-versions/mim/ff686263(v=ws.10)) <br><br> Administrators can use access control to manage user access to shared resources for security purposes. In Active Directory, access control is administered at the object level by setting different levels of access, or permissions, to objects, such as Full Control, Write, Read, or No Access. Access control in Active Directory defines how different users can use Active Directory objects. By default, permissions on objects in Active Directory are set to the most secure setting. |You have to create an account for every user who will access a Microsoft cloud service. You can also change user accounts or delete them when they’re no longer needed. By default, users do not have administrator permissions, but you can optionally assign them. <br><br> Within Azure Active Directory, one of the major features is the ability to manage access to resources. These resources can be part of the directory, as in the case of permissions to manage objects through roles in the directory, or resources that are external to the directory, such as SaaS applications, Azure services, and SharePoint sites or on-premises resources. <br><br> At the center of Azure Active Directory’s access management solution is the security group. The resource owner (or the administrator of the directory) can assign a group to provide a certain access right to the resources they own. The members of the group will be provided the access, and the resource owner can delegate the right to manage the members list of a group to someone else – such as a department manager or a helpdesk administrator<br> <br> The Managing groups in Azure AD section, provides more information on managing access through groups. |Extend Active Directory identities into the cloud through synchronization and federation |
-
-## Role-based access control
-Azure role-based access control (Azure RBAC) uses roles and provisioning policies to evaluate, test, and enforce your business processes and rules for granting access to users. Key administrators create provisioning policies and assign users to roles that define sets of entitlements to resources. Azure RBAC extends the identity management solution to use software-based processes and reduce manual user interaction in the provisioning process.
-Azure RBAC enables the company to restrict the number of operations that an individual can do once they have access to the Azure portal. By using Azure RBAC to control access to the portal, IT admins can delegate access by using the following access management approaches:
-
-* **Group-based role assignment**: You can assign access to Azure AD groups that can be synced from your local Active Directory. This enables you to leverage the existing investments that your organization has made in tooling and processes for managing groups. You can also use the delegated group management feature of Azure AD Premium.
-* **Leverage built-in roles in Azure**: You can use three roles (Owner, Contributor, and Reader) to ensure that users and groups have permission to do only the tasks they need to do their jobs.
-* **Granular access to resources**: You can assign roles to users and groups for a particular subscription, resource group, or an individual Azure resource such as a website or database. In this way, you can ensure that users have access to all the resources they need and no access to resources that they don't need to manage. A minimal sketch of creating such a scoped role assignment follows this list.
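As a rough sketch (not an official procedure) of what a granular, scope-limited role assignment looks like against the Azure Resource Manager REST API, the example below grants the built-in Reader role on a single resource group. The subscription ID, resource group name, principal object ID, and access token are placeholders you would supply; the same pattern applies at subscription or individual-resource scope by changing only the scope string.

```python
import uuid
import requests

# Placeholder values; supply your own subscription, resource group,
# Azure AD object ID (user or group), and an ARM access token.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "example-rg"
PRINCIPAL_ID = "11111111-1111-1111-1111-111111111111"
ACCESS_TOKEN = "<ARM bearer token>"
READER_ROLE_ID = "acdd72a7-3385-48ef-bd42-f606fba81ae7"  # built-in Reader role

scope = f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
assignment_name = str(uuid.uuid4())  # each role assignment needs a new GUID name
url = (
    f"https://management.azure.com{scope}"
    f"/providers/Microsoft.Authorization/roleAssignments/{assignment_name}"
    "?api-version=2022-04-01"
)
body = {
    "properties": {
        "roleDefinitionId": (
            f"/subscriptions/{SUBSCRIPTION_ID}"
            f"/providers/Microsoft.Authorization/roleDefinitions/{READER_ROLE_ID}"
        ),
        "principalId": PRINCIPAL_ID,
    }
}
response = requests.put(
    url, json=body, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}, timeout=30
)
response.raise_for_status()
print(response.json()["id"])  # resource ID of the new role assignment
```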
-
-## Provisioning and other customization options
-Your team can use business plans and requirements to decide how much to customize the identity solution. For example, a large enterprise might require a phased roll-out plan for workflows and custom adapters that is based on a time line for incrementally provisioning applications that are widely used across geographies. Another customization plan might provide for two or more applications to be provisioned across an entire organization, after successful testing. User-application interaction can be customized, and procedures for provisioning resources might be changed to accommodate automated provisioning.
-
-You can deprovision to remove a service or component. For example, deprovisioning an account means that the account is deleted from a resource.
-
-The hybrid model of provisioning resources combines request and role-based approaches, which are both supported by Azure AD. For a subset of employees or managed systems, a business might want to automate access with role-based assignment. A business might also handle all other access requests or exceptions through a request-based model. Some businesses might start with manual assignment, and evolve toward a hybrid model, with an intention of a fully role-based deployment at a future time.
-
-Other companies might find it impractical for business reasons to achieve complete role-based provisioning, and target a hybrid approach as a desired goal. Still other companies might be satisfied with only request-based provisioning, and not want to invest additional effort to define and manage role-based, automated provisioning policies.
-
-## License management
-Group-based license management in Azure AD lets administrators assign users to a security group and Azure AD automatically assigns licenses to all the members of the group. If a user is subsequently added to, or removed from the group, a license will be automatically assigned or removed as appropriate.
-
-You can use groups you synchronize from on-premises AD or manage in Azure AD. Pairing this with Azure AD Premium self-service group management, you can easily delegate license assignment to the appropriate decision makers. You can be assured that problems like license conflicts and missing location data are automatically sorted out.
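As a rough sketch of what group-based license assignment looks like against the Microsoft Graph REST API (assuming group-based licensing is available in your tenant and you already hold a token with the appropriate Graph permissions), the example below assigns a license SKU to a group; the group ID, SKU ID, and token are placeholders. Members added to the group later inherit the license automatically, and members removed have it removed, which is the behavior described above.

```python
import requests

# Placeholder values; supply your own group object ID, license SKU ID,
# and a Microsoft Graph token with sufficient group/license permissions.
GROUP_ID = "22222222-2222-2222-2222-222222222222"
SKU_ID = "33333333-3333-3333-3333-333333333333"
ACCESS_TOKEN = "<Graph bearer token>"

url = f"https://graph.microsoft.com/v1.0/groups/{GROUP_ID}/assignLicense"
body = {
    "addLicenses": [{"disabledPlans": [], "skuId": SKU_ID}],
    "removeLicenses": [],
}
response = requests.post(
    url, json=body, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}, timeout=30
)
response.raise_for_status()
print("License assigned to group", response.json().get("id", GROUP_ID))
```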
-
-## Self-regulating user administration
-When your organization starts to provision resources across all internal organizations, you implement the self-regulating user administration capability. You can realize the advantages and benefits of provisioning users across organizational boundaries. In this environment, a change in a user's status is automatically reflected in access rights across organization boundaries and geographies. You can reduce provisioning costs and streamline the access and approval processes. The implementation realizes the full potential of implementing role-based access control for end-to-end access management in your organization. You can reduce administrative costs through automated procedures for governing user provisioning. You can improve security by automating security policy enforcement, and streamline and centralize user lifecycle management and resource provisioning for large user populations.
-
-> [!NOTE]
-> For more information, see Setting up Azure AD for self-service application access management
->
->
-
-License-based (Entitlement-based) Azure AD services work by activating a subscription in your Azure AD directory/service tenant. Once the subscription is active the service capabilities can be managed by directory/service administrators and used by licensed users.
-
-## Integration with other third-party providers
-
-Azure Active Directory provides single sign-on and enhanced application access security to thousands of SaaS applications and on-premises web applications. For more information, see [Integrating applications with Azure Active Directory](../../develop/quickstart-register-app.md)
-
-## Define synchronization management
-Integrating your on-premises directories with Azure AD makes your users more productive by providing a common identity for accessing both cloud and on-premises resources. With this integration, users and organizations can take advantage of the following:
-
-* Organizations can provide users with a common hybrid identity across on-premises or cloud-based services leveraging Windows Server Active Directory and then connecting to Azure Active Directory.
-* Administrators can provide Conditional Access based on application resource, device and user identity, network location and multi-factor authentication.
-* Users can leverage their common identity through accounts in Azure AD to Microsoft 365, Intune, SaaS apps, and third-party applications.
-* Developers can build applications that leverage the common identity model, integrating applications into Active Directory on-premises or Azure for cloud-based applications
-
-The following figure shows an example of a high-level view of the identity synchronization process.
-
-![Sync](./media/plan-hybrid-identity-design-considerations/identitysync.png)
-
-Identity synchronization process
-
-Review the following table to compare the synchronization options:
-
-| Synchronization Management Option | Advantages | Disadvantages |
-| | | |
-| Sync-based (through DirSync or AADConnect) |Users and groups synchronized from on-premises and cloud <br> **Policy control**: Account policies can be set through Active Directory, which gives the administrator the ability to manage password policies, workstation restrictions, lock-out controls, and more, without having to perform additional tasks in the cloud. <br> **Access control**: Can restrict access to the cloud service so that the services can be accessed through the corporate environment, through online servers, or both. <br> **Reduced support calls**: If users have fewer passwords to remember, they are less likely to forget them. <br> **Security**: User identities and information are protected because all of the servers and services used in single sign-on are mastered and controlled on-premises. <br> **Support for strong authentication**: You can use strong authentication (also called two-factor authentication) with the cloud service. However, if you use strong authentication, you must use single sign-on. | |
-| Federation-based (through AD FS) |Enabled by a Security Token Service (STS). When you configure an STS to provide single sign-on access with a Microsoft cloud service, you will be creating a federated trust between your on-premises STS and the federated domain you've specified in your Azure AD tenant. <br> Allows end users to use the same set of credentials to obtain access to multiple resources, so end users do not have to maintain multiple sets of credentials. Yet, the users have to provide their credentials to each one of the participating resources. <br> B2B and B2C scenarios supported. |Requires specialized personnel for deployment and maintenance of dedicated on-premises AD FS servers. There are restrictions on the use of strong authentication if you plan to use AD FS for your STS. For more information, see [Configuring Advanced Options for AD FS 2.0](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/hh237448(v=ws.10)). |
-
-> [!NOTE]
-> For more information see, [Integrating your on-premises identities with Azure Active Directory](../whatis-hybrid-identity.md).
->
->
-
-## See Also
-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
active-directory Plan Hybrid Identity Design Considerations Multifactor Auth Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-hybrid-identity-design-considerations-multifactor-auth-requirements.md
- Title: Hybrid identity design - multi-factor authentication requirements Azure
-description: With Conditional Access control, Azure AD verifies the specific conditions you pick when authenticating the user and before allowing access to the application.
-Previously updated: 01/19/2023
-# Determine multi-factor authentication requirements for your hybrid identity solution
-In this world of mobility, with users accessing data and applications in the cloud and from any device, securing this information has become paramount. Every day there is a new headline about a security breach. Although there is no guarantee against such breaches, multi-factor authentication provides an additional layer of security to help prevent them.
-Start by evaluating the organization's requirements for multi-factor authentication. That is, what is the organization trying to secure? This evaluation is important to define the technical requirements for setting up and enabling the organization's users for multi-factor authentication.
-
-Make sure to answer the following:
-
-* Is your company trying to secure Microsoft apps?
-* How are these apps published?
-* Does your company provide remote access to allow employees to access on-premises apps?
-
-If yes, what type of remote access?
-
-You also need to evaluate where the users who are accessing these applications will be located. This evaluation is another important step to define the proper multi-factor authentication strategy. Make sure to answer the following questions:
-
-* Where are the users going to be located?
-* Can they be located anywhere?
-* Does your company want to establish restrictions according to the user's location?
-
-Once you understand these requirements, it is important to also evaluate the users' requirements for multi-factor authentication. This evaluation is important because it will define the requirements for rolling out multi-factor authentication. Make sure to answer the following questions:
-
-* Are the users familiar with multi-factor authentication?
-* Will some users be required to provide additional authentication?
- * If yes, all the time, when coming from external networks, or accessing specific applications, or under other conditions?
-* Will the users require training on how to setup and implement multi-factor authentication?
-* What are the key scenarios that your company wants to enable multi-factor authentication for their users?
-
-After answering the previous questions, you will be able to understand whether multi-factor authentication is already implemented on-premises. This evaluation is important to define the technical requirements for setting up and enabling the organization's users for multi-factor authentication. Make sure to answer the following questions:
-
-* Does your company need to protect privileged accounts with MFA?
-* Does your company need to enable MFA for certain applications for compliance reasons?
-* Does your company need to enable MFA for all eligible users of these applications or only administrators?
-* Do you need to have MFA always enabled, or only when users are signed in from outside of your corporate network?
-
-## Next steps
-[Define a hybrid identity adoption strategy](plan-hybrid-identity-design-considerations-identity-adoption-strategy.md)
-
-## See also
-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
-
active-directory Plan Hybrid Identity Design Considerations Nextsteps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-hybrid-identity-design-considerations-nextsteps.md
- Title: Azure Active Directory hybrid identity design considerations- next steps| Microsoft Docs
-description: A synopsis and next steps after you have read the Hybrid Identity design considerations guide
-Previously updated: 01/19/2023
-# Azure Active Directory hybrid identity design considerations- next steps
-Now that you've completed defining your requirements and examining all the options for your mobile device management solution, you're ready to take the next steps for deploying the supporting infrastructure that's right for you and your organization.
-
-## Hybrid identity documentation
-Conceptual and procedural planning, deployment, and administration content are useful when implementing your mobile device management solution:
-
-* [Microsoft System Center](/previous-versions/system-center/developer/cc817313(v=msdn.10)) solutions can help you capture and aggregate knowledge about your infrastructure, policies, processes, and best practices so that your IT staff can build manageable systems and automate operations.
-* [Microsoft Intune](/mem/intune/) is a cloud-based device management service that helps you to manage your computers and mobile devices and to secure your company's information.
-* [MDM for Microsoft 365](/microsoft-365/admin/basic-mobility-security/overview) allows you to manage and secure mobile devices when they're connected to your Microsoft 365 organization. You can use MDM for Microsoft 365 to set device security policies and access rules, and to wipe mobile devices if they're lost or stolen.
-
-## Hybrid identity resources
-Monitoring the following resources often provides the latest news and updates on mobile device management solutions:
-
-* [Microsoft Enterprise Mobility blog](https://cloudblogs.microsoft.com/ENTERPRISEMOBILITY/)
-* [Microsoft In The Cloud blog](https://cloudblogs.microsoft.com/)
-* [Microsoft Intune blog](https://techcommunity.microsoft.com/t5/intune-customer-success/welcome-to-the-new-intune-customer-success-blog/ba-p/281367)
-* [Microsoft Configuration Manager blog](https://techcommunity.microsoft.com/t5/Configuration-Manager-Blog/bg-p/ConfigurationManagerBlog)
-
-## See also
-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
active-directory Plan Hybrid Identity Design Considerations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-hybrid-identity-design-considerations-overview.md
- Title: Azure Active Directory hybrid identity design considerations - overview
-description: Overview and content map of Hybrid Identity design considerations guide
-Previously updated: 01/19/2023
-# Azure Active Directory Hybrid Identity Design Considerations
-Consumer-based devices are proliferating in the corporate world, and cloud-based software-as-a-service (SaaS) applications are easy to adopt. As a result, maintaining control of users' application access across internal datacenters and cloud platforms is challenging.
-
-Microsoft's identity solutions span on-premises and cloud-based capabilities, creating a single user identity for authentication and authorization to all resources, regardless of location. This concept is known as Hybrid Identity. There are different design and configuration options for hybrid identity using Microsoft solutions, and in some cases it might be difficult to determine which combination will best meet the needs of your organization.
-
-This Hybrid Identity Design Considerations Guide will help you to understand how to design a hybrid identity solution that best fits the business and technology needs for your organization. This guide details a series of steps and tasks that you can follow to help you design a hybrid identity solution that meets your organization's unique requirements. Throughout the steps and tasks, the guide will present the relevant technologies and feature options available to organizations to meet functional and service quality (such as availability, scalability, performance, manageability, and security) level requirements.
-
-Specifically, the hybrid identity design considerations guide goals are to answer the following questions:
-
-* What questions do I need to ask and answer to drive a hybrid identity-specific design for a technology or problem domain that best meets my requirements?
-* What sequence of activities should I complete to design a hybrid identity solution for the technology or problem domain?
-* What hybrid identity technology and configuration options are available to help me meet my requirements? What are the trade-offs between those options so that I can select the best option for my business?
-
-## Who is this guide intended for?
- CIO, CITO, Chief Identity Architects, Enterprise Architects, and IT Architects responsible for designing a hybrid identity solution for medium or large organizations.
-
-## How can this guide help you?
-You can use this guide to understand how to design a hybrid identity solution that is able to integrate a cloud-based identity management system with your current on-premises identity solution.
-
-The following graphic shows an example of a hybrid identity solution that enables IT admins to integrate their current Windows Server Active Directory solution located on-premises with Microsoft Azure Active Directory, to enable users to use single sign-on (SSO) across applications located in the cloud and on-premises.
-
-![Example](media/plan-hybrid-identity-design-considerations/hybridID-example.png)
-
-The above illustration is an example of a hybrid identity solution that is leveraging cloud services to integrate with on-premises capabilities in order to provide a single experience to the end-user authentication process and to facilitate IT managing those resources. Although this example can be a common scenario, every organization's hybrid identity design is likely to be different than the example illustrated above due to different requirements.
-
-This guide provides a series of steps and tasks that you can follow to design a hybrid identity solution that meets your organization's unique requirements. Throughout the following steps and tasks, the guide presents the relevant technologies and feature options available to you to meet functional and service quality level requirements for your organization.
-
-**Assumptions**: You have some experience with Windows Server, Active Directory Domain Services, and Azure Active Directory. In this document, it is assumed you are looking for how these solutions can meet your business needs on their own, or in an integrated solution.
-
-## Design considerations overview
-This document provides a set of steps and tasks that you can follow to design a hybrid identity solution that best meets your requirements. The steps are presented in an ordered sequence. However, design considerations you learn in later steps may require you to change decisions you made in earlier steps due to conflicting design choices. Every attempt is made to alert you to potential design conflicts throughout the document.
-
-You will arrive at the design that best meets your requirements only after iterating through the steps as many times as necessary to incorporate all of the considerations within the document.
-
-| Hybrid Identity Phase | Topic List |
-| | |
-| Determine identity requirements |[Determine business needs](plan-hybrid-identity-design-considerations-business-needs.md)<br> [Determine directory synchronization requirements](plan-hybrid-identity-design-considerations-directory-sync-requirements.md)<br> [Determine multi-factor authentication requirements](plan-hybrid-identity-design-considerations-multifactor-auth-requirements.md)<br> [Define a hybrid identity adoption strategy](plan-hybrid-identity-design-considerations-identity-adoption-strategy.md) |
-| Plan for enhancing data security through strong identity solution |[Determine data protection requirements](plan-hybrid-identity-design-considerations-dataprotection-requirements.md) <br> [Determine content management requirements](plan-hybrid-identity-design-considerations-contentmgt-requirements.md)<br> [Determine access control requirements](plan-hybrid-identity-design-considerations-accesscontrol-requirements.md)<br> [Determine incident response requirements](plan-hybrid-identity-design-considerations-incident-response-requirements.md) <br> [Define data protection strategy](plan-hybrid-identity-design-considerations-data-protection-strategy.md) |
-| Plan for hybrid identity lifecycle |[Determine hybrid identity management tasks](plan-hybrid-identity-design-considerations-hybrid-id-management-tasks.md) <br> [Synchronization Management](plan-hybrid-identity-design-considerations-hybrid-id-management-tasks.md)<br> [Determine hybrid identity management adoption strategy](plan-hybrid-identity-design-considerations-lifecycle-adoption-strategy.md) |
-
-## Next Steps
-[Determine identity requirements](plan-hybrid-identity-design-considerations-business-needs.md)
-
active-directory Plan Hybrid Identity Design Considerations Tools Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-hybrid-identity-design-considerations-tools-comparison.md
- Title: 'Hybrid Identity: Directory integration tools comparison'
-description: This page provides a comprehensive table that compares the various tools that can be used for directory integration.
------ Previously updated : 01/19/2023----
-# Hybrid Identity directory integration tools comparison
-Over the years the directory integration tools have grown and evolved.
---- [MIM](/microsoft-identity-manager/microsoft-identity-manager-2016) is still supported, and primarily enables synchronization from or between on-premises systems. The [FIM Windows Azure AD Connector](/previous-versions/mim/dn511001(v=ws.10)) is deprecated. Customers with on-premises sources such as Notes or SAP HCM should use MIM in one of two topologies.
- - If users and groups are needed in Active Directory Domain Services (AD DS), then use MIM to populate users and groups into AD DS, and use either Azure AD Connect sync or Azure AD Connect cloud provisioning to synchronize those users and groups from AD DS to Azure AD.
- - If users and groups are not needed in AD DS, then use MIM to populate users and groups into Azure AD through the [MIM Graph connector](/microsoft-identity-manager/microsoft-identity-manager-2016-connector-graph).
-- [Azure AD Connect sync](how-to-connect-sync-whatis.md) incorporates the components and functionality previously released in DirSync and Azure AD Sync, for synchronizing between AD DS forests and Azure AD.
-- [Azure AD Connect cloud provisioning](../cloud-sync/what-is-cloud-sync.md) is a new Microsoft agent for syncing from AD DS to Azure AD, useful for scenarios such as merger and acquisition where the acquired company's AD forests are isolated from the parent company's AD forests.
-To learn more about the differences between Azure AD Connect sync and Azure AD Connect cloud provisioning, see the article [What is Azure AD Connect cloud provisioning?](../cloud-sync/what-is-cloud-sync.md). For more information on deployment options with multiple HR sources or directories, see the article [parallel and combined identity infrastructure options](../../fundamentals/parallel-identity-options.md).
-
-## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](../whatis-hybrid-identity.md).
active-directory Recommendation Migrate From Adal To Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-from-adal-to-msal.md
Previously updated : 05/26/2023 Last updated : 08/10/2023
This article covers the recommendation to migrate from the Azure Active Director
The Azure Active Directory Authentication Library (ADAL) is currently slated for end-of-support on June 30, 2023. We recommend that customers migrate to Microsoft Authentication Libraries (MSAL), which replaces ADAL.
-This recommendation shows up if your tenant has applications that still use ADAL.
+This recommendation shows up if your tenant has applications that still use ADAL. The service marks any application in your tenant that makes a token request using ADAL as an ADAL application. Applications that use both ADAL and MSAL are marked as ADAL applications.
+
+When an application is identified as an ADAL application, each day the recommendation looks back 30 days for any new ADAL requests from applications within the tenant. If an ADAL application doesn't send any new ADAL requests for 30 days, the application is marked as completed. When all applications are completed, the recommendation status changes to completed. If a new ADAL request is detected for an application that was completed, the status changes back to active.
## Value
Existing apps that use ADAL will continue to work after the end-of-support date.
## Action plan
-The first step to migrating your apps from ADAL to MSAL is to identify all applications in your tenant that are currently using ADAL. You can identify your apps in the Azure portal or programmatically.
+The first step to migrating your apps from ADAL to MSAL is to identify all applications in your tenant that are currently using ADAL. You can identify your apps in the Azure portal or programmatically with the Microsoft Graph API or the Microsoft Graph PowerShell SDK.
-### Identify your apps in the Azure portal
+### [Azure portal](#tab/Azure-portal)
There are four steps to identifying and updating your apps in the Azure portal. The following steps are covered in detail in the [List all apps using ADAL](../develop/howto-get-list-of-all-active-directory-auth-library-apps.md) article.
There are four steps to identifying and updating your apps in the Azure portal.
- For example, the steps for .NET and Python applications have separate instructions. - For a full list of instructions for each scenario, see [How to migrate to MSAL](../develop/msal-migration.md#how-to-migrate-to-msal).
-### Identify your apps with the Microsoft Graph API
+### [Microsoft Graph API](#tab/Microsoft-Graph-API)
You can use Microsoft Graph to identify apps that need to be migrated to MSAL. To get started, see [How to use Microsoft Graph with Azure AD recommendations](howto-use-recommendations.md#how-to-use-microsoft-graph-with-azure-active-directory-recommendations).
df.onecloud.azure-test.net/#view/Microsoft_AAD_RegisteredApps/ApplicationMenuBla
} ```
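
If you want to script the same lookup, the following PowerShell sketch shows one way to call the recommendations API through the Microsoft Graph PowerShell SDK's generic request cmdlet, so the raw endpoint stays visible. It is a minimal illustration, not the documented procedure: the beta `directory/recommendations` endpoint, the `DirectoryRecommendations.Read.All` permission, and the `adalToMsalMigration` recommendation type are assumptions to confirm against the linked article.

```powershell
# Minimal sketch - the endpoint, permission scope, and recommendationType value
# below are assumptions; verify them against the linked Microsoft Graph article.
Connect-MgGraph -Scopes "DirectoryRecommendations.Read.All"

# Find the ADAL-to-MSAL recommendation for the tenant.
$uri = "https://graph.microsoft.com/beta/directory/recommendations?`$filter=recommendationType eq 'adalToMsalMigration'"
$recommendation = (Invoke-MgGraphRequest -Method GET -Uri $uri).value | Select-Object -First 1

# List the impacted resources, that is, the applications still making ADAL token requests.
$impactedUri = "https://graph.microsoft.com/beta/directory/recommendations/$($recommendation.id)/impactedResources"
(Invoke-MgGraphRequest -Method GET -Uri $impactedUri).value | ForEach-Object { $_.displayName }
```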
-### Identify your apps with Microsoft Graph PowerShell SDK
+### [Microsoft Graph PowerShell SDK](#tab/Microsoft-Graph-PowerShell-SDK)
You can run the following set of commands in Windows PowerShell. These commands use the [Microsoft Graph PowerShell SDK](/graph/powershell/installation) to get a list of all applications in your tenant that use ADAL.
You can run the following set of commands in Windows PowerShell. These commands
1. Update the code for your apps using the instructions in [How to migrate to MSAL](../develop/msal-migration.md#how-to-migrate-to-msal). ++
+## Frequently asked questions
+
+### Why does it take 30 days to change the status to completed?
+
+To reduce false positives, the service uses a 30-day window for ADAL requests. This way, an application can go several days without an ADAL request and not be falsely marked as completed.
+
+### How were ADAL applications identified before the recommendation was released?
+
+The [Azure AD sign-ins workbook](../develop/howto-get-list-of-all-auth-library-apps.md) is an alternative method to identify these apps. The workbook is still available to you, but using the workbook requires streaming sign-in logs to Azure Monitor first. The ADAL to MSAL recommendation works out of the box. In addition, the sign-ins workbook doesn't capture service principal sign-ins, while the recommendation does.
+
+### Why is the number of ADAL applications different in the workbook and the recommendation?
+
+Because the recommendation captures Service Principal sign-ins and the workbook doesn't, the recommendation may show more ADAL applications.
+
+### How do I identify the owner of an application in my tenant?
+
+You can locate the owner from the recommendation details. Select the resource, which takes you to the application details, and then select **Owners** from the navigation menu.
+
+### Can the status change from *completed* to *active*?
+
+Yes. If an application was marked as *completed* because no ADAL requests were made during the 30-day window, and the service later detects a new ADAL request, the status changes back to *active*.
+ ## Next steps - [Review the Azure AD recommendations overview](overview-recommendations.md)
active-directory Groups Assign Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-assign-role.md
$group = New-MgGroup -DisplayName "Contoso_Helpdesk_Administrators" -Description
### Get the role definition you want to assign
-Use the [Get-MgRoleManagementDirectoryRoleDefinition](/powershell/module/microsoft.graph.devicemanagement.enrolment/get-mgrolemanagementdirectoryroledefinition?branch=main) command to get a role definition.
+Use the [Get-MgRoleManagementDirectoryRoleDefinition](/powershell/module/microsoft.graph.identity.governance/get-mgrolemanagementdirectoryroledefinition?view=graph-powershell-1.0) command to get a role definition.
```powershell $roleDefinition = Get-MgRoleManagementDirectoryRoleDefinition -Filter "displayName eq 'Helpdesk Administrator'"
$roleDefinition = Get-MgRoleManagementDirectoryRoleDefinition -Filter "displayNa
### Create a role assignment
-Use the [New-MgRoleManagementDirectoryRoleAssignment](/powershell/module/microsoft.graph.devicemanagement.enrolment/new-mgrolemanagementdirectoryroleassignment?branch=main) command to assign the role.
+Use the [New-MgRoleManagementDirectoryRoleAssignment](/powershell/module/microsoft.graph.identity.governance/new-mgrolemanagementdirectoryroleassignment?view=graph-powershell-1.0) command to assign the role.
```powershell $roleAssignment = New-MgRoleManagementDirectoryRoleAssignment -DirectoryScopeId '/' -RoleDefinitionId $roleDefinition.Id -PrincipalId $group.Id
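
Taken together, the commands in this article form a short end-to-end script. The following is a minimal sketch only: the `Connect-MgGraph` scopes and the extra `New-MgGroup` parameters (description text, mail nickname, security and role-assignable flags) are illustrative assumptions added so the snippet is self-contained; use the exact group-creation command shown earlier in the article.

```powershell
# Minimal end-to-end sketch. The scopes and the extra New-MgGroup parameters
# are illustrative assumptions; follow the article for the exact commands.
Connect-MgGraph -Scopes "Group.ReadWrite.All","RoleManagement.ReadWrite.Directory"

# Create a role-assignable security group (parameters beyond DisplayName are assumed).
$group = New-MgGroup -DisplayName "Contoso_Helpdesk_Administrators" `
    -Description "Helpdesk administrators" `
    -MailEnabled:$false -MailNickname "contosohelpdeskadmins" `
    -SecurityEnabled:$true -IsAssignableToRole:$true

# Get the role definition to assign.
$roleDefinition = Get-MgRoleManagementDirectoryRoleDefinition `
    -Filter "displayName eq 'Helpdesk Administrator'"

# Assign the role to the group at tenant ('/') scope.
$roleAssignment = New-MgRoleManagementDirectoryRoleAssignment `
    -DirectoryScopeId '/' -RoleDefinitionId $roleDefinition.Id -PrincipalId $group.Id
```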
active-directory Airbase Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/airbase-provisioning-tutorial.md
Title: 'Tutorial: Configure Airbase for automatic user provisioning with Azure Active Directory'
-description: Learn how to automatically provision and de-provision user accounts from Azure AD to Airbase.
+description: Learn how to automatically provision and deprovision user accounts from Azure AD to Airbase.
writer: twimmers
# Tutorial: Configure Airbase for automatic user provisioning
-This tutorial describes the steps you need to perform in both Airbase and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [Airbase](https://www.airbase.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Airbase and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users to [Airbase](https://www.airbase.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Supported capabilities
The scenario outlined in this tutorial assumes that you already have the followi
* Determine what data to [map between Azure AD and Airbase](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Airbase to support provisioning with Azure AD
-Contact Airbase support to configure Airbase to support provisioning with Azure AD.
+
+1. Log in to the Airbase portal.
+1. Navigate to the Users section.
+1. Click Sync with HRIS.
+
+ ![Screenshot of choosing Azure from People - Users page.](media/airbase-provisioning-tutorial/connect-hris.png)
+
+1. Select Azure AD from the list of HRIS.
+1. Make a note of the Base URL and API Token.
+
+ ![Screenshot of tenant url and token.](media/airbase-provisioning-tutorial/generate-token.png)
+
+1. Use these values in Step 5.5.
## Step 3. Add Airbase from the Azure AD application gallery
active-directory Hoxhunt Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hoxhunt-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
- ![Enterprise applications blade](common/enterprise-applications.png)
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
2. In the applications list, select **Hoxhunt**.
- ![The Hoxhunt link in the Applications list](common/all-applications.png)
+ ![Screenshot of the Hoxhunt link in the Applications list.](common/all-applications.png)
3. Select the **Provisioning** tab.
- ![Provisioning tab](common/provisioning.png)
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
4. Set the **Provisioning Mode** to **Automatic**.
- ![Provisioning tab automatic](common/provisioning-automatic.png)
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
5. Under the **Admin Credentials** section, input your Hoxhunt Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Hoxhunt. If the connection fails, ensure your Hoxhunt account has Admin permissions and try again.
This section guides you through the steps to configure the Azure AD provisioning
6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
- ![Notification Email](common/provisioning-notification-email.png)
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
7. Select **Save**.
This section guides you through the steps to configure the Azure AD provisioning
9. Review the user attributes that are synchronized from Azure AD to Hoxhunt in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Hoxhunt for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Hoxhunt API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
- |Attribute|Type|Supported for filtering|
- ||||
- |userName|String|&check;|
- |emails[type eq "work"].value|String|
- |active|Boolean|
- |name.givenName|String|
- |name.familyName|String|
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
- |addresses[type eq "work"].country|String|
+ |Attribute|Type|Supported for filtering|Required by Hoxhunt|
+ |||||
+ |userName|String|&check;|&check;
+ |emails[type eq "work"].value|String||&check;
+ |active|Boolean||
+ |name.givenName|String||
+ |name.familyName|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
+ |addresses[type eq "work"].country|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String||
+ |preferredLanguage|String||
+ |addresses[type eq "work"].locality|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference||
+ 10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). 11. To enable the Azure AD provisioning service for Hoxhunt, change the **Provisioning Status** to **On** in the **Settings** section.
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
12. Define the users and/or groups that you would like to provision to Hoxhunt by choosing the desired values in **Scope** in the **Settings** section.
- ![Provisioning Scope](common/provisioning-scope.png)
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
13. When you are ready to provision, click **Save**.
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
Once you've configured provisioning, use the following resources to monitor your
* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ## Change Log
-* 04/20/2021 - Added support for "preferredLanguage" and enterprise extension attribute "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division".
+* 04/20/2021 - Added support for core user attribute **preferredLanguage** and enterprise extension attribute **urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division**.
+* 08/08/2023 - Added support for core user attribute **addresses[type eq "work"].locality** and enterprise extension attribute **urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager**.
## Additional resources
active-directory Sap Cloud Platform Identity Authentication Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial.md
Title: 'Tutorial: Configure SAP Business Technology Platform Identity Authentication for automatic user provisioning with Azure Active Directory'
-description: Learn how to configure Azure Active Directory to automatically provision and de-provision user accounts to SAP Business Technology Platform Identity Authentication.
+ Title: 'Tutorial: Configure SAP Cloud Identity Services for automatic user provisioning with Microsoft Entra ID'
+description: Learn how to configure Microsoft Entra ID to automatically provision and de-provision user accounts to SAP Cloud Identity Services.
writer: twimmers
Last updated 05/23/2023
-# Tutorial: Configure SAP Business Technology Platform Identity Authentication for automatic user provisioning
+# Tutorial: Configure SAP Cloud Identity Services for automatic user provisioning
-The objective of this tutorial is to demonstrate the steps to be performed in SAP Business Technology Platform Identity Authentication and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision users to SAP Business Technology Platform Identity Authentication.
+The objective of this tutorial is to demonstrate the steps to be performed in SAP Cloud Identity Services and Microsoft Entra ID (Azure AD) to configure Microsoft Entra ID to automatically provision and de-provision users to SAP Cloud Identity Services.
> [!NOTE]
-> This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+> This tutorial describes a connector built on top of the Microsoft Entra ID User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Microsoft Entra ID](../app-provisioning/user-provisioning.md).
> ## Prerequisites The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* An Azure AD tenant
-* [A SAP Business Technology Platform Identity Authentication tenant](https://www.sap.com/products/cloud-platform.html)
-* A user account in SAP Business Technology Platform Identity Authentication with Admin permissions.
+* A Microsoft Entra ID tenant
+* [An SAP Cloud Identity Services tenant](https://www.sap.com/products/cloud-platform.html)
+* A user account in SAP Cloud Identity Services with Admin permissions.
> [!NOTE]
-> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+> This integration is also available to use from Microsoft Entra ID US Government Cloud environment. You can find this application in the Microsoft Entra ID US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
-## Assigning users to SAP Business Technology Platform Identity Authentication
+## Assigning users to SAP Cloud Identity Services
-Azure Active Directory uses a concept called *assignments* to determine which users should receive access to selected apps. In the context of automatic user provisioning, only the users that have been assigned to an application in Azure AD are synchronized.
+Microsoft Entra ID uses a concept called *assignments* to determine which users should receive access to selected apps. In the context of automatic user provisioning, only the users that have been assigned to an application in Microsoft Entra ID are synchronized.
-Before configuring and enabling automatic user provisioning, you should decide which users in Azure AD need access to SAP Business Technology Platform Identity Authentication. Once decided, you can assign these users to SAP Business Technology Platform Identity Authentication by following the instructions here:
+Before configuring and enabling automatic user provisioning, you should decide which users in Microsoft Entra ID need access to SAP Cloud Identity Services. Once decided, you can assign these users to SAP Cloud Identity Services by following the instructions here:
* [Assign a user to an enterprise app](../manage-apps/assign-user-or-group-access-portal.md)
-## Important tips for assigning users to SAP Business Technology Platform Identity Authentication
+## Important tips for assigning users to SAP Cloud Identity Services
-* It is recommended that a single Azure AD user is assigned to SAP Business Technology Platform Identity Authentication to test the automatic user provisioning configuration. Additional users may be assigned later.
+* It is recommended that a single Microsoft Entra ID user is assigned to SAP Cloud Identity Services to test the automatic user provisioning configuration. Additional users may be assigned later.
-* When assigning a user to SAP Business Technology Platform Identity Authentication, you must select any valid application-specific role (if available) in the assignment dialog. Users with the **Default Access** role are excluded from provisioning.
+* When assigning a user to SAP Cloud Identity Services, you must select any valid application-specific role (if available) in the assignment dialog. Users with the **Default Access** role are excluded from provisioning.
-## Setup SAP Business Technology Platform Identity Authentication for provisioning
+## Set up SAP Cloud Identity Services for provisioning
-1. Sign in to your [SAP Business Technology Platform Identity Authentication Admin Console](https://sapmsftintegration.accounts.ondemand.com/admin). Navigate to **Users & Authorizations > Administrators**.
+1. Sign in to your [SAP Cloud Identity Services Admin Console](https://sapmsftintegration.accounts.ondemand.com/admin). Navigate to **Users & Authorizations > Administrators**.
- ![SAP Business Technology Platform Identity Authentication Admin Console](media/sap-cloud-platform-identity-authentication-provisioning-tutorial/adminconsole.png)
+ ![Screenshot of the SAP Cloud Identity Services Admin Console.](media/sap-cloud-platform-identity-authentication-provisioning-tutorial/adminconsole.png)
-2. Press the **+Add** button on the left hand panel in order to add a new administrator to the list. Choose **Add System** and enter the name of the system.
+1. Press the **+Add** button on the left hand panel in order to add a new administrator to the list. Choose **Add System** and enter the name of the system.
-> [!NOTE]
-> The administrator user in SAP Business Technology Platform Identity Authentication must be of type **System**. Creating a normal administrator user can lead to *unauthorized* errors while provisioning.
+ > [!NOTE]
+ > The administrator user in SAP Cloud Identity Services must be of type **System**. Creating a normal administrator user can lead to *unauthorized* errors while provisioning.
-3. Under Configure Authorizations, switch on the toggle button against **Manage Users**.
+1. Under Configure Authorizations, switch on the toggle button against **Manage Users**.
- ![SAP Business Technology Platform Identity Authentication Add SCIM](media/sap-cloud-platform-identity-authentication-provisioning-tutorial/configurationauth.png)
+ ![Screenshot of the SAP Cloud Identity Services Add SCIM.](media/sap-cloud-platform-identity-authentication-provisioning-tutorial/configurationauth.png)
-4. You will receive an email to activate your account and set a password for **SAP Business Technology Platform Identity Authentication Service**.
+1. You will receive an email to activate your account and set a password for **SAP Cloud Identity Services**.
-4. Copy the **User ID** and **Password**. These values will be entered in the Admin Username and Admin Password fields respectively in the Provisioning tab of your SAP Business Technology Platform Identity Authentication application in the Azure portal.
+1. Copy the **User ID** and **Password**. These values will be entered in the Admin Username and Admin Password fields respectively in the Provisioning tab of your SAP Cloud Identity Services application in the Azure portal.
-## Add SAP Business Technology Platform Identity Authentication from the gallery
+## Add SAP Cloud Identity Services from the gallery
-Before configuring SAP Business Technology Platform Identity Authentication for automatic user provisioning with Azure AD, you need to add SAP Business Technology Platform Identity Authentication from the Azure AD application gallery to your list of managed SaaS applications.
+Before configuring SAP Cloud Identity Services for automatic user provisioning with Microsoft Entra ID, you need to add SAP Cloud Identity Services from the Microsoft Entra ID application gallery to your list of managed SaaS applications.
-**To add SAP Business Technology Platform Identity Authentication from the Azure AD application gallery, perform the following steps:**
+**To add SAP Cloud Identity Services from the Microsoft Entra ID application gallery, perform the following steps:**
-1. In the **[Azure portal](https://portal.azure.com)**, in the left navigation panel, select **Azure Active Directory**.
+1. In the **[Azure portal](https://portal.azure.com)**, in the left navigation panel, select **Microsoft Entra ID**.
- ![The Azure Active Directory button](common/select-azuread.png)
+ ![Screenshot of the Microsoft Entra ID button.](common/select-azuread.png)
-2. Go to **Enterprise applications**, and then select **All applications**.
+1. Go to **Enterprise applications**, and then select **All applications**.
- ![The Enterprise applications blade](common/enterprise-applications.png)
+ ![Screenshot of the Enterprise applications blade.](common/enterprise-applications.png)
-3. To add a new application, select the **New application** button at the top of the pane.
+1. To add a new application, select the **New application** button at the top of the pane.
- ![The New application button](common/add-new-app.png)
+ ![Screenshot of the New application button.](common/add-new-app.png)
-4. In the search box, enter **SAP Business Technology Platform Identity Authentication**, select **SAP Business Technology Platform Identity Authentication** in the results panel, and then click the **Add** button to add the application.
+1. In the search box, enter **SAP Cloud Identity Services**, select **SAP Cloud Identity Services** in the results panel, and then click the **Add** button to add the application.
- ![SAP Business Technology Platform Identity Authentication in the results list](common/search-new-app.png)
+ ![Screenshot of the SAP Cloud Identity Services in the results list.](common/search-new-app.png)
-## Configuring automatic user provisioning to SAP Business Technology Platform Identity Authentication
+## Configuring automatic user provisioning to SAP Cloud Identity Services
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in SAP Business Technology Platform Identity Authentication based on users assignments in Azure AD.
+This section guides you through the steps to configure the Microsoft Entra ID provisioning service to create, update, and disable users in SAP Cloud Identity Services based on users assignments in Microsoft Entra ID.
> [!TIP]
-> You may also choose to enable SAML-based single sign-on for SAP Business Technology Platform Identity Authentication, following the instructions provided in the [SAP Business Technology Platform Identity Authentication Single sign-on tutorial](./sap-hana-cloud-platform-identity-authentication-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, though these two features compliment each other
+> You may also choose to enable SAML-based single sign-on for SAP Cloud Identity Services, following the instructions provided in the [SAP Cloud Identity Services Single sign-on tutorial](./sap-hana-cloud-platform-identity-authentication-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, though these two features complement each other.
-### To configure automatic user provisioning for SAP Business Technology Platform Identity Authentication in Azure AD:
+### To configure automatic user provisioning for SAP Cloud Identity Services in Microsoft Entra ID:
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
- ![Enterprise applications blade](common/enterprise-applications.png)
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
-2. In the applications list, select **SAP Business Technology Platform Identity Authentication**.
+1. In the applications list, select **SAP Cloud Identity Services**.
- ![The SAP Business Technology Platform Identity Authentication link in the Applications list](common/all-applications.png)
+ ![Screenshot of the SAP Cloud Identity Services link in the Applications list.](common/all-applications.png)
-3. Select the **Provisioning** tab.
+1. Select the **Provisioning** tab.
![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
-4. Set the **Provisioning Mode** to **Automatic**.
+1. Set the **Provisioning Mode** to **Automatic**.
![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input `https://<tenantID>.accounts.ondemand.com/service/scim ` in **Tenant URL**. Input the **User ID** and **Password** values retrieved earlier in **Admin Username** and **Admin Password** respectively. Click **Test Connection** to ensure Azure AD can connect to SAP Business Technology Platform Identity Authentication. If the connection fails, ensure your SAP Business Technology Platform Identity Authentication account has Admin permissions and try again.
+1. Under the **Admin Credentials** section, input `https://<tenantID>.accounts.ondemand.com/service/scim` in **Tenant URL**. Input the **User ID** and **Password** values retrieved earlier in **Admin Username** and **Admin Password** respectively. Click **Test Connection** to ensure Microsoft Entra ID can connect to SAP Cloud Identity Services. If the connection fails, ensure your SAP Cloud Identity Services account has Admin permissions and try again. (An optional PowerShell check of these values appears after the steps in this section.)
- ![Tenant URL + Token](media/sap-cloud-platform-identity-authentication-provisioning-tutorial/testconnection.png)
+ ![Screenshot of the Tenant URL and Token.](media/sap-cloud-platform-identity-authentication-provisioning-tutorial/testconnection.png)
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and check the checkbox - **Send an email notification when a failure occurs**.
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and check the checkbox - **Send an email notification when a failure occurs**.
- ![Notification Email](common/provisioning-notification-email.png)
+ ![Screenshot of the Notification Email.](common/provisioning-notification-email.png)
-7. Click **Save**.
+1. Click **Save**.
-8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to SAP Business Technology Platform Identity Authentication**.
+1. Under the **Mappings** section, select **Synchronize Microsoft Entra ID Users to SAP Cloud Identity Services**.
- ![SAP Business Technology Platform Identity Authentication User Mappings](media/sap-cloud-platform-identity-authentication-provisioning-tutorial/mapping.png)
+ ![Screenshot of the SAP Cloud Identity Services User Mappings.](media/sap-cloud-platform-identity-authentication-provisioning-tutorial/mapping.png)
-9. Review the user attributes that are synchronized from Azure AD to SAP Business Technology Platform Identity Authentication in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in SAP Business Technology Platform Identity Authentication for update operations. Select the **Save** button to commit any changes.
+1. Review the user attributes that are synchronized from Microsoft Entra ID to SAP Cloud Identity Services in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in SAP Cloud Identity Services for update operations. Select the **Save** button to commit any changes.
- ![SAP Business Technology Platform Identity Authentication User Attributes](media/sap-cloud-platform-identity-authentication-provisioning-tutorial/userattributes.png)
+ ![Screenshot of the SAP Cloud Identity Services User Attributes.](media/sap-cloud-platform-identity-authentication-provisioning-tutorial/userattributes.png)
-10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-11. To enable the Azure AD provisioning service for SAP Business Technology Platform Identity Authentication, change the **Provisioning Status** to **On** in the **Settings** section.
+1. To enable the Microsoft Entra ID provisioning service for SAP Cloud Identity Services, change the **Provisioning Status** to **On** in the **Settings** section.
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
-12. Define the users that you would like to provision to SAP Business Technology Platform Identity Authentication by choosing the desired values in **Scope** in the **Settings** section.
+1. Define the users that you would like to provision to SAP Cloud Identity Services by choosing the desired values in **Scope** in the **Settings** section.
- ![Provisioning Scope](common/provisioning-scope.png)
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
-13. When you are ready to provision, click **Save**.
+1. When you are ready to provision, click **Save**.
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
-This operation starts the initial synchronization of all users defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. You can use the **Synchronization Details** section to monitor progress and follow links to provisioning activity report, which describes all actions performed by the Azure AD provisioning service on SAP Business Technology Platform Identity Authentication.
+This operation starts the initial synchronization of all users defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the Microsoft Entra ID provisioning service is running. You can use the **Synchronization Details** section to monitor progress and follow links to provisioning activity report, which describes all actions performed by the Microsoft Entra ID provisioning service on SAP Cloud Identity Services.
-For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
+For more information on how to read the Microsoft Entra ID provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
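
As noted in the Admin Credentials step, you can optionally sanity-check the Tenant URL and system user credentials outside the portal before turning provisioning on. The following PowerShell sketch is illustrative only: it assumes the SCIM endpoint accepts HTTP basic authentication with the system user created earlier (which is what the portal's **Test Connection** uses) and returns a standard SCIM list response from `/Users`; the placeholder values are yours to supply.

```powershell
# Illustrative check only - assumes the SCIM endpoint accepts HTTP basic
# authentication with the system user created earlier. Replace placeholders.
$tenantUrl = "https://<tenantID>.accounts.ondemand.com/service/scim"   # your tenant URL
$userId    = "<system user ID>"                                        # Admin Username value
$password  = "<system user password>"                                  # Admin Password value

# Build a basic authentication header (works in Windows PowerShell and PowerShell 7).
$pair    = "{0}:{1}" -f $userId, $password
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($pair)) }

# Request a single user; a SCIM ListResponse with totalResults confirms the URL and credentials.
$response = Invoke-RestMethod -Method Get -Uri "$tenantUrl/Users?count=1" -Headers $headers
$response.totalResults
```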
## Connector limitations
-* SAP Business Technology Platform Identity Authentication's SCIM endpoint requires certain attributes to be of specific format. You can know more about these attributes and their specific format [here](https://help.sap.com/viewer/6d6d63354d1242d185ab4830fc04feb1/Cloud/en-US/b10fc6a9a37c488a82ce7489b1fab64c.html#).
+* The SAP Cloud Identity Services SCIM endpoint requires certain attributes to be in a specific format. You can learn more about these attributes and their specific format [here](https://help.sap.com/viewer/6d6d63354d1242d185ab4830fc04feb1/Cloud/en-US/b10fc6a9a37c488a82ce7489b1fab64c.html#).
## Additional resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
-* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [What is application access and single sign-on with Microsoft Entra ID?](../manage-apps/what-is-single-sign-on.md)
## Next steps
active-directory Slack Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/slack-tutorial.md
Previously updated : 01/25/2023 Last updated : 08/11/2023
Follow these steps to enable Azure AD SSO in the Azure portal.
| emailaddress | user.userprincipalname | | email | user.userprincipalname |
- > [!NOTE]
- > In order to set up the service provider (SP) configuration, you must click on **Expand** next to **Advanced Options** in the SAML configuration page. In the **Service Provider Issuer** box, enter the workspace URL. The default is slack.com.
- 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ![The Certificate download link](common/certificatebase64.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
c. Choose how the SAML response from your IDP is signed from the two options.
+ > [!NOTE]
+ > In order to set up the service provider (SP) configuration, you must click on **Expand** next to **Advanced Options** in the SAML configuration page. In the **Service Provider Issuer** box, enter the workspace URL. The default is slack.com.
+ 1. Under **Settings**, decide if members can edit their profile information (like their email or display name) after SSO is enabled. You can also choose whether SSO is required, partially required or optional. ![Screenshot of Configure Save configuration single sign-on On App Side.](./media/slack-tutorial/save-configuration-button.png)
active-directory Successfactors Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/successfactors-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
- `https://<companyname>.sapsf.cn/<companyname>` > [!NOTE]
- > These values are not real. Update these values with the actual Sign-on URL, Identifier and Reply URL. Contact [SuccessFactors Client support team](https://www.sap.com/support.html) to get these values.
+ > These values are not real. Update these values with the actual Sign-on URL, Identifier and Reply URL. Contact [SuccessFactors Client support team](https://www.sap.com/services-support.html) to get these values.
4. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
To enable Azure AD users to sign in to SuccessFactors, they must be provisioned into SuccessFactors. In the case of SuccessFactors, provisioning is a manual task.
-To get users created in SuccessFactors, you need to contact the [SuccessFactors support team](https://www.sap.com/support.html).
+To get users created in SuccessFactors, you need to contact the [SuccessFactors support team](https://www.sap.com/services-support.html).
## Test SSO
active-directory Xledger Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/xledger-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and Xledger](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Xledger to support provisioning with Azure AD
-Contact Xledger support to configure Xledger to support provisioning with Azure AD.
+
+1. Sign in to **Xledger** with the role of Domain Administrator (or similar) and navigate to **Administration > System Access > API Access Tokens**.
+
+1. Generate a Secret Token and take note of it.
+
+ ![Screenshot of API Access Tokens (new token).](media/xledger-provisioning-tutorial/generate-new-token.png)
+
+1. Take note of the Tenant URL.
+
+ ![Screenshot of API Access Token (api url).](media/xledger-provisioning-tutorial/generate-new-token-api-url.png)
+
+These values will be used in the Provisioning tab of your Xledger application in the Azure portal (Step 5).
## Step 3. Add Xledger from the Azure AD application gallery
active-directory Configure Cmmc Level 2 Additional Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-additional-controls.md
# Configure Azure Active Directory to meet CMMC Level 2
-Azure Active Directory helps meet identity-related practice requirements in each Cybersecurity Maturity Model Certification (CMMC) level. To be compliant with requirements in [CMMC V2.0 level 2](https://cmmc-coe.org/maturity-level-two/), it's the responsibility of companies performing work with, and on behalf of, the US Dept. of Defense (DoD) to complete other configurations or processes.
+Azure Active Directory helps meet identity-related practice requirements in each Cybersecurity Maturity Model Certification (CMMC) level. To be compliant with requirements in CMMC V2.0 level 2, it's the responsibility of companies performing work with, and on behalf of, the US Dept. of Defense (DoD) to complete other configurations or processes.
In CMMC Level 2, there are 13 domains that have one or more practices related to identity:
active-directory Configure Cmmc Level 2 Identification And Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-identification-and-authentication.md
# Configure CMMC Level 2 Identification and Authentication (IA) controls
-Azure Active Directory helps you meet identity-related practice requirements in each Cybersecurity Maturity Model Certification (CMMC) level. To complete other configurations or processes to be compliant with [CMMC V2.0 level 2](https://cmmc-coe.org/maturity-level-two/)requirements, is the responsibility of companies performing work with, and on behalf of, the US Dept. of Defense (DoD).
+Azure Active Directory helps you meet identity-related practice requirements in each Cybersecurity Maturity Model Certification (CMMC) level. Completing other configurations or processes to be compliant with CMMC V2.0 level 2 requirements is the responsibility of companies performing work with, and on behalf of, the US Dept. of Defense (DoD).
CMMC Level 2 has 13 domains that have one or more practices related to identity. The domains are:
The following table provides a list of practice statement and objectives, and Az
* [Configure Azure Active Directory for CMMC compliance](configure-for-cmmc-compliance.md) * [Configure CMMC Level 1 controls](configure-cmmc-level-1-controls.md) * [Configure CMMC Level 2 Access Control (AC) controls](configure-cmmc-level-2-access-control.md)
-* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
+* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
active-directory Configure For Fedramp High Impact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-for-fedramp-high-impact.md
There are multiple paths towards FedRAMP authorization. You can reuse the existi
## Scope of guidance
-The FedRAMP high baseline is made up of 421 controls and control enhancements from [NIST 800-53 Security Controls Catalog Revision 4](https://csrc.nist.gov/publications/detail/sp/800-53/rev-4/final). Where applicable, we included clarifying information from the [800-53 Revision 5](https://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final). This article set covers a subset of these controls that are related to identity, and which you must configure.
+The FedRAMP high baseline is made up of 421 controls and control enhancements from [NIST 800-53 Security Controls Catalog Revision 4](https://csrc.nist.gov/pubs/itlb/2015/01/release-of-nist-special-publication-80053a-revisio/final). Where applicable, we included clarifying information from the [800-53 Revision 5](https://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final). This article set covers a subset of these controls that are related to identity, and which you must configure.
We provide prescriptive guidance to help you achieve compliance with controls you're responsible for configuring in Azure Active Directory (Azure AD). To fully address some identity control requirements, you might need to use other systems. Other systems might include a security information and event management tool, such as Microsoft Sentinel. If you're using Azure services outside of Azure Active Directory, there will be other controls you need to consider, and you can use the capabilities Azure already has in place to meet the controls.
active-directory Hipaa Configure For Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/hipaa-configure-for-compliance.md
The remaining articles in this series provide guidance and links to resources, o
* [HHS Zero Trust in Healthcare pdf](https://www.hhs.gov/sites/default/files/zero-trust.pdf)
-* [Combined regulation text](https://www.hhs.gov/ocr/privacy/hipaa/administrative/combined/https://docsupdatetracker.net/index.html?language=es) of all HIPAA Administrative Simplification Regulations found at 45 CFR 160, 162, and 164
+* [Combined regulation text](https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/combined-regulation-text/index.html) of all HIPAA Administrative Simplification Regulations found at 45 CFR 160, 162, and 164
* [Code of Federal Regulations (CFR) Title 45](https://www.ecfr.gov/current/title-45) describing the public welfare portion of the regulation
active-directory Workload Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identities-overview.md
Previously updated : 03/08/2023 Last updated : 08/08/2023 -+ #Customer intent: As a developer, I want workload identities so I can authenticate with Azure AD and access Azure AD protected resources.
active-directory Workload Identity Federation Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-considerations.md
Previously updated : 04/07/2023 Last updated : 08/11/2023
ai-services Multivariate Anomaly Detection Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/tutorials/multivariate-anomaly-detection-synapse.md
Use this tutorial to detect anomalies among multiple variables in Azure Synapse Analytics in very large datasets and databases. This solution is perfect for scenarios like equipment predictive maintenance. The underlying power comes from the integration with [SynapseML](https://microsoft.github.io/SynapseML/), an open-source library that aims to simplify the creation of massively scalable machine learning pipelines. It can be installed and used on any Spark 3 infrastructure including your **local machine**, **Databricks**, **Synapse Analytics**, and others.
-For more information, see [SynapseML estimator for Multivariate Anomaly Detector](https://microsoft.github.io/SynapseML/docs/documentation/estimators/estimators_cognitive/#fitmultivariateanomaly).
- In this tutorial, you'll learn how to: > [!div class="checklist"]
If you have the need to run training code and inference code in separate noteboo
### About Anomaly Detector * Learn about [what is Multivariate Anomaly Detector](../overview.md).
-* SynapseML documentation with [Multivariate Anomaly Detector feature](https://microsoft.github.io/SynapseML/docs/documentation/estimators/estimators_cognitive/#fitmultivariateanomaly).
-* Recipe: [Azure AI services - Multivariate Anomaly Detector](https://microsoft.github.io/SynapseML/docs/features/cognitive_services/CognitiveServices%20-%20Multivariate%20Anomaly%20Detection/).
* Need support? [Join the Anomaly Detector Community](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR2Ci-wb6-iNDoBoNxrnEk9VURjNXUU1VREpOT0U1UEdURkc0OVRLSkZBNC4u). ### About Synapse
ai-services Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/quickstarts-sdk/client-library.md
Previously updated : 07/04/2023 Last updated : 08/07/2023 ms.devlang: csharp, golang, java, javascript, python
ai-services Changelog Release History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/changelog-release-history.md
monikerRange: '<=doc-intel-3.1.0'
<!-- markdownlint-disable MD001 --> <!-- markdownlint-disable MD033 --> <!-- markdownlint-disable MD051 -->
+<!-- markdownlint-disable MD024 -->
-# Changelog and release history
+# SDK changelog and release history
This reference article provides a version-based description of Document Intelligence feature and capability releases, changes, updates, and enhancements.
-#### Document Intelligence SDK April 2023 preview release
+#### August 2023 (GA) release
+
+### [**C#**](#tab/csharp)
+
+* **Version 4.1.0 (2023-08-10)**
+* **Targets API version 2023-07-31 by default**
+* **Version 2023-02-28-preview is no longer supported**
+* [**Breaking changes**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#breaking-changes-1)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
+
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/README.md)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/README.md)
+
+### [**Java**](#tab/java)
+
+* **4.1.0 (2023-08-10)**
+* **Targets API version 2023-07-31 by default**
+* **Version 2023-02-28-preview is no longer supported**
+* [**Breaking changes**](https://github.com/Azure/azure-sdk-for-jav#breaking-changes)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
+
+[**Package (MVN)**](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples#readme)
+
+### [**JavaScript**](#tab/javascript)
+
+* **Version 5.0.0 (2023-08-08)**
+* **Targets API version 2023-07-31 by default**
+* **Version 2023-02-28-preview is no longer supported**
+* [**Breaking changes**](https://github.com/witemple-msft/azure-sdk-for-js/blob/ai-form-recognizer/5.0.0-release/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md#breaking-changes)
+
+[**Changelog/Release History**](https://github.com/witemple-msft/azure-sdk-for-js/blob/ai-form-recognizer/5.0.0-release/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
+
+[**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer)
+
+[**ReadMe**](https://github.com/witemple-msft/azure-sdk-for-js/blob/ai-form-recognizer/5.0.0-release/sdk/formrecognizer/ai-form-recognizer/README.md)
+
+[**Samples**](https://github.com/witemple-msft/azure-sdk-for-js/tree/ai-form-recognizer/5.0.0-release/sdk/formrecognizer/ai-form-recognizer/samples/v5)
+
+### [**Python**](#tab/python)
+
+* **Version 3.3.0 (2023-08-08)**
+* **Targets API version 2023-07-31 by default**
+* **Version 2023-02-28-preview is no longer supported**
+* [**Breaking changes**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.3.0/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md#breaking-changes)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.3.0/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
+
+[**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.3.0/)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.3.0/sdk/formrecognizer/azure-ai-formrecognizer/README.md)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.3.0/sdk/formrecognizer/azure-ai-formrecognizer/samples)
+++
+#### April 2023 (preview) release
This release includes the following updates:
This release includes the following updates:
* **Targets 2023-02-28-preview by default** * **No breaking changes**
-[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0-beta.1)
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#410-beta1-2023-04-13)
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0-beta.1)
+ [**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/README.md) [**Samples**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/README.md)
This release includes the following updates:
* **Targets 2023-02-28-preview by default** * **No breaking changes**
-[**Package (MVN)**](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0-beta.1)
[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#410-beta1-2023-04-12)
+[**Package (MVN)**](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0-beta.1)
+ [**ReadMe**](https://github.com/Azure/azure-sdk-for-jav) [**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples#readme)
This release includes the following updates:
* **Targets 2023-02-28-preview by default** * **No breaking changes**
-[**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.1.0-beta.1)
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/a162daee4be05eadff0be1caa7fb2071960bbf44/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md#410-beta1-2023-04-11)
+[**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.1.0-beta.1)
+ [**ReadMe**](https://github.com/Azure/azure-sdk-for-js/blob/a162daee4be05eadff0be1caa7fb2071960bbf44/sdk/formrecognizer/ai-form-recognizer/README.md) [**Samples**](https://github.com/Azure/azure-sdk-for-js/tree/a162daee4be05eadff0be1caa7fb2071960bbf44/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta)
This release includes the following updates:
* **Targets 2023-02-28-preview by default** * **No breaking changes**
-[**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.3.0b1/)
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.3.0b1/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md#330b1-2023-04-13)
+[**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.3.0b1/)
+ [**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.3.0b1/sdk/formrecognizer/azure-ai-formrecognizer/README.md) [**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.3.0b1/sdk/formrecognizer/azure-ai-formrecognizer/samples)
-#### Document Intelligence SDK September 2022 (GA) release
+#### September 2022 (GA) release
This release includes the following updates:
This release includes the following updates:
* **Version 4.0.0 GA (2022-09-08)** * **Supports REST API v3.0 and v2.0 clients**
-[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0)
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0)
+ [**Migration guide**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/MigrationGuide.md) [**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/README.md)
This release includes the following updates:
* **Version 4.0.0 GA (2022-09-08)** * **Supports REST API v3.0 and v2.0 clients**
-[**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer)
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
+[**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer)
+ [**Migration guide**](https://github.com/Azure/azure-sdk-for-jav) [**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
This release includes the following updates:
* **Version 4.0.0 GA (2022-09-08)** * **Supports REST API v3.0 and v2.0 clients**
-[**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer)
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
+[**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer)
+ [**Migration guide**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/MIGRATION-v3_v4.md) [**ReadMe**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/README.md)
This release includes the following updates:
* **Version 3.2.0 GA (2022-09-08)** * **Supports REST API v3.0 and v2.0 clients**
-[**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
+[**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)
+ [**Migration guide**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/MIGRATION_GUIDE.md) [**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/README.md)
This release includes the following updates:
----
-#### Document Intelligence SDK beta August 2022 preview release
+#### August 2022 (preview) release
This release includes the following updates:
This release includes the following updates:
----
-### Document Intelligence SDK beta June 2022 preview release
+### June 2022 (preview) release
This release includes the following updates:
This release includes the following updates:
[**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true) -----+
ai-services Concept Add On Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-add-on-capabilities.md
The task of recognizing small text from large-size documents, like engineering d
## Barcode extraction
-The Read OCR model extracts all identified barcodes in the `barcodes` collection as a top level object under `content`. Inside the `content`, detected barcodes are represented as `:barcode:`. Each entry in this collection represents a barcode and includes the barcode type as `kind` and the embedded barcode content as `value` along with its `polygon` coordinates. Initially, barcodes appear at the end of each page. Here, the `confidence` is hard-coded for the public preview (`2023-02-28`) release.
+The Read OCR model extracts all identified barcodes in the `barcodes` collection as a top level object under `content`. Inside the `content`, detected barcodes are represented as `:barcode:`. Each entry in this collection represents a barcode and includes the barcode type as `kind` and the embedded barcode content as `value` along with its `polygon` coordinates. Initially, barcodes appear at the end of each page. Here, the `confidence` is hard-coded for the API (GA) version (`2023-07-31`).
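To make the shape of that output concrete, here's a minimal Python sketch (not from the source article) that requests barcode extraction and walks the returned collection; it assumes the `azure-ai-formrecognizer` 3.3.0 package and that the `AnalysisFeature.BARCODES` flag and the per-page `barcodes` collection are exposed as shown:

```python
from azure.ai.formrecognizer import AnalysisFeature, DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key; substitute your own resource values.
client = DocumentAnalysisClient("<your-endpoint>", AzureKeyCredential("<your-key>"))

with open("sample-with-barcodes.pdf", "rb") as f:
    # Request the barcode add-on capability (feature flag name assumed for this sketch).
    poller = client.begin_analyze_document(
        "prebuilt-read", document=f, features=[AnalysisFeature.BARCODES]
    )
result = poller.result()

for page in result.pages:
    for barcode in page.barcodes or []:
        # Each entry exposes the barcode type (kind), decoded content (value),
        # bounding polygon coordinates, and a confidence score.
        print(barcode.kind, barcode.value, barcode.confidence)
```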
### Supported barcode types
ai-services Concept Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-classifier.md
Custom classification models require a minimum of five samples per class to trai
## Training a model
-Custom classification models are only available in the [v3.1 API](v3-1-migration-guide.md) starting with API version ```2023-02-28-preview```. [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) provides a no-code user interface to interactively train a custom classifier.
+Custom classification models are only available in the [v3.1 API](v3-1-migration-guide.md) version ```2023-07-31```. [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) provides a no-code user interface to interactively train a custom classifier.
When using the REST API, if you've organized your documents by folders, you can use the ```azureBlobSource``` property of the request to train a classification model.
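For illustration only, a hedged Python sketch of that REST call follows; the `documentClassifiers:build` route, the `2023-07-31` api-version, and the payload shape are assumptions inferred from the v3.1 API described above, so verify them against the REST reference before relying on them:

```python
import requests

endpoint = "<your-endpoint>"  # placeholder, e.g. https://<resource>.cognitiveservices.azure.com
key = "<your-key>"            # placeholder resource key

# Each document type points at a folder (prefix) in the training container
# through the azureBlobSource property.
body = {
    "classifierId": "my-classifier",
    "docTypes": {
        "invoice": {"azureBlobSource": {"containerUrl": "<container-sas-url>", "prefix": "invoice/"}},
        "receipt": {"azureBlobSource": {"containerUrl": "<container-sas-url>", "prefix": "receipt/"}},
    },
}

response = requests.post(
    f"{endpoint}/formrecognizer/documentClassifiers:build",
    params={"api-version": "2023-07-31"},
    headers={"Ocp-Apim-Subscription-Key": key},
    json=body,
)
response.raise_for_status()
# Classifier builds run asynchronously; poll the returned Operation-Location URL for status.
print(response.headers.get("Operation-Location"))
```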
ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-neural.md
monikerRange: '>=doc-intel-3.0.0'
[!INCLUDE [applies to v3.1 and v3.0](includes/applies-to-v3-1-v3-0.md)] - Custom neural document models, or neural models, are a deep-learned model type that combines layout and language features to accurately extract labeled fields from documents. The base custom neural model is trained on various document types, which makes it suitable for training to extract fields from structured, semi-structured, and unstructured documents. The table below lists common document types for each category: |Documents | Examples |
Neural models support documents that have the same information, but different pa
## Supported languages and locales
-1. Neural models now support added languages in the ```v3.1 and v3.0``` APIs.
+>[!NOTE]
+> Document Intelligence auto-detects language and locale data.
++
+Neural models now support added languages for the ```v3.1``` APIs.
+
+|Language| Code (optional) |
+|:--|:-:|
+|Afrikaans| `af`|
+|Albanian| `sq`|
+|Arabic|`ar`|
+|Bulgarian|`bg`|
+|Chinese (Han (Simplified variant))| `zh-Hans`|
+|Chinese (Han (Traditional variant))|`zh-Hant`|
+|Croatian|`hr`|
+|Czech|`cs`|
+|Danish|`da`|
+|Dutch|`nl`|
+|Estonian|`et`|
+|Finnish|`fi`|
+|French|`fr`|
+|German|`de`|
+|Hebrew|`he`|
+|Hindi|`hi`|
+|Hungarian|`hu`|
+|Indonesian|`id`|
+|Italian|`it`|
+|Japanese|`ja`|
+|Korean|`ko`|
+|Latvian|`lv`|
+|Lithuanian|`lt`|
+|Macedonian|`mk`|
+|Marathi|`mr`|
+|Modern Greek (1453-)|`el`|
+|Nepali (macrolanguage)|`ne`|
+|Norwegian|`no`|
+|Panjabi|`pa`|
+|Persian|`fa`|
+|Polish|`pl`|
+|Portuguese|`pt`|
+|Romanian|`ro`|
+|Russian|`ru`|
+|Slovak|`sk`|
+|Slovenian|`sl`|
+|Somali (Arabic)|`so`|
+|Somali (Latin)|`so-latn`|
+|Spanish|`es`|
+|Swahili (macrolanguage)|`sw`|
+|Swedish|`sv`|
+|Tamil|`ta`|
+|Thai|`th`|
+|Turkish|`tr`|
+|Ukrainian|`uk`|
+|Urdu|`ur`|
+|Vietnamese|`vi`|
+++
+Neural models now support added languages for the ```v3.0``` APIs.
| Languages | API version | |:--:|:--:|
Neural models support documents that have the same information, but different pa
| Spanish | `2023-07-31` (GA)| | Dutch | `2023-07-31` (GA)| + ## Tabular fields With the release of API versions **2022-06-30-preview** and later, custom neural models will support tabular fields (tables):
ai-services Concept Custom Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-template.md
The following table lists the supported languages for print text by the most rec
:::row::: :::column span="":::
- |Language| Code (optional) |
+ |Language| Code (optional) |
|:--|:-:|
- |Abaza|abq|
- |Abkhazian|ab|
- |Achinese|ace|
- |Acoli|ach|
- |Adangme|ada|
- |Adyghe|ady|
- |Afar|aa|
- |Afrikaans|af|
- |Akan|ak|
- |Albanian|sq|
- |Algonquin|alq|
- |Angika (Devanagari)|anp|
- |Arabic|ar|
- |Asturian|ast|
- |Asu (Tanzania)|asa|
- |Avaric|av|
- |Awadhi-Hindi (Devanagari)|awa|
- |Aymara|ay|
- |Azerbaijani (Latin)|az|
- |Bafia|ksf|
- |Bagheli|bfy|
- |Bambara|bm|
- |Bashkir|ba|
- |Basque|eu|
+ |Abaza|`abq`|
+ |Abkhazian|`ab`|
+ |Achinese|`ace`|
+ |Acoli|`ach`|
+ |Adangme|`ada`|
+ |Adyghe|`ady`|
+ |Afar|`aa`|
+ |Afrikaans|`af`|
+ |Akan|`ak`|
+ |Albanian|`sq`|
+ |Algonquin|`alq`|
+ |Angika (Devanagari)|`anp`|
+ |Arabic|`ar`|
+ |Asturian|`ast`|
+ |Asu (Tanzania)|`asa`|
+ |Avaric|`av`|
+ |Awadhi-Hindi (Devanagari)|`awa`|
+ |Aymara|`ay`|
+ |Azerbaijani (Latin)|`az`|
+ |Bafia|`ksf`|
+ |Bagheli|`bfy`|
+ |Bambara|`bm`|
+ |Bashkir|`ba`|
+ |Basque|`eu`|
|Belarusian (Cyrillic)|be, be-cyrl| |Belarusian (Latin)|be, be-latn|
- |Bemba (Zambia)|bem|
- |Bena (Tanzania)|bez|
- |Bhojpuri-Hindi (Devanagari)|bho|
- |Bikol|bik|
- |Bini|bin|
- |Bislama|bi|
- |Bodo (Devanagari)|brx|
- |Bosnian (Latin)|bs|
- |Brajbha|bra|
- |Breton|br|
- |Bulgarian|bg|
- |Bundeli|bns|
- |Buryat (Cyrillic)|bua|
- |Catalan|ca|
- |Cebuano|ceb|
- |Chamling|rab|
- |Chamorro|ch|
- |Chechen|ce|
- |Chhattisgarhi (Devanagari)|hne|
- |Chiga|cgg|
- |Chinese Simplified|zh-Hans|
- |Chinese Traditional|zh-Hant|
- |Choctaw|cho|
- |Chukot|ckt|
- |Chuvash|cv|
- |Cornish|kw|
- |Corsican|co|
- |Cree|cr|
- |Creek|mus|
- |Crimean Tatar (Latin)|crh|
- |Croatian|hr|
- |Crow|cro|
- |Czech|cs|
- |Danish|da|
- |Dargwa|dar|
- |Dari|prs|
- |Dhimal (Devanagari)|dhi|
- |Dogri (Devanagari)|doi|
- |Duala|dua|
- |Dungan|dng|
- |Dutch|nl|
- |Efik|efi|
- |English|en|
- |Erzya (Cyrillic)|myv|
- |Estonian|et|
- |Faroese|fo|
- |Fijian|fj|
- |Filipino|fil|
- |Finnish|fi|
+ |Bemba (Zambia)|`bem`|
+ |Bena (Tanzania)|`bez`|
+ |Bhojpuri-Hindi (Devanagari)|`bho`|
+ |Bikol|`bik`|
+ |Bini|`bin`|
+ |Bislama|`bi`|
+ |Bodo (Devanagari)|`brx`|
+ |Bosnian (Latin)|`bs`|
+ |Brajbha|`bra`|
+ |Breton|`br`|
+ |Bulgarian|`bg`|
+ |Bundeli|`bns`|
+ |Buryat (Cyrillic)|`bua`|
+ |Catalan|`ca`|
+ |Cebuano|`ceb`|
+ |Chamling|`rab`|
+ |Chamorro|`ch`|
+ |Chechen|`ce`|
+ |Chhattisgarhi (Devanagari)|`hne`|
+ |Chiga|`cgg`|
+ |Chinese Simplified|`zh-Hans`|
+ |Chinese Traditional|`zh-Hant`|
+ |Choctaw|`cho`|
+ |Chukot|`ckt`|
+ |Chuvash|`cv`|
+ |Cornish|`kw`|
+ |Corsican|`co`|
+ |Cree|`cr`|
+ |Creek|`mus`|
+ |Crimean Tatar (Latin)|`crh`|
+ |Croatian|`hr`|
+ |Crow|`cro`|
+ |Czech|`cs`|
+ |Danish|`da`|
+ |Dargwa|`dar`|
+ |Dari|`prs`|
+ |Dhimal (Devanagari)|`dhi`|
+ |Dogri (Devanagari)|`doi`|
+ |Duala|`dua`|
+ |Dungan|`dng`|
+ |Dutch|`nl`|
+ |Efik|`efi`|
+ |English|`en`|
+ |Erzya (Cyrillic)|`myv`|
+ |Estonian|`et`|
+ |Faroese|`fo`|
+ |Fijian|`fj`|
+ |Filipino|`fil`|
+ |Finnish|`fi`|
:::column-end::: :::column span="":::
- |Language| Code (optional) |
+ |Language| Code (optional) |
|:--|:-:|
- |Fon|fon|
- |French|fr|
- |Friulian|fur|
- |Ga|gaa|
- |Gagauz (Latin)|gag|
- |Galician|gl|
- |Ganda|lg|
- |Gayo|gay|
- |German|de|
- |Gilbertese|gil|
- |Gondi (Devanagari)|gon|
- |Greek|el|
- |Greenlandic|kl|
- |Guarani|gn|
- |Gurung (Devanagari)|gvr|
- |Gusii|guz|
- |Haitian Creole|ht|
- |Halbi (Devanagari)|hlb|
- |Hani|hni|
- |Haryanvi|bgc|
- |Hawaiian|haw|
- |Hebrew|he|
- |Herero|hz|
- |Hiligaynon|hil|
- |Hindi|hi|
- |Hmong Daw (Latin)|mww|
- |Ho(Devanagiri)|hoc|
- |Hungarian|hu|
- |Iban|iba|
- |Icelandic|is|
- |Igbo|ig|
- |Iloko|ilo|
- |Inari Sami|smn|
- |Indonesian|id|
- |Ingush|inh|
- |Interlingua|ia|
- |Inuktitut (Latin)|iu|
- |Irish|ga|
- |Italian|it|
- |Japanese|ja|
- |Jaunsari (Devanagari)|Jns|
- |Javanese|jv|
- |Jola-Fonyi|dyo|
- |Kabardian|kbd|
- |Kabuverdianu|kea|
- |Kachin (Latin)|kac|
- |Kalenjin|kln|
- |Kalmyk|xal|
- |Kangri (Devanagari)|xnr|
- |Kanuri|kr|
- |Karachay-Balkar|krc|
+ |`Fon`|`fon`|
+ |French|`fr`|
+ |Friulian|`fur`|
+ |`Ga`|`gaa`|
+ |Gagauz (Latin)|`gag`|
+ |Galician|`gl`|
+ |Ganda|`lg`|
+ |Gayo|`gay`|
+ |German|`de`|
+ |Gilbertese|`gil`|
+ |Gondi (Devanagari)|`gon`|
+ |Greek|`el`|
+ |Greenlandic|`kl`|
+ |Guarani|`gn`|
+ |Gurung (Devanagari)|`gvr`|
+ |Gusii|`guz`|
+ |Haitian Creole|`ht`|
+ |Halbi (Devanagari)|`hlb`|
+ |Hani|`hni`|
+ |Haryanvi|`bgc`|
+ |Hawaiian|`haw`|
+ |Hebrew|`he`|
+ |Herero|`hz`|
+ |Hiligaynon|`hil`|
+ |Hindi|`hi`|
+ |Hmong Daw (Latin)|`mww`|
+ |Ho(Devanagiri)|`hoc`|
+ |Hungarian|`hu`|
+ |Iban|`iba`|
+ |Icelandic|`is`|
+ |Igbo|`ig`|
+ |Iloko|`ilo`|
+ |Inari Sami|`smn`|
+ |Indonesian|`id`|
+ |Ingush|`inh`|
+ |Interlingua|`ia`|
+ |Inuktitut (Latin)|`iu`|
+ |Irish|`ga`|
+ |Italian|`it`|
+ |Japanese|`ja`|
+ |Jaunsari (Devanagari)|`Jns`|
+ |Javanese|`jv`|
+ |Jola-Fonyi|`dyo`|
+ |Kabardian|`kbd`|
+ |Kabuverdianu|`kea`|
+ |Kachin (Latin)|`kac`|
+ |Kalenjin|`kln`|
+ |Kalmyk|`xal`|
+ |Kangri (Devanagari)|`xnr`|
+ |Kanuri|`kr`|
+ |Karachay-Balkar|`krc`|
|Kara-Kalpak (Cyrillic)|kaa-cyrl|
- |Kara-Kalpak (Latin)|kaa|
- |Kashubian|csb|
+ |Kara-Kalpak (Latin)|`kaa`|
+ |Kashubian|`csb`|
|Kazakh (Cyrillic)|kk-cyrl| |Kazakh (Latin)|kk-latn|
- |Khakas|kjh|
- |Khaling|klr|
- |Khasi|kha|
- |K'iche'|quc|
- |Kikuyu|ki|
- |Kildin Sami|sjd|
- |Kinyarwanda|rw|
- |Komi|kv|
- |Kongo|kg|
- |Korean|ko|
- |Korku|kfq|
- |Koryak|kpy|
- |Kosraean|kos|
- |Kpelle|kpe|
- |Kuanyama|kj|
- |Kumyk (Cyrillic)|kum|
+ |Khakas|`kjh`|
+ |Khaling|`klr`|
+ |Khasi|`kha`|
+ |K'iche'|`quc`|
+ |Kikuyu|`ki`|
+ |Kildin Sami|`sjd`|
+ |Kinyarwanda|`rw`|
+ |Komi|`kv`|
+ |Kongo|`kg`|
+ |Korean|`ko`|
+ |Korku|`kfq`|
+ |Koryak|`kpy`|
+ |Kosraean|`kos`|
+ |Kpelle|`kpe`|
+ |Kuanyama|`kj`|
+ |Kumyk (Cyrillic)|`kum`|
|Kurdish (Arabic)|ku-arab| |Kurdish (Latin)|ku-latn| :::column-end::: :::column span="":::
- |Language| Code (optional) |
+ |Language| Code (optional) |
|:--|:-:|
- |Kurukh (Devanagari)|kru|
- |Kyrgyz (Cyrillic)|ky|
- |Lak|lbe|
- |Lakota|lkt|
- |Latin|la|
- |Latvian|lv|
- |Lezghian|lex|
- |Lingala|ln|
- |Lithuanian|lt|
- |Lower Sorbian|dsb|
- |Lozi|loz|
- |Lule Sami|smj|
- |Luo (Kenya and Tanzania)|luo|
- |Luxembourgish|lb|
- |Luyia|luy|
- |Macedonian|mk|
- |Machame|jmc|
- |Madurese|mad|
- |Mahasu Pahari (Devanagari)|bfz|
- |Makhuwa-Meetto|mgh|
- |Makonde|kde|
- |Malagasy|mg|
- |Malay (Latin)|ms|
- |Maltese|mt|
- |Malto (Devanagari)|kmj|
- |Mandinka|mnk|
- |Manx|gv|
- |Maori|mi|
- |Mapudungun|arn|
- |Marathi|mr|
- |Mari (Russia)|chm|
- |Masai|mas|
- |Mende (Sierra Leone)|men|
- |Meru|mer|
- |Meta'|mgo|
- |Minangkabau|min|
- |Mohawk|moh|
- |Mongolian (Cyrillic)|mn|
- |Mongondow|mog|
+ |Kurukh (Devanagari)|`kru`|
+ |Kyrgyz (Cyrillic)|`ky`|
+ |`Lak`|`lbe`|
+ |Lakota|`lkt`|
+ |Latin|`la`|
+ |Latvian|`lv`|
+ |Lezghian|`lex`|
+ |Lingala|`ln`|
+ |Lithuanian|`lt`|
+ |Lower Sorbian|`dsb`|
+ |Lozi|`loz`|
+ |Lule Sami|`smj`|
+ |Luo (Kenya and Tanzania)|`luo`|
+ |Luxembourgish|`lb`|
+ |Luyia|`luy`|
+ |Macedonian|`mk`|
+ |Machame|`jmc`|
+ |Madurese|`mad`|
+ |Mahasu Pahari (Devanagari)|`bfz`|
+ |Makhuwa-Meetto|`mgh`|
+ |Makonde|`kde`|
+ |Malagasy|`mg`|
+ |Malay (Latin)|`ms`|
+ |Maltese|`mt`|
+ |Malto (Devanagari)|`kmj`|
+ |Mandinka|`mnk`|
+ |Manx|`gv`|
+ |Maori|`mi`|
+ |Mapudungun|`arn`|
+ |Marathi|`mr`|
+ |Mari (Russia)|`chm`|
+ |Masai|`mas`|
+ |Mende (Sierra Leone)|`men`|
+ |Meru|`mer`|
+ |Meta'|`mgo`|
+ |Minangkabau|`min`|
+ |Mohawk|`moh`|
+ |Mongolian (Cyrillic)|`mn`|
+ |Mongondow|`mog`|
|Montenegrin (Cyrillic)|cnr-cyrl| |Montenegrin (Latin)|cnr-latn|
- |Morisyen|mfe|
- |Mundang|mua|
- |Nahuatl|nah|
- |Navajo|nv|
- |Ndonga|ng|
- |Neapolitan|nap|
- |Nepali|ne|
- |Ngomba|jgo|
- |Niuean|niu|
- |Nogay|nog|
- |North Ndebele|nd|
- |Northern Sami (Latin)|sme|
- |Norwegian|no|
- |Nyanja|ny|
- |Nyankole|nyn|
- |Nzima|nzi|
- |Occitan|oc|
- |Ojibwa|oj|
- |Oromo|om|
- |Ossetic|os|
- |Pampanga|pam|
- |Pangasinan|pag|
- |Papiamento|pap|
- |Pashto|ps|
- |Pedi|nso|
- |Persian|fa|
- |Polish|pl|
- |Portuguese|pt|
- |Punjabi (Arabic)|pa|
- |Quechua|qu|
- |Ripuarian|ksh|
- |Romanian|ro|
- |Romansh|rm|
- |Rundi|rn|
- |Russian|ru|
+ |Morisyen|`mfe`|
+ |Mundang|`mua`|
+ |Nahuatl|`nah`|
+ |Navajo|`nv`|
+ |Ndonga|`ng`|
+ |Neapolitan|`nap`|
+ |Nepali|`ne`|
+ |Ngomba|`jgo`|
+ |Niuean|`niu`|
+ |Nogay|`nog`|
+ |North Ndebele|`nd`|
+ |Northern Sami (Latin)|`sme`|
+ |Norwegian|`no`|
+ |Nyanja|`ny`|
+ |Nyankole|`nyn`|
+ |Nzima|`nzi`|
+ |Occitan|`oc`|
+ |Ojibwa|`oj`|
+ |Oromo|`om`|
+ |Ossetic|`os`|
+ |Pampanga|`pam`|
+ |Pangasinan|`pag`|
+ |Papiamento|`pap`|
+ |Pashto|`ps`|
+ |Pedi|`nso`|
+ |Persian|`fa`|
+ |Polish|`pl`|
+ |Portuguese|`pt`|
+ |Punjabi (Arabic)|`pa`|
+ |Quechua|`qu`|
+ |Ripuarian|`ksh`|
+ |Romanian|`ro`|
+ |Romansh|`rm`|
+ |Rundi|`rn`|
+ |Russian|`ru`|
:::column-end::: :::column span="":::
- |Language| Code (optional) |
+ |Language| Code (optional) |
|:--|:-:|
- |Rwa|rwk|
- |Sadri (Devanagari)|sck|
- |Samburu|saq|
- |Samoan (Latin)|sm|
- |Sango|sg|
- |Sangu (Gabon)|snq|
- |Sanskrit (Devanagari)|sa|
- |Santali(Devanagiri)|sat|
- |Scots|sco|
- |Scottish Gaelic|gd|
- |Sena|seh|
+ |`Rwa`|`rwk`|
+ |Sadri (Devanagari)|`sck`|
+ |Samburu|`saq`|
+ |Samoan (Latin)|`sm`|
+ |Sango|`sg`|
+ |Sangu (Gabon)|`snq`|
+ |Sanskrit (Devanagari)|`sa`|
+ |Santali(Devanagiri)|`sat`|
+ |Scots|`sco`|
+ |Scottish Gaelic|`gd`|
+ |Sena|`seh`|
|Serbian (Cyrillic)|sr-cyrl| |Serbian (Latin)|sr, sr-latn|
- |Shambala|ksb|
- |Sherpa (Devanagari)|xsr|
- |Shona|sn|
- |Siksika|bla|
- |Sirmauri (Devanagari)|srx|
- |Skolt Sami|sms|
- |Slovak|sk|
- |Slovenian|sl|
- |Soga|xog|
- |Somali (Arabic)|so|
- |Somali (Latin)|so-latn|
- |Songhai|son|
- |South Ndebele|nr|
- |Southern Altai|alt|
- |Southern Sami|sma|
- |Southern Sotho|st|
- |Spanish|es|
- |Sundanese|su|
- |Swahili (Latin)|sw|
- |Swati|ss|
- |Swedish|sv|
- |Tabassaran|tab|
- |Tachelhit|shi|
- |Tahitian|ty|
- |Taita|dav|
- |Tajik (Cyrillic)|tg|
- |Tamil|ta|
+ |Shambala|`ksb`|
+ |Sherpa (Devanagari)|`xsr`|
+ |Shona|`sn`|
+ |Siksika|`bla`|
+ |Sirmauri (Devanagari)|`srx`|
+ |Skolt Sami|`sms`|
+ |Slovak|`sk`|
+ |Slovenian|`sl`|
+ |Soga|`xog`|
+ |Somali (Arabic)|`so`|
+ |Somali (Latin)|`so-latn`|
+ |Songhai|`son`|
+ |South Ndebele|`nr`|
+ |Southern Altai|`alt`|
+ |Southern Sami|`sma`|
+ |Southern Sotho|`st`|
+ |Spanish|`es`|
+ |Sundanese|`su`|
+ |Swahili (Latin)|`sw`|
+ |Swati|`ss`|
+ |Swedish|`sv`|
+ |Tabassaran|`tab`|
+ |Tachelhit|`shi`|
+ |Tahitian|`ty`|
+ |Taita|`dav`|
+ |Tajik (Cyrillic)|`tg`|
+ |Tamil|`ta`|
|Tatar (Cyrillic)|tt-cyrl|
- |Tatar (Latin)|tt|
- |Teso|teo|
- |Tetum|tet|
- |Thai|th|
- |Thangmi|thf|
- |Tok Pisin|tpi|
- |Tongan|to|
- |Tsonga|ts|
- |Tswana|tn|
- |Turkish|tr|
- |Turkmen (Latin)|tk|
- |Tuvan|tyv|
- |Udmurt|udm|
+ |Tatar (Latin)|`tt`|
+ |Teso|`teo`|
+ |Tetum|`tet`|
+ |Thai|`th`|
+ |Thangmi|`thf`|
+ |Tok Pisin|`tpi`|
+ |Tongan|`to`|
+ |Tsonga|`ts`|
+ |Tswana|`tn`|
+ |Turkish|`tr`|
+ |Turkmen (Latin)|`tk`|
+ |Tuvan|`tyv`|
+ |Udmurt|`udm`|
|Uighur (Cyrillic)|ug-cyrl|
- |Ukrainian|uk|
- |Upper Sorbian|hsb|
- |Urdu|ur|
- |Uyghur (Arabic)|ug|
+ |Ukrainian|`uk`|
+ |Upper Sorbian|`hsb`|
+ |Urdu|`ur`|
+ |Uyghur (Arabic)|`ug`|
|Uzbek (Arabic)|uz-arab| |Uzbek (Cyrillic)|uz-cyrl|
- |Uzbek (Latin)|uz|
- |Vietnamese|vi|
- |Volapük|vo|
- |Vunjo|vun|
- |Walser|wae|
- |Welsh|cy|
- |Western Frisian|fy|
- |Wolof|wo|
- |Xhosa|xh|
- |Yakut|sah|
- |Yucatec Maya|yua|
- |Zapotec|zap|
- |Zarma|dje|
- |Zhuang|za|
- |Zulu|zu|
+ |Uzbek (Latin)|`uz`|
+ |Vietnamese|`vi`|
+ |Volapük|`vo`|
+ |Vunjo|`vun`|
+ |Walser|`wae`|
+ |Welsh|`cy`|
+ |Western Frisian|`fy`|
+ |Wolof|`wo`|
+ |Xhosa|`xh`|
+ |Yakut|`sah`|
+ |Yucatec Maya|`yua`|
+ |Zapotec|`zap`|
+ |Zarma|`dje`|
+ |Zhuang|`za`|
+ |Zulu|`zu`|
:::column-end::: :::row-end:::
ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-invoice.md
Previously updated : 07/18/2023 Last updated : 08/10/2023 monikerRange: '<=doc-intel-3.1.0'
See how data, including customer information, vendor details, and line items, is
::: moniker-end - ## Supported languages and locales >[!NOTE] > Document Intelligence auto-detects language and locale data. + | Supported languages | Details | |:-|:| | &bullet; English (`en`) | United States (`us`), Australia (`au`), Canada (`ca`), United Kingdom (-uk), India (-in)|
See how data, including customer information, vendor details, and line items, is
| Supported Currency Codes | Details | |:-|:|
-| &bullet; ARS | United States (`us`) |
-| &bullet; AUD | Australia (`au`) |
-| &bullet; BRL | United States (`us`) |
+| &bullet; ARS | Argentine Peso (`ar`) |
+| &bullet; AUD | Australian Dollar (`au`) |
+| &bullet; BRL | Brazilian Real (`br`) |
+| &bullet; CAD | Canadian Dollar (`ca`) |
+| &bullet; CLP | Chilean Peso (`cl`) |
+| &bullet; CNY | Chinese Yuan (`cn`) |
+| &bullet; COP | Colombian Peso (`co`) |
+| &bullet; CRC | Costa Rican Colón (`us`) |
+| &bullet; CZK | Czech Koruna (`cz`) |
+| &bullet; DKK | Danish Krone (`dk`) |
+| &bullet; EUR | Euro (`eu`) |
+| &bullet; GBP | British Pound Sterling (`gb`) |
+| &bullet; GGP | Guernsey Pound (`gg`) |
+| &bullet; HUF | Hungarian Forint (`hu`) |
+| &bullet; IDR | Indonesian Rupiah (`id`) |
+| &bullet; INR | Indian Rupee (`in`) |
+| &bullet; ISK | Icelandic Króna (`us`) |
+| &bullet; JPY | Japanese Yen (`jp`) |
+| &bullet; KRW | South Korean Won (`kr`) |
+| &bullet; NOK | Norwegian Krone (`no`) |
+| &bullet; PAB | Panamanian Balboa (`pa`) |
+| &bullet; PEN | Peruvian Sol (`pe`) |
+| &bullet; PLN | Polish Zloty (`pl`) |
+| &bullet; RON | Romanian Leu (`ro`) |
+| &bullet; RSD | Serbian Dinar (`rs`) |
+| &bullet; SEK | Swedish Krona (`se`) |
+| &bullet; TWD | New Taiwan Dollar (`tw`) |
+| &bullet; USD | United States Dollar (`us`) |
+++
+| Supported languages | Details |
+|:-|:|
+| &bullet; English (`en`) | United States (`us`), Australia (`au`), Canada (`ca`), United Kingdom (-uk), India (-in)|
+| &bullet; Spanish (`es`) |Spain (`es`)|
+| &bullet; German (`de`) | Germany (`de`)|
+| &bullet; French (`fr`) | France (`fr`) |
+| &bullet; Italian (`it`) | Italy (`it`)|
+| &bullet; Portuguese (`pt`) | Portugal (`pt`), Brazil (`br`)|
+| &bullet; Dutch (`nl`) | Netherlands (`nl`)|
+
+| Supported Currency Codes | Details |
+|:-|:|
+| &bullet; BRL | Brazilian Real (`br`) |
+| &bullet; GBP | British Pound Sterling (`gb`) |
| &bullet; CAD | Canada (`ca`) |
-| &bullet; CLP | United States (`us`) |
-| &bullet; CNY | United States (`us`) |
-| &bullet; COP | United States (`us`) |
-| &bullet; CRC | United States (`us`) |
-| &bullet; CZK | United States (`us`) |
-| &bullet; DKK | United States (`us`) |
-| &bullet; EUR | United States (`us`) |
-| &bullet; GBP | United Kingdom (`uk`) |
-| &bullet; HUF | United States (`us`) |
-| &bullet; IDR | United States (`us`) |
-| &bullet; INR | United States (`us`) |
-| &bullet; ISK | United States (`us`) |
-| &bullet; JPY | Japan (`jp`) |
-| &bullet; KRW | United States (`us`) |
-| &bullet; NOK | United States (`us`) |
-| &bullet; PAB | United States (`us`) |
-| &bullet; PEN | United States (`us`) |
-| &bullet; PLN | United States (`us`) |
-| &bullet; RON | United States (`us`) |
-| &bullet; RSD | United States (`us`) |
-| &bullet; SEK | United States (`us`) |
-| &bullet; TWD | United States (`us`) |
+| &bullet; EUR | Euro (`eu`) |
+| &bullet; GGP | Guernsey Pound (`gg`) |
+| &bullet; INR | Indian Rupee (`in`) |
| &bullet; USD | United States (`us`) |+ ## Field extraction
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/overview.md
You can use Document Intelligence to automate document processing in application
| About | Description |Automation use cases |Development options | |-|--|--|--|
-|[**Custom model**](concept-custom.md) | Extracts information from forms and documents into structured data based on a model created from a set of representative training document sets.|Extract distinct data from forms and documents specific to your business and use cases.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/BuildDocumentModel)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|
+|[**Custom model**](concept-custom.md) | Extracts information from forms and documents into structured data based on a model created from a set of representative training document sets.|Extract distinct data from forms and documents specific to your business and use cases.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/BuildDocumentModel)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|
> [!div class="nextstepaction"] > [Return to custom model types](#custom-models)
You can use Document Intelligence to automate document processing in application
| About | Description |Automation use cases | Development options | |-|--|-|--|
-|[**Custom Template model**](concept-custom-template.md) | The custom template model extracts labeled values and fields from structured and semi-structured documents.</br> | Extract key data from highly structured documents with defined visual templates or common visual layouts, forms.| &#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/BuildDocumentModel)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
+|[**Custom Template model**](concept-custom-template.md) | The custom template model extracts labeled values and fields from structured and semi-structured documents.</br> | Extract key data from highly structured documents with defined visual templates or common visual layouts, forms.| &#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/BuildDocumentModel)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
> [!div class="nextstepaction"] > [Return to custom model types](#custom-models)
You can use Document Intelligence to automate document processing in application
| About | Description |Automation use cases | Development options | |-|--|-|--|
- |[**Custom Neural model**](concept-custom-neural.md)| The custom neural model is used to extract labeled data from structured (surveys, questionnaires), semi-structured (invoices, purchase orders), and unstructured documents (contracts, letters).|Extract text data, checkboxes, and tabular fields from structured and unstructured documents.|[**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/BuildDocumentModel)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
+ |[**Custom Neural model**](concept-custom-neural.md)| The custom neural model is used to extract labeled data from structured (surveys, questionnaires), semi-structured (invoices, purchase orders), and unstructured documents (contracts, letters).|Extract text data, checkboxes, and tabular fields from structured and unstructured documents.|[**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/BuildDocumentModel)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
> [!div class="nextstepaction"] > [Return to custom model types](#custom-models)
ai-services Sdk Overview V3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v3-0.md
+
+ Title: Document Intelligence (formerly Form Recognizer) SDKs v3.0
+
+description: Document Intelligence v3.0 software development kits (SDKs) expose Document Intelligence models, features, and capabilities using the C#, Java, JavaScript, and Python programming languages.
++++++ Last updated : 08/15/2023+
+monikerRange: '>=doc-intel-3.0.0'
+++
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD051 -->
+
+# Document Intelligence SDK v3.0 (GA)
++
+Azure AI Document Intelligence is a cloud service that uses machine learning to analyze text and structured data from documents. The Document Intelligence software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Document Intelligence models and capabilities into your applications. Document Intelligence SDK is available across platforms in C#/.NET, Java, JavaScript, and Python programming languages.
+
+## Supported languages
+
+Document Intelligence SDK supports the following languages and platforms:
+
+| Language → Document Intelligence SDK version | Package| Supported API version| Platform support |
+|:-:|:-|:-| :-|
+| [.NET/C# → 4.0.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer)|[v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
+|[Java → 4.0.6 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.6) |[v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
+|[JavaScript → 4.0.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
+|[Python → 3.2.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)| [v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
+
+## Supported Clients
+
+| Language| SDK version | API version | Supported clients|
+| : | :--|:- | :--|
+|.NET/C#</br> Java</br> JavaScript</br>| 4.0.0 (GA)| v3.0 / 2022-08-31 (default)| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|.NET/C#</br> Java</br> JavaScript</br>| 3.1.x | v2.1 (default)</br>v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+|.NET/C#</br> Java</br> JavaScript</br>| 3.0.x| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+| Python| 3.2.x (GA) | v3.0 / 2022-08-31 (default)| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**|
+| Python | 3.1.x | v2.1 (default)</br>v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** |
+| Python | 3.0.0 | v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** |
+
+## Use Document Intelligence SDK in your applications
+
+The Document Intelligence SDK enables the use and management of the Document Intelligence service in your application. The SDK builds on the underlying Document Intelligence REST API, allowing you to easily use those APIs within your programming language paradigm. Here's how you use the Document Intelligence SDK for your preferred language:
+
+### 1. Install the SDK client library
+
+### [C#/.NET](#tab/csharp)
+
+```dotnetcli
+dotnet add package Azure.AI.FormRecognizer --version 4.0.0
+```
+
+```powershell
+Install-Package Azure.AI.FormRecognizer -Version 4.0.0
+```
+
+### [Java](#tab/java)
+
+```xml
+<dependency>
+<groupId>com.azure</groupId>
+<artifactId>azure-ai-formrecognizer</artifactId>
+<version>4.0.6</version>
+</dependency>
+```
+
+```kotlin
+implementation("com.azure:azure-ai-formrecognizer:4.0.6")
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+npm i @azure/ai-form-recognizer@4.0.0
+```
+
+### [Python](#tab/python)
+
+```python
+pip install azure-ai-formrecognizer==3.2.0
+```
+++
+### 2. Import the SDK client library into your application
+
+### [C#/.NET](#tab/csharp)
+
+```csharp
+using Azure;
+using Azure.AI.FormRecognizer.DocumentAnalysis;
+```
+
+### [Java](#tab/java)
+
+```java
+import com.azure.ai.formrecognizer.*;
+import com.azure.ai.formrecognizer.models.*;
+import com.azure.ai.formrecognizer.DocumentAnalysisClient.*;
+
+import com.azure.core.credential.AzureKeyCredential;
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+const { AzureKeyCredential, DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
+```
+
+### [Python](#tab/python)
+
+```python
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+```
++++
+### 3. Set up authentication
+
+There are two supported methods for authentication:
+
+* Use a [Document Intelligence API key](#use-your-api-key) with AzureKeyCredential from azure.core.credentials.
+
+* Use a [token credential from azure-identity](#use-an-azure-active-directory-azure-ad-token-credential) to authenticate with [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md).
+
+#### Use your API key
+
+Here's where to find your Document Intelligence API key in the Azure portal:
++
+### [C#/.NET](#tab/csharp)
+
+```csharp
+
+//set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal to create your `AzureKeyCredential` and `DocumentAnalysisClient` instance
+string key = "<your-key>";
+string endpoint = "<your-endpoint>";
+AzureKeyCredential credential = new AzureKeyCredential(key);
+DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential);
+```
+
+### [Java](#tab/java)
+
+```java
+
+// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
+ .credential(new AzureKeyCredential("<your-key>"))
+ .endpoint("<your-endpoint>")
+ .buildClient();
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+
+// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+async function main() {
+ const client = new DocumentAnalysisClient("<your-endpoint>", new AzureKeyCredential("<your-key>"));
+```
+
+### [Python](#tab/python)
+
+```python
+
+# create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+ document_analysis_client = DocumentAnalysisClient(endpoint="<your-endpoint>", credential=AzureKeyCredential("<your-key>"))
+```
++++
+#### Use an Azure Active Directory (Azure AD) token credential
+
+> [!NOTE]
+> Regional endpoints do not support AAD authentication. Create a [custom subdomain](../../ai-services/authentication.md?tabs=powershell#create-a-resource-with-a-custom-subdomain) for your resource in order to use this type of authentication.
+
+Authorization is easiest using the `DefaultAzureCredential`. It provides a default token credential, based upon the running environment, capable of handling most Azure authentication scenarios.
+
+### [C#/.NET](#tab/csharp)
+
+Here's how to acquire and use the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet&preserve-view=true) for .NET applications:
+
+1. Install the [Azure Identity library for .NET](/dotnet/api/overview/azure/identity-readme):
+
+ ```console
+ dotnet add package Azure.Identity
+ ```
+
+ ```powershell
+ Install-Package Azure.Identity
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret in the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```csharp
+ string endpoint = "<your-endpoint>";
+ var client = new DocumentAnalysisClient(new Uri(endpoint), new DefaultAzureCredential());
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer#authenticate-the-client)
+
+### [Java](#tab/java)
+
+Here's how to acquire and use the [DefaultAzureCredential](/java/api/com.azure.identity.defaultazurecredential?view=azure-java-stable&preserve-view=true) for Java applications:
+
+1. Install the [Azure Identity library for Java](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true):
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.5.3</version>
+ </dependency>
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance and **`TokenCredential`** variable:
+
+ ```java
+ TokenCredential credential = new DefaultAzureCredentialBuilder().build();
+ DocumentAnalysisClient documentAnalysisClient = new DocumentAnalysisClientBuilder()
+ .endpoint("{your-endpoint}")
+ .credential(credential)
+ .buildClient();
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
+
+### [JavaScript](#tab/javascript)
+
+Here's how to acquire and use the [DefaultAzureCredential](/javascript/api/@azure/identity/defaultazurecredential?view=azure-node-latest&preserve-view=true) for JavaScript applications:
+
+1. Install the [Azure Identity library for JavaScript](/javascript/api/overview/azure/identity-readme?view=azure-node-latest&preserve-view=true):
+
+ ```javascript
+ npm install @azure/identity
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```javascript
+ const { DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
+ const { DefaultAzureCredential } = require("@azure/identity");
+
+ const client = new DocumentAnalysisClient("<your-endpoint>", new DefaultAzureCredential());
+ ```
+
+For more information, *see* [Create and authenticate a client](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/formrecognizer/ai-form-recognizer#create-and-authenticate-a-client).
+
+### [Python](#tab/python)
+
+Here's how to acquire and use the [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python&preserve-view=true) for Python applications.
+
+1. Install the [Azure Identity library for Python](/python/api/overview/azure/identity-readme?view=azure-python&preserve-view=true):
+
+ ```python
+ pip install azure-identity
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```python
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.formrecognizer import DocumentAnalysisClient
+
+ credential = DefaultAzureCredential()
+ document_analysis_client = DocumentAnalysisClient(
+ endpoint="https://<my-custom-subdomain>.cognitiveservices.azure.com/",
+ credential=credential
+ )
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
++++
+### 4. Build your application
+
+Create a client object to interact with the Document Intelligence SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, try a [quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) in a language of your choice.
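As a minimal, illustrative sketch (not part of the quickstarts), the following Python snippet pairs the client setup shown earlier with a prebuilt model call; the endpoint, key, and file name are placeholders:

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

# Placeholder credentials; use the endpoint and key from your Azure portal resource.
client = DocumentAnalysisClient("<your-endpoint>", AzureKeyCredential("<your-key>"))

with open("sample-invoice.pdf", "rb") as f:
    # Analyze the file with the prebuilt invoice model; begin_* methods return a poller.
    poller = client.begin_analyze_document("prebuilt-invoice", document=f)

result = poller.result()
for analyzed_doc in result.documents:
    for name, field in analyzed_doc.fields.items():
        # Each extracted field carries a typed value and a confidence score.
        print(name, field.value, field.confidence)
```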
+
+## Help options
+
+The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-form-recognizer) forums are available for the developer community to ask and answer questions about Azure AI Document Intelligence and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-form-recognizer`**.
+
+## Next steps
+
+>[!div class="nextstepaction"]
+> [**Explore Document Intelligence REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)
+
+> [!div class="nextstepaction"]
+> [**Try a Document Intelligence quickstart**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
ai-services Sdk Overview V3 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v3-1.md
+
+ Title: Document Intelligence (formerly Form Recognizer) v3.1 SDKs
+
+description: The Document Intelligence v3.1 software development kits (SDKs) expose Document Intelligence models, features, and capabilities that are in active development for the C#, Java, JavaScript, and Python programming languages.
++++++ Last updated : 08/11/2023+
+monikerRange: '>=doc-intel-3.0.0'
+++
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD051 -->
+
+# Document Intelligence SDK v3.1 (GA)
+
+**The SDKs referenced in this article are supported by:** ![Document Intelligence checkmark](media/yes-icon.png) **Document Intelligence REST API version 2023-07-31 (v3.1 GA)**.
+
+Azure AI Document Intelligence is a cloud service that uses machine learning to analyze text and structured data from documents. The Document Intelligence software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Document Intelligence models and capabilities into your applications. Document Intelligence SDK is available across platforms in C#/.NET, Java, JavaScript, and Python programming languages.
+
+## Supported languages
+
+Document Intelligence SDK supports the following languages and platforms:
+
+| Language → Document Intelligence SDK version &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;| Package| Supported API version &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;| Platform support |
+|:-:|:-|:-| :-:|
+| [**.NET/C# → 4.1.0 → latest GA release </br>(2023-08-10)**](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.1.0/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0)|[&bullet; 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; 2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
+|[**Java → 4.1.0 → latest GA release</br>(2023-08-10)**](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.1.0/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer) |[&bullet; 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; 2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
+|[**JavaScript → 5.0.0 → latest GA release</br> (2023-08-08)**](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/5.0.0/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [&bullet; 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> &bullet; [2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
+|[**Python → 3.3.0 → latest GA release</br> (2023-08-08)**](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.3.0/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.3.0/)| [&bullet; 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> &bullet; [2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
+
+## Supported Clients
+
+The following tables present the correlation between each SDK version and the supported API versions of the Document Intelligence service.
+
+### [C#/.NET](#tab/csharp)
+
+| Language| SDK version | API version (default) &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; | Supported clients|
+| : | :--|:- | :--|
+|**.NET/C#**| 4.1.0 (GA)| v3.1 → 2023-07-31 (default)|**DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**.NET/C#**| 4.0.0 (GA)| v3.0 → 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**.NET/C#**| 3.1.x | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
+|**.NET/C#**| 3.0.x| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+
+### [Java](#tab/java)
+
+| Language| SDK version | API version &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; | Supported clients|
+|:--|:--|:--|:--|
+|**Java**| 4.1.0 (GA)| v3.1 → 2023-07-31 (default)|**DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 4.0.0 (GA)| v3.0 → 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.1.x | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
+|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.0.x| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+
+### [JavaScript](#tab/javascript)
+
+| Language| SDK version | API version (default) &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; | Supported clients|
+|:--|:--|:--|:--|
+|**JavaScript**| 5.0.0 (GA)| v3.1 → 2023-07-31 (default)|**DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 4.0.0 (GA)| v3.0 → 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.1.x | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
+|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.0.x| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+
+### [Python](#tab/python)
+
+| Language| SDK version | API version (default) &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; | Supported clients|
+|:--|:--|:--|:--|
+| **Python**| 3.3.0 (GA)| v3.1 → 2023-07-31 (default) | **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**|
+| **Python**| 3.2.x (GA) | v3.0 / 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**|
+| **Python**| 3.1.x | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
+| **Python** | 3.0.0 | v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** |
+++
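+
+Each SDK targets the newest service API version it supports by default (marked *default* in the tables above), and the client constructors generally accept an API version override if you need to pin a workload to an earlier GA version. The following is a minimal sketch, not part of the original article, assuming the Python `azure-ai-formrecognizer` 3.3.x package and its `api_version` keyword argument; the endpoint and key values are placeholders you supply yourself.
+
+```python
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+# Pin the client to the 2022-08-31 (GA) API version instead of the
+# 2023-07-31 default that ships with SDK 3.3.x.
+client = DocumentAnalysisClient(
+    endpoint="<your-endpoint>",
+    credential=AzureKeyCredential("<your-key>"),
+    api_version="2022-08-31",
+)
+```
+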
+## Use Document Intelligence SDK in your applications
+
+The Document Intelligence SDK enables the use and management of the Document Intelligence service in your application. The SDK builds on the underlying Document Intelligence REST API, allowing you to use those APIs easily from your programming language of choice. Here's how to use the Document Intelligence SDK in your preferred language:
+
+### 1. Install the SDK client library
+
+### [C#/.NET](#tab/csharp)
+
+```dotnetcli
+dotnet add package Azure.AI.FormRecognizer --version 4.1.0
+```
+
+```powershell
+Install-Package Azure.AI.FormRecognizer -Version 4.1.0
+```
+
+### [Java](#tab/java)
+
+```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-ai-formrecognizer</artifactId>
+ <version>4.1.0</version>
+ </dependency>
+```
+
+```kotlin
+implementation("com.azure:azure-ai-formrecognizer:4.1.0")
+```
+
+### [JavaScript](#tab/javascript)
+
+```console
+npm i @azure/ai-form-recognizer@5.0.0
+```
+
+### [Python](#tab/python)
+
+```console
+pip install azure-ai-formrecognizer==3.3.0
+```
+++
+### 2. Import the SDK client library into your application
+
+### [C#/.NET](#tab/csharp)
+
+```csharp
+using Azure;
+using Azure.AI.FormRecognizer.DocumentAnalysis;
+```
+
+### [Java](#tab/java)
+
+```java
+import com.azure.ai.formrecognizer.*;
+import com.azure.ai.formrecognizer.models.*;
+import com.azure.ai.formrecognizer.DocumentAnalysisClient.*;
+
+import com.azure.core.credential.AzureKeyCredential;
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+const { AzureKeyCredential, DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
+```
+
+### [Python](#tab/python)
+
+```python
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+```
+++
+### 3. Set up authentication
+
+There are two supported methods for authentication:
+
+* Use a [Document Intelligence API key](#use-your-api-key) with AzureKeyCredential from azure.core.credentials.
+
+* Use a [token credential from azure-identity](#use-an-azure-active-directory-azure-ad-token-credential) to authenticate with [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md).
+
+#### Use your API key
+
+Here's where to find your Document Intelligence API key in the Azure portal:
++
+### [C#/.NET](#tab/csharp)
+
+```csharp
+
+//set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal to create your `AzureKeyCredential` and `DocumentAnalysisClient` instance
+string key = "<your-key>";
+string endpoint = "<your-endpoint>";
+AzureKeyCredential credential = new AzureKeyCredential(key);
+DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential);
+```
+
+### [Java](#tab/java)
+
+```java
+
+// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
+ .credential(new AzureKeyCredential("<your-key>"))
+ .endpoint("<your-endpoint>")
+ .buildClient();
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+
+// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+async function main() {
+ const client = new DocumentAnalysisClient("<your-endpoint>", new AzureKeyCredential("<your-key>"));
+}
+```
+
+### [Python](#tab/python)
+
+```python
+
+# create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+document_analysis_client = DocumentAnalysisClient(endpoint="<your-endpoint>", credential=AzureKeyCredential("<your-key>"))
+```
+++
+#### Use an Azure Active Directory (Azure AD) token credential
+
+> [!NOTE]
+> Regional endpoints do not support AAD authentication. Create a [custom subdomain](../../ai-services/authentication.md?tabs=powershell#create-a-resource-with-a-custom-subdomain) for your resource in order to use this type of authentication.
+
+Authorization is easiest using the `DefaultAzureCredential`. It provides a default token credential, based upon the running environment, capable of handling most Azure authentication scenarios.
+
+### [C#/.NET](#tab/csharp)
+
+Here's how to acquire and use the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet&preserve-view=true) for .NET applications:
+
+1. Install the [Azure Identity library for .NET](/dotnet/api/overview/azure/identity-readme):
+
+ ```console
+ dotnet add package Azure.Identity
+ ```
+
+ ```powershell
+ Install-Package Azure.Identity
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret in the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```csharp
+ string endpoint = "<your-endpoint>";
+ var client = new DocumentAnalysisClient(new Uri(endpoint), new DefaultAzureCredential());
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer#authenticate-the-client).
+
+### [Java](#tab/java)
+
+Here's how to acquire and use the [DefaultAzureCredential](/java/api/com.azure.identity.defaultazurecredential?view=azure-java-stable&preserve-view=true) for Java applications:
+
+1. Install the [Azure Identity library for Java](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true):
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.5.3</version>
+ </dependency>
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance and **`TokenCredential`** variable:
+
+ ```java
+ TokenCredential credential = new DefaultAzureCredentialBuilder().build();
+ DocumentAnalysisClient documentAnalysisClient = new DocumentAnalysisClientBuilder()
+ .endpoint("{your-endpoint}")
+ .credential(credential)
+ .buildClient();
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client).
+
+### [JavaScript](#tab/javascript)
+
+Here's how to acquire and use the [DefaultAzureCredential](/javascript/api/@azure/identity/defaultazurecredential?view=azure-node-latest&preserve-view=true) for JavaScript applications:
+
+1. Install the [Azure Identity library for JavaScript](/javascript/api/overview/azure/identity-readme?view=azure-node-latest&preserve-view=true):
+
+    ```console
+ npm install @azure/identity
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```javascript
+ const { DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
+ const { DefaultAzureCredential } = require("@azure/identity");
+
+ const client = new DocumentAnalysisClient("<your-endpoint>", new DefaultAzureCredential());
+ ```
+
+For more information, *see* [Create and authenticate a client](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/formrecognizer/ai-form-recognizer#create-and-authenticate-a-client).
+
+### [Python](#tab/python)
+
+Here's how to acquire and use the [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python&preserve-view=true) for Python applications.
+
+1. Install the [Azure Identity library for Python](/python/api/overview/azure/identity-readme?view=azure-python&preserve-view=true):
+
+    ```console
+ pip install azure-identity
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```python
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.formrecognizer import DocumentAnalysisClient
+
+ credential = DefaultAzureCredential()
+ document_analysis_client = DocumentAnalysisClient(
+ endpoint="https://<my-custom-subdomain>.cognitiveservices.azure.com/",
+ credential=credential
+ )
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client).
+++
+### 4. Build your application
+
+Create a client object to interact with the Document Intelligence SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, try a [quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) in a language of your choice.
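+
+As a concrete illustration, here's a minimal sketch (not part of the original article) using the Python client from the earlier steps: it authenticates with a key, analyzes a document URL with the `prebuilt-read` model, and prints the extracted lines. The endpoint, key, and document URL are placeholders you supply yourself.
+
+```python
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+client = DocumentAnalysisClient(
+    endpoint="<your-endpoint>", credential=AzureKeyCredential("<your-key>")
+)
+
+# Analysis is a long-running operation: the begin_* method returns a poller
+# that you wait on for the final result.
+poller = client.begin_analyze_document_from_url("prebuilt-read", "<your-document-url>")
+result = poller.result()
+
+for page in result.pages:
+    for line in page.lines:
+        print(line.content)
+```
+
+If your application prefers awaitable calls, the same flow is available from the async client in the `azure.ai.formrecognizer.aio` namespace.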
+
+## Help options
+
+The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-form-recognizer) forums are available for the developer community to ask and answer questions about Azure AI Document Intelligence and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-form-recognizer`**.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+>Explore [**Document Intelligence REST API 2023-07-31**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) operations.
ai-services Dall E Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/dall-e-quickstart.md
Title: 'Quickstart - Generate an image using Azure OpenAI Service'
+ Title: 'Quickstart: Generate images with Azure OpenAI Service'
-description: Walkthrough on how to get started with Azure OpenAI and make your first image generation call.
+description: Learn how to get started generating images with Azure OpenAI Service by using the Python SDK, the REST APIs, or Azure OpenAI Studio.
Previously updated : 04/04/2023 Last updated : 08/08/2023 zone_pivot_groups: openai-quickstart-dall-e
-# Quickstart: Get started generating images using Azure OpenAI Service
+# Quickstart: Generate images with Azure OpenAI Service
::: zone pivot="programming-language-studio"
ai-services Integrate Synapseml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/integrate-synapseml.md
We recommend [creating a Synapse workspace](../../../synapse-analytics/get-start
The next step is to add this code into your Spark cluster. You can either create a notebook in your Spark platform and copy the code into this notebook to run the demo, or download the notebook and import it into Synapse Analytics.
-1. [Download this demo as a notebook](https://github.com/microsoft/SynapseML/blob/master/notebooks/features/cognitive_services/CognitiveServices%20-%20OpenAI.ipynb) (select Raw, then save the file)
1. Import the notebook [into the Synapse Workspace](../../../synapse-analytics/spark/apache-spark-development-using-notebooks.md#create-a-notebook) or, if using Databricks, [into the Databricks Workspace](/azure/databricks/notebooks/notebooks-manage#create-a-notebook) 1. Install SynapseML on your cluster. See the installation instructions for Synapse at the bottom of [the SynapseML website](https://microsoft.github.io/SynapseML/). This requires pasting another cell at the top of the notebook you imported 1. Connect your notebook to a cluster and follow along, editing and running the cells below.
ai-services Enable Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/enable-vnet-service-endpoint.md
The following table describes Custom Translator project accessibility per Transl
:::image type="content" source="../media/how-to/allow-network-access.png" alt-text="Screenshot of allowed network access section in the Azure portal.":::
-> [!IMPORTANT]
- > If you configure **Selected Networks and Private Endpoints** via the **Networking** → **Firewalls and virtual networks** tab, you can't use the Custom Translator portal and your Translator resource. However, you can still use the Translator resource outside of the Custom Translator portal.
+ > [!IMPORTANT]
+ > If you configure **Selected Networks and Private Endpoints** via the **Networking** → **Firewalls and virtual networks** tab, you can't use the Custom Translator portal to create workspaces to train and publish models. However, you can still use the Translator resource with [Custom Translator non-interactive REST API](https://microsofttranslator.github.io/CustomTranslatorApiSamples/) to build and publish custom models.
| Translator resource network security setting | Custom Translator portal accessibility | |--|--| | All networks | &bullet; No restrictions |
-| Selected Networks and Private Endpoints | &bullet; Not accessible from allowed VNET IP addresses. </br>&#9679; Use [Custom Translator non-interactive REST API](https://microsofttranslator.github.io/CustomTranslatorApiSamples/) to build and publish custom models. |
-| Disabled | &#9679; Not accessible |
+| Selected Networks and Private Endpoints | &bullet; Not accessible. Use [Custom Translator non-interactive REST API](https://microsofttranslator.github.io/CustomTranslatorApiSamples/) to build and publish custom models. |
+| Disabled | &bullet; Not accessible |
To use Custom Translator without relaxing network access restrictions on your production Translator resource, consider this workaround:
To use Custom Translator without relaxing network access restrictions on your pr
## Billing region codes
-The following table lists the billing region code for each supported billing region:
+Use a billing region code, listed in the following table, with the 'Create a workspace' API for each supported billing region:
+
+##### Create a workspace POST request
+
+ ```bash
+ curl -X POST "https://<resource-name>.cognitiveservices.azure.com/translator/customtranslator/api/texttranslator/v1.0/workspaces" --header "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key:<resource-key>" --data "{'Name': '<workspace-name>', 'Subscription': {'SubscriptionKey': '<resource-key>', 'BillingRegionCode': '<billing-region-code>' }}"
+ ```
+
+##### Supported billing regions and codes
|Billing Region Name|Billing Region Code| |:-|:-|
aks Aks Planned Maintenance Weekly Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-planned-maintenance-weekly-releases.md
- Title: Use Planned Maintenance for your Azure Kubernetes Service (AKS) cluster weekly releases (preview)-
-description: Learn how to use Planned Maintenance in Azure Kubernetes Service (AKS) for cluster weekly releases.
- Previously updated : 06/27/2023----
-# Use Planned Maintenance pre-created configurations to schedule Azure Kubernetes Service (AKS) weekly releases (preview)
-
-Planned Maintenance allows you to schedule weekly maintenance windows that ensure the weekly [releases] are controlled. You can select from the set of pre-created configurations and use the Azure CLI to configure your maintenance windows.
-
-You can also be schedule with more fine-grained control using Planned Maintenance's `default` configuration type. For more information, see [Planned Maintenance to schedule and control upgrades][planned-maintenance].
-
-## Before you begin
-
-This article assumes you have an existing AKS cluster. If you need an AKS cluster, you can create one using [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or [Azure portal][aks-quickstart-portal].
--
-### Limitations
-
-When you use Planned Maintenance, the following restrictions apply:
--- AKS reserves the right to break these windows for unplanned/reactive maintenance operations that are urgent or critical.-- Currently, performing maintenance operations are considered *best-effort only* and aren't guaranteed to occur within a specified window.-- Updates can't be blocked for more than seven days.-
-## Available pre-created public maintenance configurations
-
-There are two general kinds of pre-created public maintenance configurations:
--- **For weekdays**: (Monday, Tuesday, Wednesday, Thursday), from 10 pm to 6 am the next morning.-- **For weekends**: (Friday, Saturday, Sunday), from 10 pm to 6 am the next morning.-
-The following pre-created public maintenance configurations are available on the weekday and weekend schedules. For weekend schedules, replace `weekday` with `weekend`.
-
-|Configuration name| Time zone|
-|--|--|
-|aks-mrp-cfg-weekday_utc12|UTC+12|
-|...|...|
-|aks-mrp-cfg-weekday_utc1|UTC+1|
-|aks-mrp-cfg-weekday_utc|UTC+0|
-|aks-mrp-cfg-weekday_utc-1|UTC-1|
-|...|...|
-|aks-mrp-cfg-weekday_utc-12|UTC-12|
-
-## Assign a public maintenance configuration to an AKS Cluster
-
-1. Find the public maintenance configuration ID using the [`az maintenance public-configuration show`][az-maintenance-public-configuration-show] command.
-
- ```azurecli-interactive
- az maintenance public-configuration show --resource-name "aks-mrp-cfg-weekday_utc8"
- ```
-
- > [!NOTE]
- > You may be prompted to install the `maintenance` extension.
-
- Your output should look like the following example output. Make sure you take note of the `id` field.
-
- ```json
- {
- "duration": "08:00",
- "expirationDateTime": null,
- "extensionProperties": {
- "maintenanceSubScope": "AKS"
- },
- "id": "/subscriptions/0159df5c-b605-45a9-9876-36e17d5286e0/providers/Microsoft.Maintenance/publicMaintenanceConfigurations/aks-mrp-cfg-weekday_utc8",
- "installPatches": null,
- "location": "westus2",
- "maintenanceScope": "Resource",
- "name": "aks-mrp-cfg-weekday_utc8",
- "namespace": "Microsoft.Maintenance",
- "recurEvery": "Week Monday,Tuesday,Wednesday,Thursday",
- "startDateTime": "2022-08-01 22:00",
- "systemData": null,
- "tags": {},
- "timeZone": "China Standard Time",
- "type": "Microsoft.Maintenance/publicMaintenanceConfigurations",
- "visibility": "Public"
- }
- ```
-
-2. Assign the public maintenance configuration to your AKS cluster using the [`az maintenance assignment create`][az-maintenance-assignment-create] command and specify the ID from the previous step for the `--maintenance-configuration-id` parameter.
-
- ```azurecli-interactive
- az maintenance assignment create --maintenance-configuration-id "/subscriptions/0159df5c-b605-45a9-9876-36e17d5286e0/providers/Microsoft.Maintenance/publicMaintenanceConfigurations/aks-mrp-cfg-weekday_utc8" --name assignmentName --provider-name "Microsoft.ContainerService" --resource-group myResourceGroup --resource-name myAKSCluster --resource-type "managedClusters"
- ```
-
-## List all maintenance windows in an existing cluster
--- List all maintenance windows in an existing cluster using the [`az maintenance assignment list`][az-maintenance-assignment-list] command.-
- ```azurecli-interactive
- az maintenance assignment list --provider-name "Microsoft.ContainerService" --resource-group myResourceGroup --resource-name myAKSCluster --resource-type "managedClusters"
- ```
-
-## Remove a public maintenance configuration from an AKS cluster
--- Remove a public maintenance configuration from a cluster using the [`az maintenance assignment delete`][az-maintenance-assignment-delete] command.-
- ```azurecli-interactive
- az maintenance assignment delete --name assignmentName --provider-name "Microsoft.ContainerService" --resource-group myResourceGroup --resource-name myAKSCluster --resource-type "managedClusters"
- ```
-
-<!-- LINKS - Internal -->
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[releases]:release-tracker.md
-[planned-maintenance]: ./planned-maintenance.md
-[az-maintenance-public-configuration-show]: /cli/azure/maintenance/public-configuration#az-maintenance-public-configuration-show
-[az-maintenance-assignment-create]: /cli/azure/maintenance/assignment#az-maintenance-assignment-create
-[az-maintenance-assignment-list]: /cli/azure/maintenance/assignment#az-maintenance-assignment-list
-[az-maintenance-assignment-delete]: /cli/azure/maintenance/assignment#az-maintenance-assignment-delete
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Previously updated : 08/07/2023 Last updated : 08/11/2023 # Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
You can configure ingress connectivity to the cluster using an ingress controlle
Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address space logically different from the VNet, but it has scaling and other limitations. The below table provides a detailed comparison between Kubenet and Azure CNI Overlay. If you don't want to assign VNet IP addresses to pods due to IP shortage, we recommend using Azure CNI Overlay.
-| Area | Azure CNI Overlay | Kubenet |
-|||-|
-| Cluster scale | 1000 nodes and 250 pods/node | 400 nodes and 250 pods/node |
+| Area | Azure CNI Overlay | Kubenet |
+||--|-|
+| Cluster scale | 1000 nodes and 250 pods/node | 400 nodes and 250 pods/node |
| Network configuration | Simple - no extra configurations required for pod networking | Complex - requires route tables and UDRs on cluster subnet for pod networking |
-| Pod connectivity performance | Performance on par with VMs in a VNet | Extra hop adds minor latency |
-| Kubernetes Network Policies | Azure Network Policies, Calico, Cilium | Calico |
-| OS platforms supported | Linux and Windows Server 2022(Preview) | Linux only |
+| Pod connectivity performance | Performance on par with VMs in a VNet | Extra hop adds minor latency |
+| Kubernetes Network Policies | Azure Network Policies, Calico, Cilium | Calico |
+| OS platforms supported | Linux and Windows Server 2022, 2019 | Linux only |
## IP address planning - **Cluster Nodes**: When setting up your AKS cluster, make sure your VNet subnet has enough room to grow for future scaling. Keep in mind that clusters can't scale across subnets, but you can always add new node pools in another subnet within the same VNet for extra space. A `/24`subnet can fit up to 251 nodes since the first three IP addresses are reserved for management tasks. - **Pods**: The Overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion. - When planning IP address space for pods, consider the following factors:
- - Pod CIDR space must not overlap with the cluster subnet range.
- - Pod CIDR space must not overlap with IP ranges used in on-premises networks and peered networks.
- The same pod CIDR space can be used on multiple independent AKS clusters in the same VNet.
+ - Pod CIDR space must not overlap with the cluster subnet range.
+ - Pod CIDR space must not overlap with directly connected networks (like VNet peering, ExpressRoute, or VPN). If external traffic has source IPs in the podCIDR range, it needs translation to a non-overlapping IP via SNAT to communicate with the cluster.
- **Kubernetes service address range**: The size of the service address CIDR depends on the number of cluster services you plan to create. It must be smaller than `/12`. This range shouldn't overlap with the pod CIDR range, cluster subnet range, and IP range used in peered VNets and on-premises networks. - **Kubernetes DNS service IP address**: This IP address is within the Kubernetes service address range that's used by cluster service discovery. Don't use the first IP address in your address range, as this address is used for the `kubernetes.default.svc.cluster.local` address.
aks Azure Cni Powered By Cilium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-powered-by-cilium.md
Azure CNI powered by Cilium currently has the following limitations:
* Hubble is disabled.
-* Not compatible with Istio or other sidecar-based service meshes ([Istio issue #27619](https://github.com/istio/istio/issues/27619)).
+* Network policies cannot use `ipBlock` to allow access to node or pod IPs ([Cilium issue #9209](https://github.com/cilium/cilium/issues/9209) and [#12277](https://github.com/cilium/cilium/issues/12277)).
* Kubernetes services with `internalTrafficPolicy=Local` aren't supported ([Cilium issue #17796](https://github.com/cilium/cilium/issues/17796)).
aks Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files.md
This section describes how to set up Azure NetApp Files for AKS workloads. It's
--resource-group $RESOURCE_GROUP \ --vnet-name $VNET_NAME \ --name $SUBNET_NAME \
- --delegations "Microsoft.NetApp/volumes" \
+ --delegations "Microsoft.Netapp/volumes" \
--address-prefixes $ADDRESS_PREFIX ```
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
A container runtime is software that executes containers and manages container i
With a `containerd`-based node and node pools, instead of talking to the `dockershim`, the kubelet talks directly to `containerd` using the CRI (container runtime interface) plugin, removing extra hops in the data flow when compared to the Docker CRI implementation. As such, you see better pod startup latency and less resource (CPU and memory) usage.
-By using `containerd` for AKS nodes, pod startup latency improves and node resource consumption by the container runtime decreases. These improvements through this new architecture enable kubelet communicating directly to `containerd` through the CRI plugin. While in a Moby/docker architecture, kubelet communicates to the `dockershim` and docker engine before reaching `containerd`, therefore having extra hops in the data flow.
+By using `containerd` for AKS nodes, pod startup latency improves and node resource consumption by the container runtime decreases. These improvements through this new architecture enable kubelet communicating directly to `containerd` through the CRI plugin. While in a Moby/docker architecture, kubelet communicates to the `dockershim` and docker engine before reaching `containerd`, therefore having extra hops in the data flow. For more details on the origin of the `dockershim` and its deprecation, see the [Dockershim removal FAQ][kubernetes-dockershim-faq].
![Docker CRI 2](media/cluster-configuration/containerd-cri.png)
az provider register --namespace Microsoft.ContainerService
To create a cluster using node resource group lockdown, set the `--nrg-lockdown-restriction-level` to **ReadOnly**. This configuration allows you to view the resources, but not modify them. ```azurecli-interactive
-az aks create -n aksTest -g aksTest ΓÇô-nrg-lockdown-restriction-level ReadOnly
+az aks create -n aksTest -g aksTest --nrg-lockdown-restriction-level ReadOnly
``` ### Update an existing cluster with node resource group lockdown ```azurecli-interactive
-az aks update -n aksTest -g aksTest ΓÇô-nrg-lockdown-restriction-level ReadOnly
+az aks update -n aksTest -g aksTest --nrg-lockdown-restriction-level ReadOnly
``` ### Remove node resource group lockdown from a cluster ```azurecli-interactive
-az aks update -n aksTest -g aksTest ΓÇô-nrg-lockdown-restriction-level Unrestricted
+az aks update -n aksTest -g aksTest --nrg-lockdown-restriction-level Unrestricted
```
az aks update -n aksTest -g aksTest ΓÇô-nrg-lockdown-restriction-level Unrestric
[azurerm-azurelinux]: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster_node_pool#os_sku [general-usage]: https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/#general-usage [client-config-options]: https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md#client-configuration-options
+[kubernetes-dockershim-faq]: https://kubernetes.io/blog/2022/02/17/dockershim-faq/#why-was-the-dockershim-removed-from-kubernetes
<!-- LINKS - internal --> [azure-cli-install]: /cli/azure/install-azure-cli
az aks update -n aksTest -g aksTest ΓÇô-nrg-lockdown-restriction-level Unrestric
[az-feature-register]: /cli/azure/feature#az_feature_register [az-feature-list]: /cli/azure/feature#az_feature_list [az-provider-register]: /cli/azure/provider#az_provider_register
-[aks-add-np-containerd]: /create-node-pools.md#add-a-windows-server-node-pool-with-containerd
+[aks-add-np-containerd]: create-node-pools.md#add-a-windows-server-node-pool-with-containerd
[az-aks-create]: /cli/azure/aks#az-aks-create [az-aks-update]: /cli/azure/aks#az-aks-update [baseline-reference-architecture-aks]: /azure/architecture/reference-architectures/containers/aks/baseline-aks
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
The AKS Linux Extension is an Azure VM extension that installs and configures mo
- [Node-exporter](https://github.com/prometheus/node_exporter): Collects hardware telemetry from the virtual machine and makes it available using a metrics endpoint. Then, a monitoring tool, such as Prometheus, is able to scrap these metrics. - [Node-problem-detector](https://github.com/kubernetes/node-problem-detector): Aims to make various node problems visible to upstream layers in the cluster management stack. It's a systemd unit that runs on each node, detects node problems, and reports them to the clusterΓÇÖs API server using Events and NodeConditions.-- [Local-gadget](https://inspektor-gadget.io/docs/v0.16.0): Uses in-kernel eBPF helper programs to monitor events related to syscalls from userspace programs in a pod.
+- [Local-gadget](https://inspektor-gadget.io/docs/v0.18.1): Uses in-kernel eBPF helper programs to monitor events related to syscalls from userspace programs in a pod.
These tools help provide observability around many node health related problems, such as:
aks Node Image Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-image-upgrade.md
This article shows you how to upgrade AKS cluster node images and how to update
> [!NOTE] > The AKS cluster must use virtual machine scale sets for the nodes.
+>
+> It's not possible to downgrade a node image version (for example *AKSUbuntu-2204 to AKSUbuntu-1804*, or *AKSUbuntu-2204-202308.01.0 to AKSUbuntu-2204-202307.27.0*).
## Check for available node image upgrades
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
Your AKS cluster has regular maintenance performed on it automatically. By defau
There are currently three available configuration types: `default`, `aksManagedAutoUpgradeSchedule`, `aksManagedNodeOSUpgradeSchedule`: -- `default` corresponds to a basic configuration that will update your control plane and your kube-system pods on a Virtual Machine Scale Sets instance. It's a legacy configuration that is mostly suitable for basic scheduling of [weekly releases][release-tracker].
+- `default` corresponds to a basic configuration that is mostly suitable for basic scheduling of [weekly releases][release-tracker].
- `aksManagedAutoUpgradeSchedule` controls when cluster upgrades scheduled by your designated auto-upgrade channel are performed. More finely controlled cadence and recurrence settings are possible than in a `default` configuration. For more information on cluster auto-upgrade, see [Automatically upgrade an Azure Kubernetes Service (AKS) cluster][aks-upgrade].
aks Use Cvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-cvm.md
Title: Use Confidential Virtual Machines (CVM) in Azure Kubernetes Service (AKS)
description: Learn how to create Confidential Virtual Machines (CVM) node pools with Azure Kubernetes Service (AKS) Previously updated : 05/08/2023 Last updated : 08/14/2023 # Use Confidential Virtual Machines (CVM) in Azure Kubernetes Service (AKS) cluster
Adding a node pool with CVM to your AKS cluster is currently in preview.
Before you begin, make sure you have the following: -- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).-- [Azure CLI installed](/cli/azure/install-azure-cli).-- An existing AKS cluster in the *westus*, *eastus*, *westeurope*, or *northeurope* region.
+- An existing AKS cluster.
- The [DCasv5 and DCadsv5-series][cvm-subs-dc] or [ECasv5 and ECadsv5-series][cvm-subs-ec] SKUs available for your subscription. ## Limitations
aks Use Group Managed Service Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-group-managed-service-accounts.md
az keyvault secret set --vault-name MyAKSGMSAVault --name "GMSADomainUserCred" -
Your domain controller needs to be configured through DNS so it's reachable by the AKS cluster. You can configure your network and DNS outside of your AKS cluster to allow your cluster to access the domain controller. Alternatively, you can configure a custom VNET with a custom DNS using Azure CNI with your AKS cluster to provide access to your domain controller. For more information, see [Configure Azure CNI networking in Azure Kubernetes Service (AKS)][aks-cni].
+## Optional: Configure more than one DNS server
+
+If you want to configure more than one DNS server for Windows GMSA in your AKS cluster, don't specify `--gmsa-dns-server` or `--gmsa-root-domain-name`. Instead, you can add multiple DNS servers in the virtual network by selecting **Custom DNS** and adding the DNS servers.
+ ## Optional: Use your own kubelet identity for your cluster To provide the AKS cluster access to your key vault, the cluster kubelet identity needs access to your key vault. By default, when you create a cluster with managed identity enabled, a kubelet identity is automatically created. You can grant access to your key vault for this identity after cluster creation, which is done in a later step.
aks Windows Aks Partner Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-aks-partner-solutions.md
Our 3rd party partners featured below have published introduction guides to star
| DevOps | [GitLab](#gitlab) <br> [CircleCI](#circleci) | | Networking | [NGINX](#f5-nginx) <br> [Calico](#calico) | | Observability | [Datadog](#datadog) <br> [New Relic](#new-relic) |
-| Security | [Prisma](#prisma) |
+| Security | [Prisma Cloud](#prisma-cloud) |
| Storage | [NetApp](#netapp) | | Config Management | [Chef](#chef) | ## DevOps -
+
DevOps streamlines the delivery process, improves collaboration across teams, and enhances software quality, ensuring swift, reliable, and continuous deployment of your Windows-based applications. ### GitLab
+![Logo of GitLab.](./media/windows-aks-partner-solutions/gitlab.png)
+ The GitLab DevSecOps Platform supports the Microsoft development ecosystem with performance, accessibility testing, SAST, DAST and Fuzzing security scanning, dependency scanning, SBOM, license management and more. As an extensible platform, GitLab also allows you to plug in your own tooling for any stage. GitLab's integration with Azure Kubernetes Services (AKS) enables full DevSecOps workflows for Windows and Linux Container workloads using either Push CD or GitOps Pull CD with flux manifests. Using Cloud Native Buildpaks, GitLab Auto DevOps can build, test and autodeploy OSS .NET projects.
To learn more, please our see our [joint blog](https://techcommunity.microsoft.c
### CircleCI
+![Logo of Circle CI.](./media/windows-aks-partner-solutions/circleci.png)
+ CircleCIΓÇÖs integration with Azure Kubernetes Services (AKS) allows you to automate, build, validate, and ship containerized Windows applications, ensuring faster and more reliable software deployment. You can easily integrate your pipeline with AKS using CircleCI orbs, which are prepacked snippets of YAML configuration. Follow this [tutorial](https://techcommunity.microsoft.com/t5/containers/continuous-deployment-of-windows-containers-with-circleci-and/ba-p/3841220) to learn how to set up a CI/CD pipeline to build a Dockerized ASP.NET application and deploy it to an AKS cluster.
Ensure efficient traffic management, enhanced security, and optimal network perf
### F5 NGINX
+![Logo of F5 NGINX.](./media/windows-aks-partner-solutions/f5.png)
+ NGINX Ingress Controller deployed in AKS, on-premises, and in the cloud implements unified Kubernetes-native API gateways, load balancers, and Ingress controllers to reduce complexity, increase uptime, and provide in-depth insights into app health and performance for containerized Windows workloads. Running at the edge of a Kubernetes cluster, NGINX Ingress Controller ensures holistic app security with user and service identities, authorization, access control, encrypted communications, and additional NGINX App Protect modules for Layer 7 WAF and DoS app protection.
Learn how to manage connectivity to your Windows applications running on Windows
### Calico
+![Logo of Tigera Calico.](./media/windows-aks-partner-solutions/tigera.png)
+ Tigera provides an active security platform with full-stack observability for containerized workloads and Microsoft AKS as a fully managed SaaS (Calico Cloud) or a self-managed service (Calico Enterprise). The platform prevents, detects, troubleshoots, and automatically mitigates exposure risks of security breaches for workloads in Microsoft AKS. Its open-source offering, Calico Open Source, is the most widely adopted container networking and security solution. It specifies security and observability as code to ensure consistent enforcement of security policies, which enables DevOps, platform, and security teams to protect workloads, detect threats, achieve continuous compliance, and troubleshoot service issues in real-time.
Observability provides deep insights into your systems, enabling rapid issue det
### Datadog
+![Logo of Datadog.](./media/windows-aks-partner-solutions/datadog.png)
+ Datadog is the essential monitoring and security platform for cloud applications. We bring together end-to-end traces, metrics, and logs to make your applications, infrastructure, and third-party services entirely observable. Partner with Datadog for Windows on AKS environments to streamline monitoring, proactively resolve issues, and optimize application performance and availability. Get started by following the recommendations in our [joint blog](https://techcommunity.microsoft.com/t5/containers/gain-full-observability-into-windows-containers-on-azure/ba-p/3853603). ### New Relic
+![Logo of New Relic.](./media/windows-aks-partner-solutions/newrelic.png)
+ New Relic's Azure Kubernetes integration is a powerful solution that seamlessly connects New Relic's monitoring and observability capabilities with Azure Kubernetes Service (AKS). By deploying the New Relic Kubernetes integration, users gain deep insights into their AKS clusters' performance, health, and resource utilization. This integration allows users to efficiently manage and troubleshoot containerized applications, optimize resource allocation, and proactively identify and resolve issues in their AKS environments. With New Relic's comprehensive monitoring and analysis tools, businesses can ensure the smooth operation and optimal performance of their Kubernetes workloads on Azure. Check this [blog](https://techcommunity.microsoft.com/t5/containers/persistent-storage-for-windows-containers-on-azure-kubernetes/ba-p/3836781) for detailed information.
Check this [blog](https://techcommunity.microsoft.com/t5/containers/persistent-s
Ensure the integrity and confidentiality of applications, thereby fostering trust and compliance across your infrastructure.
-### Prisma
+### Prisma Cloud
+
+![Logo of Palo Alto Network's Prisma Cloud.](./media/windows-aks-partner-solutions/prismacloud.png)
Prisma Cloud is a comprehensive Cloud-Native Application Protection Platform (CNAPP) tailor-made to help secure Windows containers on Azure Kubernetes Service (AKS). Gain continuous, real-time visibility and control over Windows container environments including vulnerability and compliance management, identities and permissions, and AI-assisted runtime defense. Integrated container scanning across the pipeline and in Azure Container Registry ensure security throughout the entire application lifecycle.
Storage enables standardized and seamless storage interactions, ensuring high ap
### NetApp
+![Logo of NetApp.](./media/windows-aks-partner-solutions/netapp.png)
+ Astra Control provides application data management for stateful workloads on Azure Kubernetes Service (AKS). Discover your apps and define protection policies that automatically back up workloads offsite. Protect, clone, and move applications across Kubernetes environments with ease. Follow the steps provided in [this blog](https://techcommunity.microsoft.com/t5/containers/persistent-storage-for-windows-containers-on-azure-kubernetes/ba-p/3836781) post to dynamically provision SMB volumes for Windows AKS workloads.
Automate and standardize the system settings across your environments to enhance
### Chef
+![Logo of Chef.](./media/windows-aks-partner-solutions/progress.png)
+ Chef provides visibility and threat detection from build to runtime that monitors, audits, and remediates the security of your Azure cloud services and Kubernetes and Windows container assets. Chef provides comprehensive visibility and continuous compliance into your cloud security posture and helps limit the risk of misconfigurations in cloud-native environments by providing best practices based on CIS, STIG, SOC2, PCI-DSS and other benchmarks. This is part of a broader compliance offering that supports on-premises or hybrid cloud environments including applications deployed on the edge. To learn more about ChefΓÇÖs capabilities, check out the comprehensive ΓÇÿhow-toΓÇÖ blog post here: [Securing Your Windows Environments Running on Azure Kubernetes Service with Chef](https://techcommunity.microsoft.com/t5/containers/securing-your-windows-environments-running-on-azure-kubernetes/ba-p/3821830).
aks Windows Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-faq.md
At this time, [client source IP preservation][client-source-ip] is not supported
Yes. For the implications of making a change and the options that are available, see [Maximum number of pods][maximum-number-of-pods].
+## What is the default TCP timeout in Windows OS?
+
+The default TCP timeout in Windows OS is 4 minutes. This value isn't configurable. When an application uses a longer timeout, the TCP connections between different containers in the same node close after four minutes.
+ ## Why am I seeing an error when I try to create a new Windows agent pool? If you created your cluster before February 2020 and have never done any cluster upgrade operations, the cluster still uses an old Windows image. You may have seen an error that resembles:
To fix this error:
1. Move Windows pods from existing Windows agent pools to new Windows agent pools. 1. Delete old Windows agent pools.
+## Why am I seeing an error when I try to deploy Windows pods?
+
+If you specify a value for `--max-pods` that's less than the number of pods you want to create, you may see the `No available addresses` error.
+
+To fix this error, use the `az aks nodepool add` command with a high enough `--max-pods` value:
+
+```azurecli
+az aks nodepool add \
+ --cluster-name $CLUSTER_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --name $NODEPOOL_NAME \
+ --max-pods 3
+```
+For more details, see the [`--max-pods` documentation](https://learn.microsoft.com/cli/azure/aks/nodepool?view=azure-cli-latest#az-aks-nodepool-add:~:text=for%20system%20nodepool.-,%2D%2Dmax%2Dpods%20%2Dm,-The%20maximum%20number).
+ ## Why is there an unexpected user named "sshd" on my VM node? AKS adds a user named "sshd" when installing the OpenSSH service. This user is not malicious. We recommend that customers update their alerts to ignore this unexpected user account.
Yes, you can. However, Azure Monitor is in public preview for gathering logs (st
## Are there any limitations on the number of services on a cluster with Windows nodes?
-A cluster with Windows nodes can have approximately 500 services before it encounters port exhaustion.
+A cluster with Windows nodes can have approximately 500 services (sometimes less) before it encounters port exhaustion. This limitation applies to a Kubernetes Service with External Traffic Policy set to "Cluster".
+
+When the external traffic policy on a Service is configured as "Cluster", the traffic undergoes an additional source NAT on the node, which also reserves a port from the TCP/IP dynamic port pool. This port pool is a limited resource (about 16K ports by default), and many active connections to a Service can lead to dynamic port pool exhaustion, resulting in connection drops.
+
+If the Kubernetes Service is configured with External Traffic Policy set to "Local", port exhaustion problems aren't likely to occur at 500 services.
## Can I use Azure Hybrid Benefit with Windows nodes?
aks Workload Identity Migrate From Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md
For either scenario, you need to have the federated trust set up before you upda
### Migrate from latest version
-If your cluster is already using the latest version of the Azure Identity SDK, perform the following steps to complete the authentication configuration:
+If your application is already using the latest version of the Azure Identity SDK, perform the following steps to complete the authentication configuration:
- Deploy workload identity in parallel with pod-managed identity. You can restart your application deployment to begin using the workload identity, where it injects the OIDC annotations into the application automatically. - After verifying the application is able to authenticate successfully, you can [remove the pod-managed identity](#remove-pod-managed-identity) annotations from your application and then remove the pod-managed identity add-on. ### Migrate from older version
-If your cluster isn't using the latest version of the Azure Identity SDK, you have two options:
+If your application isn't using the latest version of the Azure Identity SDK, you have two options:
- You can use a migration sidecar that we provide within your Linux applications, which proxies the IMDS transactions your application makes over to [OpenID Connect][openid-connect-overview] (OIDC). The migration sidecar isn't intended to be a long-term solution, but a way to get up and running quickly on workload identity. Perform the following steps to:
analysis-services Analysis Services Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-manage.md
To get all the latest features, and the smoothest experience when connecting to
## External open source tools
-**Tabular Editor** - An open-source tool for creating, maintaining, and managing tabular models using an intuitive, lightweight editor. A hierarchical view shows all objects in your tabular model. Objects are organized by display folders with support for multi-select property editing and DAX syntax highlighting. XMLA read-only is required for query operations. Read-write is required for metadata operations. To learn more, see [tabulareditor.github.io](https://tabulareditor.github.io/).
- **ALM Toolkit** - An open-source schema compare tool for Analysis Services tabular models and Power BI datasets, most often used for application lifecycle management (ALM) scenarios. Perform deployment across environments and retain incremental refresh historical data. Diff and merge metadata files, branches and repos. Reuse common definitions between datasets. Read-only is required for query operations. Read-write is required for metadata operations. To learn more, see [alm-toolkit.com](http://alm-toolkit.com/). **DAX Studio** – An open-source tool for DAX authoring, diagnosis, performance tuning, and analysis. Features include object browsing, integrated tracing, query execution breakdowns with detailed statistics, DAX syntax highlighting and formatting. XMLA read-only is required for query operations. To learn more, see [daxstudio.org](https://daxstudio.org/).
When connecting using SSMS, if you run into problems, you may need to clear the
## Next steps If you haven't already deployed a tabular model to your new server, now is a good time. To learn more, see [Deploy to Azure Analysis Services](analysis-services-deploy.md).
-If you've deployed a model to your server, you're ready to connect to it using a client application or tool. To learn more, see [Get data from Azure Analysis Services server](analysis-services-connect.md).
+If you've deployed a model to your server, you're ready to connect to it using a client application or tool. To learn more, see [Get data from Azure Analysis Services server](analysis-services-connect.md).
analysis-services Analysis Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-overview.md
Azure Analysis Services Firewall blocks all client connections other than those
### Authentication
-User authentication is handled by [Azure Active Directory (AAD)](../active-directory/fundamentals/active-directory-whatis.md). When logging in, users use an organization account identity with role-based access to the database. User identities must be members of the default Azure Active Directory for the subscription that the server is in. To learn more, see [Authentication and user permissions](analysis-services-manage-users.md).
+User authentication is handled by [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md). When logging in, users use an organization account identity with role-based access to the database. User identities must be members of the default Azure Active Directory for the subscription that the server is in. To learn more, see [Authentication and user permissions](analysis-services-manage-users.md).
### Data security
Manage your servers and model databases by using [SQL Server Management Studio (
### Open-source tools
-Analysis Services has a vibrant community of developers who create tools. Be sure to check out [Tabular Editor](https://tabulareditor.github.io/), an open-source tool for creating, maintaining, and managing tabular models using an intuitive, lightweight editor. [DAX Studio](https://daxstudio.org/), is a great open-source tool for DAX authoring, diagnosis, performance tuning, and analysis.
+Analysis Services has a vibrant community of developers who create tools. [DAX Studio](https://daxstudio.org/) is a great open-source tool for DAX authoring, diagnosis, performance tuning, and analysis.
### PowerShell
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md
Last updated 12/01/2022 + # API Management policy reference This section provides links to reference articles for all API Management policies.
More information about policies:
- [Find and replace string in body](find-and-replace-policy.md) - Finds a request or response substring and replaces it with a different substring. - [Mask URLs in content](redirect-content-urls-policy.md) - Rewrites (masks) links in the response body so that they point to the equivalent link via the gateway. - [Set backend service](set-backend-service-policy.md) - Changes the backend service for an incoming request.-- [Set body](set-body-policy.md) - Sets the message body for incoming and outgoing requests.
+- [Set body](set-body-policy.md) - Sets the message body for a request or response.
- [Set HTTP header](set-header-policy.md) - Assigns a value to an existing response and/or request header or adds a new response and/or request header. - [Set query string parameter](set-query-parameter-policy.md) - Adds, replaces value of, or deletes request query string parameter. - [Rewrite URL](rewrite-uri-policy.md) - Converts a request URL from its public form to the form expected by the web service.
For more information about working with policies, see:
+ [Tutorial: Transform and protect your API](transform-api.md) + [Set or edit policies](set-edit-policies.md) + [Policy snippets repo](https://github.com/Azure/api-management-policy-snippets) ++
api-management Api Management Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-versions.md
Last updated 10/31/2021 + # Versions in Azure API Management Versions allow you to present groups of related APIs to your developers. You can use versions to handle breaking changes in your API safely. Clients can choose to use your new API version when they're ready, while existing clients continue to use an older version. Versions are differentiated through a version identifier (which is any string value you choose), and a versioning scheme allows clients to identify which version of an API they want to use.
The format of an API request URL when using query string-based versioning is: `h
For example, `https://apis.contoso.com/products?api-version=v1` and `https://apis.contoso.com/products?api-version=v2` could refer to the same `products` API but to versions `v1` and `v2` respectively.
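For instance, a client could request each version explicitly with the sample hostnames above; a minimal sketch:

```bash
# Call the same API twice, selecting the version through the api-version query parameter
curl "https://apis.contoso.com/products?api-version=v1"
curl "https://apis.contoso.com/products?api-version=v2"
```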
+> [!NOTE]
+> Query parameters aren't allowed in the `servers` property of an OpenAPI specification. If you export an OpenAPI specification from an API version, a query string won't appear in the server URL.
+ ## Original versions If you add a version to a non-versioned API, an `Original` version will be automatically created and will respond on the default URL, without a version identifier specified. The `Original` version ensures that any existing callers are not broken by the process of adding a version. If you create a new API with versions enabled at the start, an `Original` version isn't created.
A version set is automatically deleted when the final version is deleted.
You can view and manage version sets directly by using [Azure CLI](/cli/azure/apim/api/versionset), [Azure PowerShell](/powershell/module/az.apimanagement/#api-management), [Resource Manager templates](/azure/templates/microsoft.apimanagement/service/apiversionsets), or the [Azure Resource Manager API](/rest/api/apimanagement/current-ga/api-version-set).
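For example, a minimal Azure CLI sketch (the resource group and service names are placeholders) that lists the version sets in an API Management instance:

```bash
# List all API version sets in an API Management instance
az apim api versionset list \
  --resource-group my-resource-group \
  --service-name my-apim-service \
  --output table
```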
+> [!NOTE]
+> All versions in a version set have the same versioning scheme, based on the versioning scheme used when you first add a version to an API.
### Migrating a non-versioned API to a versioned API When you use the Azure portal to enable versioning on an existing API, the following changes are made to your API Management resources:
The details of an API also show a list of all of the versions of that API. An `O
> [!TIP] > API versions need to be added to a product before they will be visible on the developer portal.++
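As a hedged Azure CLI sketch (the product and API IDs are placeholders), adding an API version to a product so it shows up in the developer portal could look like this:

```bash
# Associate an API version with a product so portal users can discover and subscribe to it
az apim product api add \
  --resource-group my-resource-group \
  --service-name my-apim-service \
  --product-id starter \
  --api-id my-api-v2
```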
api-management Set Body Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-body-policy.md
# Set body
-Use the `set-body` policy to set the message body for incoming and outgoing requests. To access the message body you can use the `context.Request.Body` property or the `context.Response.Body`, depending on whether the policy is in the inbound or outbound section.
+Use the `set-body` policy to set the message body for a request or response. To access the message body, you can use the `context.Request.Body` property or the `context.Response.Body` property, depending on whether the policy is in the inbound or outbound section.
> [!IMPORTANT] > By default when you access the message body using `context.Request.Body` or `context.Response.Body`, the original message body is lost and must be set by returning the body back in the expression. To preserve the body content, set the `preserveContent` parameter to `true` when accessing the message. If `preserveContent` is set to `true` and a different body is returned by the expression, the returned body is used.
OriginalUrl.
</pre> + ## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend
The following Liquid filters are supported in the `set-body` policy. For filter
> [!NOTE] > The policy requires Pascal casing for Liquid filter names (for example, "AtLeast" instead of "at_least"). > + * Abs * Append * AtLeast
The following Liquid filters are supported in the `set-body` policy. For filter
* UrlDecode * UrlEncode - ## Examples ### Literal text
The following example uses the `AsFormUrlEncodedContent()` expression to access
* [API Management transformation policies](api-management-transformation-policies.md) [!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]+
app-service Identity Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/identity-scenarios.md
Previously updated : 07/25/2023 Last updated : 08/10/2023 # Authentication scenarios and recommendations
The following table lists authentication scenarios and the authentication soluti
| Even if you can use a code solution, would you rather *not* use libraries? Don't want the maintenance burden? | ✅ | ❌ | ❌ | | Does your web app need to provide incremental consent? | ❌ | ✅ | ✅ | | Do you need conditional access in your web app? | ❌ | ❌ | ✅ |
-| Your app need to handle the access token expiring without making the user sign in again (use a refresh token)? | ❌ | ✅ | ✅ |
+| Does your app need to handle the access token expiring without making the user sign in again (use a refresh token)? | ✅ | ✅ | ✅ |
| Need custom authorization logic or info about the signed-in user? | ❌ | ✅ | ✅ | | Need to sign in users from external or social identity providers? | ✅ | ✅ | ✅ | | You have an ASP.NET Core app? | ✅ | ❌ | ✅ |
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md
This article uses Health check in the Azure portal to monitor App Service instan
![Health check failure][1]
-Please note that _/api/health_ is just an example added for illustration purposes. We do not create a Health Check path by default. You should make sure that the path you are selecting is a valid path that exists within your application
+Note that _/api/health_ is just an example added for illustration purposes. We don't create a Health Check path by default. Make sure that the path you select is a valid path that exists within your application.
## What App Service does with Health checks - When given a path on your app, Health check pings this path on all instances of your App Service app at 1-minute intervals. - If an instance doesn't respond with a status code between 200-299 (inclusive) after 10 requests, App Service determines it's unhealthy and removes it from the load balancer for this Web App. The required number of failed requests for an instance to be deemed unhealthy is configurable to a minimum of two requests.-- After removal, Health check continues to ping the unhealthy instance. If the instance begins to respond with a healthy status code (200-299) then the instance is returned to the load balancer.-- If an instance remains unhealthy for one hour, it will be replaced with a new instance.
+- After removal, Health check continues to ping the unhealthy instance. If the instance begins to respond with a healthy status code (200-299), then the instance is returned to the load balancer.
+- If an instance remains unhealthy for one hour, it's replaced with a new instance.
- When scaling out, App Service pings the Health check path to ensure new instances are ready. > [!NOTE]
Please note that _/api/health_ is just an example added for illustration purpose
> - Your [App Service plan](./overview-hosting-plans.md) should be scaled to two or more instances to fully utilize Health check. > - The Health check path should check critical components of your application. For example, if your application depends on a database and a messaging system, the Health check endpoint should connect to those components. If the application can't connect to a critical component, then the path should return a 500-level response code to indicate the app is unhealthy. Also, if the path does not return a response within 1 minute, the health check ping is considered unhealthy. > - When selecting the Health check path, make sure you're selecting a path that returns a 200 status code, only when the app is fully warmed up.
+> - In order to use Health check on your Function App, you must use a [premium or dedicated hosting plan](../azure-functions/functions-scale.md#overview-of-plans).
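As a quick sanity check (the hostname and path below are placeholders), you can confirm that a candidate Health check path returns a 200 status code once the app is warmed up:

```bash
# Print only the HTTP status code returned by the candidate health check path
curl -s -o /dev/null -w "%{http_code}\n" https://<app-name>.azurewebsites.net/api/health
```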
> [!CAUTION] > Health check configuration changes restart your app. To minimize impact to production apps, we recommend [configuring staging slots](deploy-staging-slots.md) and swapping to production.
In addition to configuring the Health check options, you can also configure the
| App setting name | Allowed values | Description | |-|-|-|
-|`WEBSITE_HEALTHCHECK_MAXPINGFAILURES` | 2 - 10 | The required number of failed requests for an instance to be deemed unhealthy and removed from the load balancer. For example, when set to `2`, your instances will be removed after `2` failed pings. (Default value is `10`) |
-|`WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT` | 1 - 100 | By default, no more than half of the instances will be excluded from the load balancer at one time to avoid overwhelming the remaining healthy instances. For example, if an App Service Plan is scaled to four instances and three are unhealthy, two will be excluded. The other two instances (one healthy and one unhealthy) will continue to receive requests. In the worst-case scenario where all instances are unhealthy, none will be excluded. <br /> To override this behavior, set app setting to a value between `1` and `100`. A higher value means more unhealthy instances will be removed (default value is `50`). |
+|`WEBSITE_HEALTHCHECK_MAXPINGFAILURES` | 2 - 10 | The required number of failed requests for an instance to be deemed unhealthy and removed from the load balancer. For example, when set to `2`, your instances are removed after `2` failed pings. (Default value is `10`) |
+|`WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT` | 1 - 100 | By default, no more than half of the instances will be excluded from the load balancer at one time to avoid overwhelming the remaining healthy instances. For example, if an App Service Plan is scaled to four instances and three are unhealthy, two are excluded. The other two instances (one healthy and one unhealthy) continue to receive requests. In the worst-case scenario where all instances are unhealthy, none are excluded. <br /> To override this behavior, set app setting to a value between `1` and `100`. A higher value means more unhealthy instances are removed (default value is `50`). |
#### Authentication and security
-Health check integrates with App Service's [authentication and authorization features](overview-authentication-authorization.md). No additional settings are required if these security features are enabled.
+Health check integrates with App Service's [authentication and authorization features](overview-authentication-authorization.md). No other settings are required if these security features are enabled.
If you're using your own authentication system, the Health check path must allow anonymous access. To secure the Health check endpoint, you should first use features such as [IP restrictions](app-service-ip-restrictions.md#set-an-ip-address-based-rule), [client certificates](app-service-ip-restrictions.md#set-an-ip-address-based-rule), or a Virtual Network to restrict application access. Once you have those features in-place, you can authenticate the health check request by inspecting the header, `x-ms-auth-internal-token`, and validating that it matches the SHA256 hash of the environment variable `WEBSITE_AUTH_ENCRYPTION_KEY`. If they match, then the health check request is valid and originating from App Service.
function envVarMatchesHeader(headerValue) {
> The `x-ms-auth-internal-token` header is only available on Windows App Service. ## Instances
-Once Health Check is enabled, you can restart and monitor the status of your application instances through the instances tab. The instances tab will show your instance's name, the status of that instance and give you the option to manually restart the application instance.
+Once Health Check is enabled, you can restart and monitor the status of your application instances through the instances tab. The instances tab shows your instance's name, the status of that instance and gives you the option to manually restart the application instance.
-If the status of your instance is unhealthy, you can restart the instance manually using the restart button in the table. Keep in mind that any other applications hosted on the same App Service Plan as the instance will also be affected by the restart. If there are other applications using the same App Service Plan as the instance, they will be listed on the opening blade from the restart button.
+If the status of your instance is unhealthy, you can restart the instance manually using the restart button in the table. Keep in mind that any other applications hosted on the same App Service Plan as the instance will also be affected by the restart. If there are other applications using the same App Service Plan as the instance, they are listed on the opening blade from the restart button.
If you restart the instance and the restart process fails, you will then be given the option to replace the worker (only 1 instance can be replaced per hour). This will also affect any applications using the same App Service Plan. Windows applications will also have the option to view processes via the Process Explorer. This gives you further insight on the instance's processes including thread count, private memory, and total CPU time. ## Diagnostic information collection
-For Windows applications, you have the option to collect diagnostic information in the Health Check tab. Enabling diagnostic collection will add an auto-heal rule that creates memory dumps for unhealthy instances and saves it to a designated storage account. Enabling this option will change auto-heal configurations. If there are existing auto-heal rules, we recommend setting this up through App Service diagnostics.
+For Windows applications, you have the option to collect diagnostic information in the Health Check tab. Enabling diagnostic collection adds an auto-heal rule that creates memory dumps for unhealthy instances and saves them to a designated storage account. Enabling this option changes auto-heal configurations. If there are existing auto-heal rules, we recommend setting this up through App Service diagnostics.
-Once diagnostic collection is enabled, you can create or choose an existing storage account for your files. You can only select storage accounts in the same region as your application. Keep in mind that saving will restart your application. After saving, if your site instances are found to be unhealthy after continuous pings, you can go to your storage account resource and view the memory dumps.
+Once diagnostic collection is enabled, you can create or choose an existing storage account for your files. You can only select storage accounts in the same region as your application. Keep in mind that saving restarts your application. After saving, if your site instances are found to be unhealthy after continuous pings, you can go to your storage account resource and view the memory dumps.
## Monitoring
-After providing your application's Health check path, you can monitor the health of your site using Azure Monitor. From the **Health check** blade in the Portal, select the **Metrics** in the top toolbar. This will open a new blade where you can see the site's historical health status and option to create a new alert rule. Health check metrics will aggregate the successful pings & display failures only when the instance was deemed unhealthy based on the health check configuration. For more information on monitoring your sites, [see the guide on Azure Monitor](web-sites-monitor.md).
+After providing your application's Health check path, you can monitor the health of your site using Azure Monitor. From the **Health check** blade in the Portal, select **Metrics** in the top toolbar. This opens a new blade where you can see the site's historical health status and an option to create a new alert rule. Health check metrics aggregate the successful pings and display failures only when the instance was deemed unhealthy based on the health check configuration. For more information on monitoring your sites, [see the guide on Azure Monitor](web-sites-monitor.md).
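If you prefer to query the metric outside the portal, a minimal Azure CLI sketch like the following retrieves the health check status metric; the resource ID format and the `HealthCheckStatus` metric name are assumptions to adapt to your app:

```bash
# Query the health check status metric for a web app at one-minute granularity
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<app-name>" \
  --metric "HealthCheckStatus" \
  --interval PT1M \
  --output table
```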
## Limitations - Health check can be enabled for **Free** and **Shared** App Service Plans so you can have metrics on the site's health and setup alerts, but because **Free** and **Shared** sites can't scale out, any unhealthy instances won't be replaced. You should scale up to the **Basic** tier or higher so you can scale out to 2 or more instances and utilize the full benefit of Health check. This is recommended for production-facing applications as it will increase your app's availability and performance. - The App Service plan can have a maximum of one unhealthy instance replaced per hour and, at most, three instances per day.-- There's a non-configurable limit on the total amount of instances replaced by Health Check per scale unit. If this limit is reached, no unhealthy instances will be replaced. This value gets reset every 12 hours.
+- There's a non-configurable limit on the total number of instances replaced by Health Check per scale unit. If this limit is reached, no unhealthy instances are replaced. This value gets reset every 12 hours.
## Frequently Asked Questions
The Health check requests are sent to your site internally, so the request won't
### Are the Health check requests sent over HTTP or HTTPS?
-On Windows App Service, the Health check requests will be sent via HTTPS when [HTTPS Only](configure-ssl-bindings.md#enforce-https) is enabled on the site. Otherwise, they're sent over HTTP. On Linux App Service, the health check requests are only sent over HTTP and can't be sent over HTTP**S** at this time.
+On Windows App Service, the Health check requests are sent via HTTPS when [HTTPS Only](configure-ssl-bindings.md#enforce-https) is enabled on the site. Otherwise, they're sent over HTTP. On Linux App Service, the health check requests are only sent over HTTP and can't be sent over HTTP**S** at this time.
### Is Health check following the application code configured redirects between the default domain and the custom domain?
Unhealthy instances will always be removed from the load balancer rotation regar
#### Example
-Imagine you have two applications (or one app with a slot) with Health check enabled, called App A and App B. They are on the same App Service Plan and that the Plan is scaled out to four instances. If App A becomes unhealthy on two instances, the load balancer will stop sending requests to App A on those two instances. Requests will still be routed to App B on those instances assuming App B is healthy. If App A remains unhealthy for over an hour on those two instances, those instances will only be replaced if App B is **also** unhealthy on those instances. If App B is healthy, the instance won't be replaced.
+Imagine you have two applications (or one app with a slot) with Health check enabled, called App A and App B. They're on the same App Service Plan, and the Plan is scaled out to four instances. If App A becomes unhealthy on two instances, the load balancer stops sending requests to App A on those two instances. Requests are still routed to App B on those instances assuming App B is healthy. If App A remains unhealthy for over an hour on those two instances, those instances are only replaced if App B is **also** unhealthy on those instances. If App B is healthy, the instance isn't replaced.
![Visual diagram explaining the example scenario above.][2]
application-gateway How To Ssl Offloading Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-ssl-offloading-ingress-api.md
Previously updated : 07/24/2023 Last updated : 08/09/2023
status:
Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the command below to get the FQDN. ```bash
-fqdn=$(kubectl get ingress ingress-01 -n test-infra -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'')
+fqdn=$(kubectl get ingress ingress-01 -n test-infra -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
``` Curling this FQDN should return responses from the backend as configured on the HTTPRoute.
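For example, a quick check from the same shell might look like this (the `-k` flag skips certificate validation and is only appropriate when testing with a self-signed certificate):

```bash
# Send a request to the frontend FQDN over HTTPS
curl -k https://$fqdn
```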
application-gateway Migrate V1 V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/migrate-v1-v2.md
This article primarily helps with the configuration migration. Client traffic mi
* An existing Application Gateway V1 Standard. * Make sure you have the latest PowerShell modules, or you can use Azure Cloud Shell in the portal. * If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+* Ensure that there's no existing application gateway with the provided AppGW V2 name and resource group name in the V1 subscription. Otherwise, the migration overwrites the existing resources.
+* If a public IP address is provided, ensure that it's in the Succeeded state. If it isn't provided and AppGwResourceGroupName is provided, ensure that a public IP resource named AppGwV2Name-IP doesn't exist in a resource group named AppGwResourceGroupName in the V1 subscription.
+* Ensure that no other operation is planned on the V1 gateway or any of its associated resources during migration.
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)] [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] > [!IMPORTANT]
->Run the `Set-AzContext -Subscription <V1 application gateway SubscriptionId>` cmdlet every time before running the migration script. This is necessary to set the active Azure context to the correct subscription, because the migration script might clean up the existing resource group if it doesn't exist in current subscription context.
+>Run the `Set-AzContext -Subscription <V1 application gateway SubscriptionId>` cmdlet every time before running the migration script. This is necessary to set the active Azure context to the correct subscription, because the migration script might clean up the existing resource group if it doesn't exist in the current subscription context. This isn't a mandatory step for version 1.0.11 and later of the migration script.
+
+> [!IMPORTANT]
+>A new stable version of the migration script, version 1.0.11, is now available. It contains important bug fixes and updates. Use this version to avoid potential issues.
+ ## Configuration migration An Azure PowerShell script is provided in this document. It performs the following operations to help you with the configuration:
An Azure PowerShell script is provided in this document. It performs the followi
## Downloading the script
-You can download the migration script from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureAppGWMigration).
+You can download the migration script from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureAppGWMigration). A new stable release (version 1.0.11) of the migration script is available, which includes major updates and bug fixes. We recommend using this stable version.
+ ## Using the script > [!NOTE] > Run the `Set-AzContext -Subscription <V1 application gateway SubscriptionId>` cmdlet every time before running the migration script. This is necessary to set the active Azure context to the correct subscription, because the migration script might clean up the existing resource group if it doesn't exist in current subscription context.
+> This isn't a mandatory step for version 1.0.11 and later of the migration script.
There are two options for you depending on your local PowerShell environment setup and preferences:
Run the script with the following command to get the latest version:
This command also installs the required Az modules. #### Install using the script directly- If you have some Azure Az modules installed and can't uninstall them (or don't want to uninstall them), you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw nupkg file. To install the script from this nupkg file, see [Manual Package Download](/powershell/gallery/how-to/working-with-packages/manual-download).
+Version 1.0.11 is the latest version of the migration script and includes major bug fixes. We recommend using this stable version.
+
+#### How to check the version of the downloaded script
+To check the version of the downloaded script:
+* Extract the contents of the NuGet package.
+* Open the .PS1 file in the folder and check the .VERSION at the top to confirm the version of the downloaded script.
+```
+<#PSScriptInfo
+.VERSION 1.0.10
+.GUID be3b84b4-e9c5-46fb-a050-699c68e16119
+.AUTHOR Microsoft Corporation
+.COMPANYNAME Microsoft Corporation
+.COPYRIGHT Microsoft Corporation. All rights reserved.
+```
+* Make sure to use the latest stable version from [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureAppGWMigration)
+ #### How to run the script To run the script:
To run the script:
-publicIpResourceId <public IP name string> -validateMigration -enableAutoScale ```
+ > [!NOTE]
+> During migration, don't attempt any other operation on the V1 gateway or any of its associated resources.
Parameters for the script: * **resourceId: [String]: Required**: This parameter is the Azure Resource ID for your existing Standard V1 or WAF V1 gateway. To find this string value, navigate to the Azure portal, select your application gateway or WAF resource, and click the **Properties** link for the gateway. The Resource ID is located on that page.
To run the script:
``` * **subnetAddressRange: [String]: Required**: This parameter is the IP address space that you've allocated (or want to allocate) for a new subnet that contains your new V2 gateway. The address space must be specified in the CIDR notation. For example: 10.0.0.0/24. You don't need to create this subnet in advance but the CIDR needs to be part of the VNET address space. The script creates it for you if it doesn't exist and if it exists, it uses the existing one (make sure the subnet is either empty, contains only V2 Gateway if any, and has enough available IPs).
- * **appgwName: [String]: Optional**. This is a string you specify to use as the name for the new Standard_V2 or WAF_V2 gateway. If this parameter isn't supplied, the name of your existing V1 gateway is used with the suffix *_V2* appended.
+ * **appgwName: [String]: Optional**. This is a string you specify to use as the name for the new Standard_V2 or WAF_V2 gateway. If this parameter isn't supplied, the name of your existing V1 gateway is used with the suffix *_V2* appended.
* **AppGwResourceGroupName: [String]: Optional**. Name of resource group where you want V2 Application Gateway resources to be created (default value is `<V1-app-gw-rgname>`)
+ > [!NOTE]
+> Ensure that there's no existing application gateway with the provided AppGW V2 name and resource group name in the V1 subscription. Otherwise, the migration overwrites the existing resources.
* **sslCertificates: [PSApplicationGatewaySslCertificate]: Optional**. A comma-separated list of PSApplicationGatewaySslCertificate objects that you create to represent the TLS/SSL certs from your V1 gateway must be uploaded to the new V2 gateway. For each of your TLS/SSL certs configured for your Standard V1 or WAF V1 gateway, you can create a new PSApplicationGatewaySslCertificate object via the `New-AzApplicationGatewaySslCertificate` command shown here. You need the path to your TLS/SSL Cert file and the password. This parameter is only optional if you don't have HTTPS listeners configured for your V1 gateway or WAF. If you have at least one HTTPS listener setup, you must specify this parameter.
To run the script:
To create a list of PSApplicationGatewayTrustedRootCertificate objects, see [New-AzApplicationGatewayTrustedRootCertificate](/powershell/module/Az.Network/New-AzApplicationGatewayTrustedRootCertificate). * **privateIpAddress: [String]: Optional**. A specific private IP address that you want to associate to your new V2 gateway. This must be from the same VNet that you allocate for your new V2 gateway. If this isn't specified, the script allocates a private IP address for your V2 gateway.
- * **publicIpResourceId: [String]: Optional**. The resourceId of existing public IP address (standard SKU) resource in your subscription that you want to allocate to the new V2 gateway. If this isn't specified, the script allocates a new public IP in the same resource group. The name is the V2 gateway's name with *-IP* appended.
+ * **publicIpResourceId: [String]: Optional**. The resourceId of an existing public IP address (standard SKU) resource in your subscription that you want to allocate to the new V2 gateway. If a public IP resource name is provided, ensure that it exists in the Succeeded state.
+ If this isn't specified, the script allocates a new public IP in the same resource group. The name is the V2 gateway's name with *-IP* appended. If AppGwResourceGroupName is provided and a public IP isn't provided, ensure that a public IP resource named AppGwV2Name-IP doesn't exist in a resource group named AppGwResourceGroupName in the V1 subscription.
+ * **validateMigration: [switch]: Optional**. Use this parameter if you want the script to do some basic configuration comparison validations after the V2 gateway creation and the configuration copy. By default, no validation is done. * **enableAutoScale: [switch]: Optional**. Use this parameter if you want the script to enable autoscaling on the new V2 gateway after it's created. By default, autoscaling is disabled. You can always manually enable it later on the newly created V2 gateway.
application-gateway Rewrite Http Headers Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-http-headers-url.md
HTTP headers allow a client and server to pass additional information with a req
Application Gateway allows you to add, remove, or update HTTP request and response headers while the request and response packets move between the client and backend pools.
-To learn how to rewrite request and response headers with Application Gateway using Azure portal, see [here](rewrite-url-portal.md).
+To learn how to rewrite request and response headers with Application Gateway using Azure portal, see [here](rewrite-http-headers-portal.md).
![img](./media/rewrite-http-headers-url/header-rewrite-overview.png)
azure-app-configuration Quickstart Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-javascript.md
In this quickstart, you will use Azure App Configuration to centralize storage a
- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/). - An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).-- [LTS versions of Node.js](https://nodejs.org/en/about/releases/). For information about installing Node.js either directly on Windows or using the Windows Subsystem for Linux (WSL), see [Get started with Node.js](/windows/dev-environment/javascript/nodejs-overview)
+- [LTS versions of Node.js](https://github.com/nodejs/release#release-schedule). For information about installing Node.js either directly on Windows or using the Windows Subsystem for Linux (WSL), see [Get started with Node.js](/windows/dev-environment/javascript/nodejs-overview)
## Add a key-value
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
### Kublr
-|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version
+|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version|
|--|--|--|--|--|
+|[Kublr 1.26.0](https://docs.kublr.com/releasenotes/1.26/release-1.26.0/)|1.26.4, 1.25.6, 1.24.13, 1.23.17, 1.22.17|1.21.0_2023-07-11|16.0.5100.7242|14.5 (Ubuntu 20.04)|
|Kublr 1.21.2 | 1.22.10 | 1.9.0_2022-07-12 | 16.0.312.4243 |12.3 (Ubuntu 12.3-1) | ### Lenovo
-|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version
+|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version|
|--|--|--|--|--| |Lenovo ThinkAgile MX1020 |1.24.6| 1.14.0_2022-12-13 |16.0.816.19223|Not validated|
-|Lenovo ThinkAgile MX3520 |AKS on Azure Stack HCI 21H2| 1.10.0_2022-08-09 |16.0.312.4243| 12.3 (Ubuntu 12.3-1)|
-
+|Lenovo ThinkAgile MX3520 |1.22.6| 1.10.0_2022-08-09 |16.0.312.4243| 12.3 (Ubuntu 12.3-1)|
### Nutanix
More tests will be added in future releases of Azure Arc-enabled data services.
- [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) - [Create a data controller - indirectly connected with the CLI](create-data-controller-indirect-cli.md) - To create a directly connected data controller, start with [Prerequisites to deploy the data controller in direct connectivity mode](create-data-controller-direct-prerequisites.md).+
azure-arc Conceptual Inner Loop Gitops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-inner-loop-gitops.md
Title: "Inner Loop Developer Experience for Teams Adopting GitOps" Previously updated : 06/18/2021 Last updated : 08/09/2023
This article describes how an established inner loop can enhance developer produ
## Inner dev loop frameworks
-Building and deploying containers can slow the inner dev experience and impact team productivity. Cloud-native development teams will benefit from a robust inner dev loop framework. Inner dev loop frameworks assist in the iterative process of writing code, building, and debugging.
+Building and deploying containers can slow the inner dev experience and impact team productivity. Cloud-native development teams benefit from a robust inner dev loop framework. Inner dev loop frameworks help with the iterative process of writing code, building, and debugging.
-Inner dev loop frameworks capabilities include:
+Capabilities of inner dev loop frameworks include:
-
-- Automate repetitive steps like building code, containers, and deploying to target cluster. -- Easily working with remote and local clusters, and supporting local tunnel debugging for hybrid setup.
+- Automation of repetitive steps such as building code and deploying to target cluster.
+- Enhanced ability to work with remote and local clusters, and supporting local tunnel debugging for hybrid setup.
- Ability to configure custom flow for team-based productivity.-- Allow handling of microservice dependencies. -- Hot reloading, port forwarding, log, and terminal access.
+- Handling microservice dependencies.
+- Hot reloading, port forwarding, log, and terminal access.
+Depending on the maturity and complexity of the service, dev teams can choose their cluster setup to accelerate the inner dev loop:
+- All local
+- All remote
+- Hybrid
-Depending on the maturity and complexity of the service, dev teams determine which cluster setup they will use to accelerate the inner dev loop:
-
-* Completely local
-* Completely remote
-* Hybrid
--
-Luckily, there are many frameworks out there that support the listed capabilities. Microsoft offers Bridge to Kubernetes for local tunnel debugging and there are similar market offerings like DevSpace, Scaffold, and Tilt, among others.
+Many frameworks support these capabilities. Microsoft offers [Bridge to Kubernetes](/visualstudio/bridge/overview-bridge-to-kubernetes) for [local tunnel debugging](/visualstudio/bridge/bridge-to-kubernetes-vs-code#install-and-use-local-tunnel-debugging). Many other similar market offerings are available, such as DevSpace, Skaffold, and Tilt.
> [!NOTE]
-> DonΓÇÖt confuse the market offering [DevSpace](https://github.com/loft-sh/devspace) with MicrosoftΓÇÖs previously named DevSpace, which is now called [Bridge to Kubernetes](https://code.visualstudio.com/docs/containers/bridge-to-kubernetes).
+> The market offering [DevSpace](https://github.com/loft-sh/devspace) shouldn't be confused with Microsoft's offering, [Bridge to Kubernetes](/visualstudio/bridge/overview-bridge-to-kubernetes), which was previously named DevSpace.
+## Inner loop to outer loop transition
-## Inner loop to outer loop transition
+Once you've evaluated and chosen an inner loop dev framework, you can build a seamless inner loop to outer loop transition.
-Once you've evaluated and chosen an inner loop dev framework, build seamless inner loop to outer loop transition.
+As described in the example scenario covered in [CI/CD workflow using GitOps](conceptual-gitops-flux2-ci-cd.md), an application developer works on application code within an application repository. This application repository also holds high-level deployment Helm and/or Kustomize templates.
-As described in the [CI/CD workflow using GitOps](conceptual-gitops-flux2-ci-cd.md) article's example, an application developer works on application code within an application repository. This application repository also holds high-level deployment Helm and/or Kustomize templates. CI\CD pipelines:
+The CI/CD pipelines:
-* Generate the low-level manifests from the high-level templates, adding environment-specific values
-* Create a pull request that merges the low-level manifests with the GitOps repo that holds desired state for the specific environment.
+- Generate the low-level manifests from the high-level templates, adding environment-specific values.
+- Create a pull request that merges the low-level manifests with the GitOps repo that holds desired state for the specific environment.
-Similar low-level manifests can be generated locally for the inner dev loop, using the configuration values local to the developer. Application developers can iterate on the code changes and use the low-level manifests to deploy and debug applications. Generation of the low-level manifests can be integrated into an inner loop workflow, using the developerΓÇÖs local configuration. Most of the inner loop framework allows configuring custom flows by either extending through custom plugins or injecting script invocation based on hooks.
+Similar low-level manifests can be generated locally for the inner dev loop, using the configuration values local to the developer. Application developers can iterate on the code changes and use the low-level manifests to deploy and debug applications. Generation of the low-level manifests can be integrated into an inner loop workflow, using the developer's local configuration. Most inner loop frameworks allow configuring custom flows, either by extending through custom plugins or by injecting script invocation based on hooks.
## Example inner loop workflow built with DevSpace framework
+To illustrate the inner loop workflow, we can look at an example scenario. This example uses the DevSpace framework, but the general workflow can be used with other frameworks.
-### Diagram A: Inner Loop Flow
+This diagram shows the workflow for the inner loop.
-### Diagram B: Inner Loop to Outer Loop transition
+This diagram shows the workflow for the inner loop to outer loop transition.
-## Example workflow
-As an application developer, Alice:
-- Authors a devspace.yaml to configure the inner loop.+
+In this example, as an application developer, Alice:
+
+- Authors a devspace.yaml file to configure the inner loop.
- Writes and tests application code using the inner loop for efficiency. - Deploys to staging or prod with outer loop. - Suppose Alice wants to update, run, and debug the application either in local or remote cluster. 1. Alice updates the local configuration for the development environment represented in .env file. 1. Alice runs `devspace use context` and selects the Kubernetes cluster context.
-1. Alice selects a namespace to work with by running `devspace use namespace <namespace_name>`.
-1. Alice can iterates changes to the application code, and deploys and debugs the application onto the target cluster by running `devspace dev`.
-1. Running `devspace dev` generates low-level manifests based on AliceΓÇÖs local configuration and deploys the application. These low-level manifests are configured with devspace hooks in devspace.yaml
-1. Alice doesn't need to rebuild the container every time she makes code changes, since DevSpace will enable hot reloading, using file sync to copy her latest changes inside the container.
-1. Running `devspace dev` will also deploy any dependencies configured in devspace.yaml, such as back-end dependencies to front-end.
+1. Alice selects a namespace to work with by running `devspace use namespace <namespace_name>`.
+1. Alice iterates on changes to the application code, then deploys and debugs the application onto the target cluster by running `devspace dev`.
+1. Running `devspace dev` generates low-level manifests based on Alice's local configuration and deploys the application. These low-level manifests are configured with DevSpace hooks in devspace.yaml.
+1. Alice doesn't need to rebuild the container every time she makes code changes, since DevSpace enables hot reloading, using file sync to copy her latest changes inside the container.
+1. Running `devspace dev` also deploys any dependencies configured in devspace.yaml, such as back-end dependencies to front-end.
1. Alice tests her changes by accessing the application through the forwarding configured through devspace.yaml. 1. Once Alice finalizes her changes, she can purge the deployment by running `devspace purge` and create a new pull request to merge her changes to the dev branch of the application repository. > [!NOTE]
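As a rough sketch of the commands in this workflow (the namespace name is a placeholder), the sequence Alice runs might look like this:

```bash
# Select the target cluster context and working namespace
devspace use context
devspace use namespace alice-dev

# Start the inner loop: deploy, sync file changes, and port-forward for debugging
devspace dev

# Clean up the deployment once the change is ready for a pull request
devspace purge
```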
-> Find the sample code for above workflow at this [GitHub repo](https://github.com/Azure/arc-cicd-demo-src)
+> Find the sample code for this workflow in our [GitHub repo](https://github.com/Azure/arc-cicd-demo-src).
## Next steps
-Learn more about creating connections between your cluster and a Git repository as a [configuration resource with Azure Arc-enabled Kubernetes](./conceptual-gitops-flux2.md)
+- Learn about creating connections between your cluster and a Git repository as a [configuration resource with Azure Arc-enabled Kubernetes](./conceptual-gitops-flux2.md).
+- Learn more about [CI/CD workflow using GitOps](conceptual-gitops-ci-cd.md).
azure-arc Monitor Gitops Flux 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/monitor-gitops-flux-2.md
Title: Monitor GitOps (Flux v2) status and activity Previously updated : 07/28/2023 Last updated : 08/11/2023 description: Learn how to monitor status, compliance, resource consumption, and reconciliation activity for GitOps with Flux v2. # Monitor GitOps (Flux v2) status and activity
-We provide dashboards to help you monitor status, compliance, resource consumption, and reconciliation activity for GitOps with Flux v2 in your Azure Arc-enabled Kubernetes clusters or Azure Kubernetes Service (AKS) clusters. These JSON dashboards can be imported to Grafana to help you view and analyze your data in real time.
+We provide dashboards to help you monitor status, compliance, resource consumption, and reconciliation activity for GitOps with Flux v2 in your Azure Arc-enabled Kubernetes clusters or Azure Kubernetes Service (AKS) clusters. These JSON dashboards can be imported to Grafana to help you view and analyze your data in real time. You can also set up alerts for this information.
## Prerequisites
The **Flux Configuration Compliance Status** table lists all Flux configurations
:::image type="content" source="media/monitor-gitops-flux2/flux-configuration-compliance.png" alt-text="Screenshot showing the Flux Configuration Compliance Status table in the Application Deployments dashboard." lightbox="media/monitor-gitops-flux2/flux-configuration-compliance.png":::
-The **Count of Flux Extension Deployments by Status** chart shows the count of clusters, based on their provisioning state.
+The **Count of Flux Extension Deployments by Status** chart shows the count of clusters, based on their provisioning state.
:::image type="content" source="media/monitor-gitops-flux2/flux-deployments-by-status.png" alt-text="Screenshot of the Flux Extension Deployments by Status pie chart in the Application Deployments dashboard.":::
The **Count of Flux Configurations by Compliance Status** chart shows the count
:::image type="content" source="media/monitor-gitops-flux2/flux-configurations-by-status.png" alt-text="Screenshot of the Flux Configuration by Compliance Status chart on the Application Deployments dashboard.":::
+### Filter dashboard data to track application deployments
+
+You can filter data in the **GitOps Flux - Application Deployments Dashboard** to change the information shown. For example, you can show data for only certain subscriptions or resource groups, or limit data to a particular cluster. To do so, select the filter option either from the top level dropdowns or from any column header in the tables.
+
+For example, in the **Flux Configuration Compliance Status** table, you can select a specific commit from the **SourceLastSyncCommit** column. By doing so, you can track the status of a configuration deployment to all of the clusters affected by that commit.
+
+### Create alerts for extension and configuration failures
+
+After you've imported the dashboard as described in the previous section, you can set up alerts. These alerts notify you when Flux extensions or Flux configurations experience failures.
+
+Follow the steps below to create an alert. Example queries are provided to detect extension provisioning or extension upgrade failures, or to detect compliance state failures.
+
+1. In the left navigation menu of the dashboard, select **Alerting**.
+1. Select **Alert rules**.
+1. Select **+ Create alert rule**. The new alert rule page opens, with the **Grafana managed alerts** option selected by default.
+1. In **Rule name**, add a descriptive name. This name is displayed in the alert rule list, and it will be used as the `alertname` label for every alert instance created from this rule.
+1. Under **Set a query and alert condition**:
+
+ - Select a data source. The same data source used for the dashboard may be used here.
+ - For **Service**, select **Azure Resource Graph**.
+ - Select the subscriptions from the dropdown list.
+ - Enter the query you want to use. For example, for extension provisioning or upgrade failures, you can enter this query:
+
+ ```kusto
+ kubernetesconfigurationresources
+ | where type == "microsoft.kubernetesconfiguration/extensions"
+ | extend provisioningState = tostring(properties.ProvisioningState)
+ | where provisioningState == "Failed"
+ | summarize count() by provisioningState
+ ```
+
+ Or for compliance state failures, you can enter this query:
+
+ ```kusto
+ kubernetesconfigurationresources
+ | where type == "microsoft.kubernetesconfiguration/fluxconfigurations"
+ | extend complianceState=tostring(properties.complianceState)
+ | where complianceState == "Non-Compliant"
+ | summarize count() by complianceState
+ ```
+
+ - For **Threshold**, select **A** for the input type and set the threshold to **0** to receive alerts even if just one extension fails on the cluster. Mark this as the **Alert condition**.
+
+ :::image type="content" source="media/monitor-gitops-flux2/application-dashboard-set-alerts.png" alt-text="Screenshot showing the alert creation process." lightbox="media/monitor-gitops-flux2/application-dashboard-set-alerts.png":::
+
+1. Specify the alert evaluation interval:
+
+ - For **Condition**, select the query or expression to trigger the alert rule.
+ - For **Evaluate every**, enter the evaluation frequency as a multiple of 10 seconds.
+ - For **Evaluate for**, specify how long the condition must be true before the alert is created.
+ - In **Configure no data and error handling**, indicate what should happen when the alert rule returns no data or returns an error.
+ - To check the results from running the query, select **Preview**.
+
+1. Add the storage location, rule group, and any additional metadata that you want to associate with the rule.
+
+ - For **Folder**, select the folder where the rule should be stored.
+ - For **Group**, specify a predefined group.
+ - If desired, add a description and summary to customize alert messages.
+ - Add Runbook URL, panel, dashboard, and alert IDs as needed.
+
+1. If desired, add any custom labels. Then select **Save**.
+
+You can also [configure contact points](https://grafana.com/docs/grafana/latest/alerting/alerting-rules/manage-contact-points/) and [configure notification policies](https://grafana.com/docs/grafana/latest/alerting/alerting-rules/create-notification-policy/) for your alerts.
+ ## Monitor resource consumption and reconciliations Follow these steps to import dashboards that let you monitor Flux resource consumption, reconciliations, API requests, and reconciler status.
Follow these steps to import dashboards that let you monitor Flux resource consu
1. [Link the Managed Prometheus workspace to the Managed Grafana instance](/azure/azure-monitor/essentials/azure-monitor-workspace-manage#link-a-grafana-workspace). This takes a few minutes to complete. 1. Follow the steps to [import these JSON dashboards to Grafana](/azure/managed-grafana/how-to-create-dashboard#import-a-json-dashboard).
-After you have imported the dashboards, they'll display information from the clusters that you're monitoring.
+After you have imported the dashboards, they'll display information from the clusters that you're monitoring. To show information only for a particular cluster or namespace, use the filters near the top of each dashboard.
The **Flux Control Plane** dashboard shows details about status resource consumption, reconciliations at the cluster level, and Kubernetes API requests.
The **Flux Cluster Stats** dashboard shows details about the number of reconcile
:::image type="content" source="media/monitor-gitops-flux2/flux-cluster-stats-dashboard.png" alt-text="Screenshot of the Flux Cluster Stats dashboard." lightbox="media/monitor-gitops-flux2/flux-cluster-stats-dashboard.png":::
-## Filter dashboard data to track Application Deployments
+### Create alerts for resource consumption and reconciliation issues
+
+After you've imported the dashboard as described in the previous section, you can set up alerts. These alerts notify you of resource consumption and reconciliation issues that may require attention.
+
+To enable these alerts, you deploy a Bicep template similar to the one shown here. The alert rules in this template are samples that can be modified as needed.
+
+Once you've downloaded the Bicep template and made your changes, [follow these steps to deploy the template](/azure/azure-resource-manager/bicep/template-specs).
+
+```bicep
+param azureMonitorWorkspaceName string
+param alertReceiverEmailAddress string
+
+param kustomizationLookbackPeriodInMinutes int = 5
+param helmReleaseLookbackPeriodInMinutes int = 5
+param gitRepositoryLookbackPeriodInMinutes int = 5
+param bucketLookbackPeriodInMinutes int = 5
+param helmRepoLookbackPeriodInMinutes int = 5
+param timeToResolveAlerts string = 'PT10M'
+param location string = resourceGroup().location
+
+resource azureMonitorWorkspace 'Microsoft.Monitor/accounts@2023-04-03' = {
+ name: azureMonitorWorkspaceName
+ location: location
+}
+
+resource fluxRuleActionGroup 'Microsoft.Insights/actionGroups@2023-01-01' = {
+ name: 'fluxRuleActionGroup'
+ location: 'global'
+ properties: {
+ enabled: true
+ groupShortName: 'fluxGroup'
+ emailReceivers: [
+ {
+ name: 'emailReceiver'
+ emailAddress: alertReceiverEmailAddress
+ }
+ ]
+ }
+}
+
+resource fluxRuleGroup 'Microsoft.AlertsManagement/prometheusRuleGroups@2023-03-01' = {
+ name: 'fluxRuleGroup'
+ location: location
+ properties: {
+ description: 'Flux Prometheus Rule Group'
+ scopes: [
+ azureMonitorWorkspace.id
+ ]
+ enabled: true
+ interval: 'PT1M'
+ rules: [
+ {
+ alert: 'KustomizationNotReady'
+ expression: 'sum by (cluster, namespace, name) (gotk_reconcile_condition{type="Ready", status="False", kind="Kustomization"}) > 0'
+ for: 'PT${kustomizationLookbackPeriodInMinutes}M'
+ labels: {
+ description: 'Kustomization reconciliation failing for last ${kustomizationLookbackPeriodInMinutes} minutes.'
+ }
+ annotations: {
+ description: 'Kustomization reconciliation failing for last ${kustomizationLookbackPeriodInMinutes} minutes.'
+ }
+ enabled: true
+ severity: 3
+ resolveConfiguration: {
+ autoResolved: true
+ timeToResolve: timeToResolveAlerts
+ }
+ actions: [
+ {
+ actionGroupId: fluxRuleActionGroup.id
+ }
+ ]
+ }
+ {
+ alert: 'HelmReleaseNotReady'
+ expression: 'sum by (cluster, namespace, name) (gotk_reconcile_condition{type="Ready", status="False", kind="HelmRelease"}) > 0'
+ for: 'PT${helmReleaseLookbackPeriodInMinutes}M'
+ labels: {
+ description: 'HelmRelease reconciliation failing for last ${helmReleaseLookbackPeriodInMinutes} minutes.'
+ }
+ annotations: {
+ description: 'HelmRelease reconciliation failing for last ${helmReleaseLookbackPeriodInMinutes} minutes.'
+ }
+ enabled: true
+ severity: 3
+ resolveConfiguration: {
+ autoResolved: true
+ timeToResolve: timeToResolveAlerts
+ }
+ actions: [
+ {
+ actionGroupId: fluxRuleActionGroup.id
+ }
+ ]
+ }
+ {
+ alert: 'GitRepositoryNotReady'
+ expression: 'sum by (cluster, namespace, name) (gotk_reconcile_condition{type="Ready", status="False", kind="GitRepository"}) > 0'
+ for: 'PT${gitRepositoryLookbackPeriodInMinutes}M'
+ labels: {
+ description: 'GitRepository reconciliation failing for last ${gitRepositoryLookbackPeriodInMinutes} minutes.'
+ }
+ annotations: {
+ description: 'GitRepository reconciliation failing for last ${gitRepositoryLookbackPeriodInMinutes} minutes.'
+ }
+ enabled: true
+ severity: 3
+ resolveConfiguration: {
+ autoResolved: true
+ timeToResolve: timeToResolveAlerts
+ }
+ actions: [
+ {
+ actionGroupId: fluxRuleActionGroup.id
+ }
+ ]
+ }
+ {
+ alert: 'BucketNotReady'
+ expression: 'sum by (cluster, namespace, name) (gotk_reconcile_condition{type="Ready", status="False", kind="Bucket"}) > 0'
+ for: 'PT${bucketLookbackPeriodInMinutes}M'
+ labels: {
+ description: 'Bucket reconciliation failing for last ${bucketLookbackPeriodInMinutes} minutes.'
+ }
+ annotations: {
+ description: 'Bucket reconciliation failing for last ${bucketLookbackPeriodInMinutes} minutes.'
+ }
+ enabled: true
+ severity: 3
+ resolveConfiguration: {
+ autoResolved: true
+ timeToResolve: timeToResolveAlerts
+ }
+ actions: [
+ {
+ actionGroupId: fluxRuleActionGroup.id
+ }
+ ]
+ }
+ {
+ alert: 'HelmRepositoryNotReady'
+ expression: 'sum by (cluster, namespace, name) (gotk_reconcile_condition{type="Ready", status="False", kind="HelmRepository"}) > 0'
+ for: 'PT${helmRepoLookbackPeriodInMinutes}M'
+ labels: {
+ description: 'HelmRepository reconciliation failing for last ${helmRepoLookbackPeriodInMinutes} minutes.'
+ }
+ annotations: {
+ description: 'HelmRepository reconciliation failing for last ${helmRepoLookbackPeriodInMinutes} minutes.'
+ }
+ enabled: true
+ severity: 3
+ resolveConfiguration: {
+ autoResolved: true
+ timeToResolve: timeToResolveAlerts
+ }
+ actions: [
+ {
+ actionGroupId: fluxRuleActionGroup.id
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
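As a minimal sketch (the file name and parameter values are placeholders), deploying the template with the Azure CLI could look like this:

```bash
# Deploy the sample alert rules template to a resource group
az deployment group create \
  --resource-group my-monitoring-rg \
  --template-file flux-alerts.bicep \
  --parameters azureMonitorWorkspaceName=my-monitor-workspace alertReceiverEmailAddress=admin@contoso.com
```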
-You can filter data in the **GitOps Flux - Application Deployments Dashboard** to change the information shown. For example, you can show data for only certain subscriptions or resource groups, or limit data to a particular cluster. To do so, select the filter option either from the top level dropdowns or from any column header in the tables.
-
-For example, in the **Flux Configuration Compliance Status** table, you can select a specific commit from the **SourceLastSyncCommit** column. By doing so, you can track the status of a configuration deployment to all of the clusters affected by that commit.
## Next steps
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled Kubernetes description: Sample Azure Resource Graph queries for Azure Arc-enabled Kubernetes showing use of resource types and tables to access Azure Arc-enabled Kubernetes related resources and properties. Previously updated : 07/07/2022 Last updated : 08/09/2023
This page is a collection of [Azure Resource Graph](../../governance/resource-graph/overview.md) sample queries for Azure Arc-enabled Kubernetes. For a complete list of Azure Resource Graph samples, see
-[Resource Graph samples by Category](../../governance/resource-graph/samples/samples-by-category.md)
-and [Resource Graph samples by Table](../../governance/resource-graph/samples/samples-by-table.md).
+[Resource Graph samples by category](../../governance/resource-graph/samples/samples-by-category.md)
+and [Resource Graph samples by table](../../governance/resource-graph/samples/samples-by-table.md).
## Sample queries
and [Resource Graph samples by Table](../../governance/resource-graph/samples/sa
- Learn more about the [query language](../../governance/resource-graph/concepts/query-language.md). - Learn more about how to [explore resources](../../governance/resource-graph/concepts/explore-resources.md).-- See samples of [Starter language queries](../../governance/resource-graph/samples/starter.md).-- See samples of [Advanced language queries](../../governance/resource-graph/samples/advanced.md).
+- See samples of [starter language queries](../../governance/resource-graph/samples/starter.md).
+- See samples of [advanced language queries](../../governance/resource-graph/samples/advanced.md).
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
The following providers and their corresponding Kubernetes distributions have su
| Provider name | Distribution name | Version | | | -- | - | | RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) | [4.9.43](https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html), [4.10.23](https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html), 4.11.0-rc.6, [4.13.4](https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html) |
-| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) | TKGm 2.2; upstream K8s v1.25.7+vmware.2 <br> TKGm 2.1.0; upstream K8s v1.24.9+vmware.1 <br> TKGm 1.6.0; upstream K8s v1.23.8+vmware.2 <br>TKGm 1.5.3; upstream K8s v1.22.8+vmware.1 <br>TKGm 1.4.0; upstream K8s v1.21.2+vmware.1 <br>TKGm 1.3.1; upstream K8s v1.20.5+vmware.2 <br>TKGm 1.2.1; upstream K8s v1.19.3+vmware.1 |
+| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) |TKGm 2.2; upstream K8s v1.25.7+vmware.2 <br>TKGm 2.1.0; upstream K8s v1.24.9+vmware.1 <br>TKGm 1.6.0; upstream K8s v1.23.8+vmware.2 <br>TKGm 1.5.3; upstream K8s v1.22.8+vmware.1 <br>TKGm 1.4.0; upstream K8s v1.21.2+vmware.1 <br>TKGm 1.3.1; upstream K8s v1.20.5+vmware.2 <br>TKGm 1.2.1; upstream K8s v1.19.3+vmware.1 |
| Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes) | [1.24](https://ubuntu.com/kubernetes/docs/1.24/components) | | SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.3.13](https://github.com/rancher/rke/releases/tag/v1.3.13); Kubernetes versions: 1.24.2, 1.23.8 | | Nutanix | [Nutanix Kubernetes Engine](https://www.nutanix.com/products/kubernetes-engine) | Version [2.5](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_5:Nutanix-Kubernetes-Engine-v2_5); upstream K8s v1.23.11 |
-| Kublr | [Kublr Managed K8s](https://kublr.com/managed-kubernetes/) Distribution | Upstream K8s Version: 1.22.10 <br> Upstream K8s Version: 1.21.3 |
+| Kublr | [Kublr Managed K8s](https://kublr.com/managed-kubernetes/) Distribution |[Kublr 1.26.0](https://docs.kublr.com/releasenotes/1.26/release-1.26.0/); Upstream K8s Versions: 1.21.3, 1.22.10, 1.22.17, 1.23.17, 1.24.13, 1.25.6, 1.26.4 |
| Mirantis | [Mirantis Kubernetes Engine](https://www.mirantis.com/software/mirantis-kubernetes-engine/) | MKE Version [3.6.0](https://docs.mirantis.com/mke/3.6/release-notes/3-6-0.html) <br> MKE Version [3.5.5](https://docs.mirantis.com/mke/3.5/release-notes/3-5-5.html) <br> MKE Version [3.4.7](https://docs.mirantis.com/mke/3.4/release-notes/3-4-7.html) |
-| Wind River | [Wind River Cloud Platform](https://www.windriver.com/studio/operator/cloud-platform) | Wind River Cloud Platform 22.12; Upstream K8s version: 1.24.4 <br>Wind River Cloud Platform 22.06; Upstream K8s version: 1.23.1 <br>Wind River Cloud Platform 21.12; Upstream K8s version: 1.21.8 <br>Wind River Cloud Platform 21.05; Upstream K8s version: 1.18.1 |
+| Wind River | [Wind River Cloud Platform](https://www.windriver.com/studio/operator/cloud-platform) |Wind River Cloud Platform 22.12; Upstream K8s version: 1.24.4 <br>Wind River Cloud Platform 22.06; Upstream K8s version: 1.23.1 <br>Wind River Cloud Platform 21.12; Upstream K8s version: 1.21.8 <br>Wind River Cloud Platform 21.05; Upstream K8s version: 1.18.1 |
The Azure Arc team also ran the conformance tests and validated Azure Arc-enabled Kubernetes scenarios on the following public cloud providers:
The conformance tests run as part of the Azure Arc-enabled Kubernetes validation
* [Learn how to connect an existing Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md) * Learn about the [Azure Arc agents](conceptual-agent-overview.md) deployed on Kubernetes clusters when connecting them to Azure Arc.+
azure-arc Vmware Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/vmware-faq.md
Title: Azure Arc-enabled servers VMware Frequently Asked Questions description: Learn how to use Azure Arc-enabled servers on virtual machines running in VMware vSphere environments. Previously updated : 01/20/2023 Last updated : 08/10/2023
Yes. Azure Arc-enabled servers work with VMs running in an on-premises VMware vS
Azure Arc-enabled servers and/or Azure Arc-enabled VMware vSphere work with [all supported versions](./prerequisites.md) of Windows Server and major distributions of Linux. As mentioned, even though Arc-enabled servers work with VMware vSphere virtual machines, the [Connected Machine agent](agent-overview.md) has no notion of familiarity with the underlying infrastructure fabric and virtualization layer.
-## Should I use Arc-enabled servers or Arc-enabled VMware vSphere, and can I use both?
+## Should I use Arc-enabled servers or Arc-enabled VMware vSphere for my VMware VMs?
-While Azure Arc-enabled servers and Azure Arc-enabled VMware vSphere can be used in conjunction with one another, please note that this will produce dual representations in the Azure portal of the same underlying virtual machine. This scenario can potentially introduce a "duplicate" guest management experience and is not advisable.
+Each option has its own unique benefits and can be combined as needed. Arc-enabled servers allows you to manage the guest OS of your VMs with the Azure Connected Machine agent. Arc-enabled VMware vSphere enables you to onboard your VMware environment at-scale to Azure Arc with automatic discovery, in addition to performing full VM lifecycle and virtual hardware operations. You have the flexibility to start with either option and incorporate the other one later without any disruption. With both options, you'll enjoy the same consistent experience.
azure-cache-for-redis Cache How To Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-functions.md
- Title: Using Azure Functions
-description: Learn how to use Azure Functions
-
-zone_pivot_groups: cache-redis-zone-pivot-group
----- Previously updated : 05/24/2023--
-# Serverless event-based architectures with Azure Cache for Redis and Azure Functions (preview)
-
-This article describes how to use Azure Cache for Redis with [Azure Functions](/azure/azure-functions/functions-overview) to create optimized serverless and event-driven architectures.
-Azure Cache for Redis can be used as a [trigger](/azure/azure-functions/functions-triggers-bindings) for Azure Functions, allowing Redis to initiate a serverless workflow.
-This functionality can be highly useful in data architectures like a [write-behind cache](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-caching/#types-of-caching), or any [event-based architectures](/azure/architecture/guide/architecture-styles/event-driven).
-
-There are three triggers supported in Azure Cache for Redis:
--- `RedisPubSubTrigger` triggers on [Redis pubsub messages](https://redis.io/docs/manual/pubsub/)-- `RedisListTrigger` triggers on [Redis lists](https://redis.io/docs/data-types/lists/)-- `RedisStreamTrigger` triggers on [Redis streams](https://redis.io/docs/data-types/streams/)-
-[Keyspace notifications](https://redis.io/docs/manual/keyspace-notifications/) can also be used as triggers through `RedisPubSubTrigger`.
-
-## Scope of availability for functions triggers
-
-|Tier | Basic | Standard, Premium | Enterprise, Enterprise Flash |
-||::|::|::|
-|Pub/Sub | Yes | Yes | Yes |
-|Lists | Yes | Yes | Yes |
-|Streams | Yes | Yes | Yes |
-
-> [!IMPORTANT]
-> Redis triggers are not currently supported on consumption functions.
->
-
-## Triggering on keyspace notifications
-
-Redis offers a built-in concept called [keyspace notifications](https://redis.io/docs/manual/keyspace-notifications/). When enabled, this feature publishes notifications of a wide range of cache actions to a dedicated pub/sub channel. Supported actions include actions that affect specific keys, called _keyspace notifications_, and specific commands, called _keyevent notifications_. A huge range of Redis actions are supported, such as `SET`, `DEL`, and `EXPIRE`. The full list can be found in the [keyspace notification documentation](https://redis.io/docs/manual/keyspace-notifications/).
-
-The `keyspace` and `keyevent` notifications are published with the following syntax:
-
-```
-PUBLISH __keyspace@0__:<affectedKey> <command>
-PUBLISH __keyevent@0__:<affectedCommand> <key>
-```
-
-Because these events are published on pub/sub channels, the `RedisPubSubTrigger` is able to pick them up. See the [RedisPubSubTrigger](#redispubsubtrigger) section for more examples.
-
-> [!IMPORTANT]
-> In Azure Cache for Redis, `keyspace` events must be enabled before notifications are published. For more information, see [Advanced Settings](cache-configure.md#keyspace-notifications-advanced-settings).
-
-## Prerequisites and limitations
--- The `RedisPubSubTrigger` isn't capable of listening to [keyspace notifications](https://redis.io/docs/manual/keyspace-notifications/) on clustered caches.-- Basic tier functions don't support triggering on `keyspace` or `keyevent` notifications through the `RedisPubSubTrigger`.-- The `RedisPubSubTrigger` isn't supported with consumption functions.-
-## Trigger usage
-
-### RedisPubSubTrigger
-
-The `RedisPubSubTrigger` subscribes to a specific channel pattern using [`PSUBSCRIBE`](https://redis.io/commands/psubscribe/), and surfaces messages received on those channels to the function.
-
-> [!WARNING]
-> This trigger isn't supported on a [consumption plan](/azure/azure-functions/consumption-plan) because Redis PubSub requires clients to always be actively listening to receive all messages. For consumption plans, your function might miss certain messages published to the channel.
->
-
-> [!NOTE]
-> Functions with the `RedisPubSubTrigger` should not be scaled out to multiple instances.
-> Each instance listens and processes each pubsub message, resulting in duplicate processing.
-
-#### Inputs for RedisPubSubTrigger
--- `ConnectionString`: connection string to the redis cache (for example, `<cacheName>.redis.cache.windows.net:6380,password=...`).-- `Channel`: name of the pubsub channel that the trigger should listen to.-
-This sample listens to the channel "channel" at a localhost Redis instance at `127.0.0.1:6379`
--
-```csharp
-[FunctionName(nameof(PubSubTrigger))]
-public static void PubSubTrigger(
- [RedisPubSubTrigger(ConnectionString = "127.0.0.1:6379", Channel = "channel")] RedisMessageModel model,
- ILogger logger)
-{
- logger.LogInformation(JsonSerializer.Serialize(model));
-}
-```
--
-```java
-@FunctionName("PubSubTrigger")
- public void PubSubTrigger(
- @RedisPubSubTrigger(
- name = "message",
- connectionStringSetting = "redisLocalhost",
- channel = "channel")
- String message,
- final ExecutionContext context) {
- context.getLogger().info(message);
- }
-```
---
-```json
-{
- "bindings": [
- {
- "type": "redisPubSubTrigger",
- "connectionStringSetting": "redisLocalhost",
- "channel": "channel",
- "name": "message",
- "direction": "in"
- }
- ],
- "scriptFile": "__init__.py"
-}
-```
--
-This sample listens to any keyspace notifications for the key `myKey` in a localhost Redis instance at `127.0.0.1:6379`.
--
-```csharp
-
-[FunctionName(nameof(PubSubTrigger))]
-public static void PubSubTrigger(
- [RedisPubSubTrigger(ConnectionString = "127.0.0.1:6379", Channel = "__keyspace@0__:myKey")] RedisMessageModel model,
- ILogger logger)
-{
- logger.LogInformation(JsonSerializer.Serialize(model));
-}
-```
--
-```java
-@FunctionName("KeyspaceTrigger")
- public void KeyspaceTrigger(
- @RedisPubSubTrigger(
- name = "message",
- connectionStringSetting = "redisLocalhost",
- channel = "__keyspace@0__:myKey")
- String message,
- final ExecutionContext context) {
- context.getLogger().info(message);
- }
-```
---
-```json
-{
- "bindings": [
- {
- "type": "redisPubSubTrigger",
- "connectionStringSetting": "redisLocalhost",
- "channel": "__keyspace@0__:myKey",
- "name": "message",
- "direction": "in"
- }
- ],
- "scriptFile": "__init__.py"
-}
-```
--
-This sample listens to any `keyevent` notifications for the delete command [`DEL`](https://redis.io/commands/del/) in a localhost Redis instance at `127.0.0.1:6379`.
--
-```csharp
-[FunctionName(nameof(PubSubTrigger))]
-public static void PubSubTrigger(
- [RedisPubSubTrigger(ConnectionString = "127.0.0.1:6379", Channel = "__keyevent@0__:del")] RedisMessageModel model,
- ILogger logger)
-{
- logger.LogInformation(JsonSerializer.Serialize(model));
-}
-```
--
-```java
- @FunctionName("KeyeventTrigger")
- public void KeyeventTrigger(
- @RedisPubSubTrigger(
- name = "message",
- connectionStringSetting = "redisLocalhost",
- channel = "__keyevent@0__:del")
- String message,
- final ExecutionContext context) {
- context.getLogger().info(message);
- }
-```
---
-```json
-{
- "bindings": [
- {
- "type": "redisPubSubTrigger",
- "connectionStringSetting": "redisLocalhost",
- "channel": "__keyevent@0__:del",
- "name": "message",
- "direction": "in"
- }
- ],
- "scriptFile": "__init__.py"
-}
-```
--
-### RedisListTrigger
-
-The `RedisListTrigger` pops elements from a list and surfaces those elements to the function. The trigger polls Redis at a configurable fixed interval, and uses [`LPOP`](https://redis.io/commands/lpop/)/[`RPOP`](https://redis.io/commands/rpop/)/[`LMPOP`](https://redis.io/commands/lmpop/) to pop elements from the lists.
-
-#### Inputs for RedisListTrigger
--- `ConnectionStringSetting`: connection string to the redis cache, for example`<cacheName>.redis.cache.windows.net:6380,password=...`.-- `Key`: Key or keys to read from, space-delimited.
- - Multiple keys only supported on Redis 7.0+ using [`LMPOP`](https://redis.io/commands/lmpop/).
- - Listens to only the first key given in the argument using [`LPOP`](https://redis.io/commands/lpop/)/[`RPOP`](https://redis.io/commands/rpop/) on Redis versions less than 7.0.
- - This field can be resolved using `INameResolver`
-- (optional) `PollingIntervalInMs`: How often to poll Redis in milliseconds.
- - Default: 1000
-- (optional) `MessagesPerWorker`: How many messages each functions worker "should" process. Used to determine how many workers the function should scale to.
- - Default: 100
-- (optional) `Count`: Number of elements to pull from Redis at one time. These are processed in parallel.
- - Default: 10
- - Only supported on Redis 6.2+ using the `COUNT` argument in [`LPOP`](https://redis.io/commands/lpop/)/[`RPOP`](https://redis.io/commands/rpop/).
-- (optional) `ListPopFromBeginning`: determines whether to pop elements from the beginning using [`LPOP`](https://redis.io/commands/lpop/) or to pop elements from the end using [`RPOP`](https://redis.io/commands/rpop/).
- - Default: true
-
-The following sample polls the key `listTest` at a localhost Redis instance at `127.0.0.1:6379`:
--
-```csharp
-[FunctionName(nameof(ListTrigger))]
-public static void ListTrigger(
- [RedisListTrigger(ConnectionStringSetting = "127.0.0.1:6379", Key = "listTest")] RedisMessageModel model,
- ILogger logger)
-{
- logger.LogInformation(JsonSerializer.Serialize(model));
-}
-```
--
-```java
-@FunctionName("ListTrigger")
- public void ListTrigger(
- @RedisListTrigger(
- name = "entry",
- connectionStringSetting = "redisLocalhost",
- key = "listTest",
- pollingIntervalInMs = 100,
- messagesPerWorker = 10,
- count = 1,
- listPopFromBeginning = false)
- String entry,
- final ExecutionContext context) {
- context.getLogger().info(entry);
- }
-```
---
-```json
-{
- "bindings": [
- {
- "type": "redisListTrigger",
- "listPopFromBeginning": true,
- "connectionStringSetting": "redisLocalhost",
- "key": "listTest",
- "pollingIntervalInMs": 1000,
- "messagesPerWorker": 100,
- "count": 10,
- "name": "entry",
- "direction": "in"
- }
- ],
- "scriptFile": "__init__.py"
-}
-```
--
-### RedisStreamTrigger
-
-The `RedisStreamTrigger` pops elements from a stream and surfaces those elements to the function.
-The trigger polls Redis at a configurable fixed interval, and uses [`XREADGROUP`](https://redis.io/commands/xreadgroup/) to read elements from the stream.
-The consumer group for all function instances will be the ID of the function. For example, for the StreamTrigger function in [this sample](https://github.com/Azure/azure-functions-redis-extension/blob/main/samples/dotnet/RedisSamples.cs), the consumer group would be `Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisSamples.StreamTrigger`.
-Each function creates a new random GUID to use as its consumer name within the group to ensure that scaled out instances of the function don't read the same messages from the stream.
-
-#### Inputs for RedisStreamTrigger
--- `ConnectionStringSetting`: connection string to the redis cache, for example, `<cacheName>.redis.cache.windows.net:6380,password=...`.-- `Key`: Key or keys to read from, space-delimited.
- - Uses [`XREADGROUP`](https://redis.io/commands/xreadgroup/).
- - This field can be resolved using `INameResolver`.
-- (optional) `PollingIntervalInMs`: How often to poll Redis in milliseconds.
- - Default: 1000
-- (optional) `MessagesPerWorker`: How many messages each functions worker "should" process. Used to determine how many workers the function should scale to.
- - Default: 100
-- (optional) `Count`: Number of elements to pull from Redis at one time.
- - Default: 10
-- (optional) `DeleteAfterProcess`: If the listener will delete the stream entries after the function runs.
- - Default: false
-
-The following sample polls the key `streamTest` at a localhost Redis instance at `127.0.0.1:6379`:
--
-```csharp
-[FunctionName(nameof(StreamTrigger))]
-public static void StreamTrigger(
- [RedisStreamTrigger(ConnectionString = "127.0.0.1:6379", Keys = "streamTest")] RedisMessageModel model,
- ILogger logger)
-{
- logger.LogInformation(JsonSerializer.Serialize(model));
-}
-```
--
-```java
-@FunctionName("StreamTrigger")
- public void StreamTrigger(
- @RedisStreamTrigger(
- name = "entry",
- connectionStringSetting = "redisLocalhost",
- key = "streamTest",
- pollingIntervalInMs = 100,
- messagesPerWorker = 10,
- count = 1,
- deleteAfterProcess = true)
- String entry,
- final ExecutionContext context) {
- context.getLogger().info(entry);
- }
-```
---
-```json
-{
- "bindings": [
- {
- "type": "redisStreamTrigger",
- "deleteAfterProcess": false,
- "connectionStringSetting": "redisLocalhost",
- "key": "streamTest",
- "pollingIntervalInMs": 1000,
- "messagesPerWorker": 100,
- "count": 10,
- "name": "entry",
- "direction": "in"
- }
- ],
- "scriptFile": "__init__.py"
-}
-```
--
-### Return values
-
-All triggers return a `RedisMessageModel` object that has two fields:
--- `Trigger`: The pubsub channel, list key, or stream key that the function is listening to.-- `Message`: The pubsub message, list element, or stream element.--
-```csharp
-namespace Microsoft.Azure.WebJobs.Extensions.Redis
-{
- public class RedisMessageModel
- {
- public string Trigger { get; set; }
- public string Message { get; set; }
- }
-}
-```
--
-```java
-public class RedisMessageModel {
- public String Trigger;
- public String Message;
-}
-```
---
-```python
-class RedisMessageModel:
- def __init__(self, trigger, message):
- self.Trigger = trigger
- self.Message = message
-```
--
-## Next steps
--- [Introduction to Azure Functions](/azure/azure-functions/functions-overview)-- [Get started with Azure Functions triggers in Azure Cache for Redis](cache-tutorial-functions-getting-started.md)-- [Using Azure Functions and Azure Cache for Redis to create a write-behind cache](cache-tutorial-write-behind.md)
azure-fluid-relay Connect Fluid Azure Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/connect-fluid-azure-service.md
The sections below will explain how to use `AzureClient` in your own application
## Connecting to the service
-To connect to an Azure Fluid Relay instance, you first need to create an `AzureClient`. You must provide some configuration parameters including the tenant ID, service URL, and a token provider to generate the JSON Web Token (JWT) that will be used to authorize the current user against the service. The [@fluidframework/test-client-utils](https://fluidframework.com/docs/apis/test-client-utils/) package provides an [InsecureTokenProvider](https://fluidframework.com/docs/apis/test-client-utils/insecuretokenprovider-class) that can be used for development purposes.
+To connect to an Azure Fluid Relay instance, you first need to create an `AzureClient`. You must provide some configuration parameters including the tenant ID, service URL, and a token provider to generate the JSON Web Token (JWT) that will be used to authorize the current user against the service. The [@fluidframework/test-client-utils](https://fluidframework.com/docs/apis/test-client-utils/) package provides an InsecureTokenProvider that can be used for development purposes.
> [!CAUTION] > The `InsecureTokenProvider` should only be used for development purposes because **using it exposes the tenant key secret in your client-side code bundle.** This must be replaced with an implementation of [ITokenProvider](https://fluidframework.com/docs/apis/azure-client/itokenprovider-interface/) that fetches the token from your own backend service that is responsible for signing it with the tenant key. An example implementation is [AzureFunctionTokenProvider](https://fluidframework.com/docs/apis/azure-client/azurefunctiontokenprovider-class). For more information, see [How to: Write a TokenProvider with an Azure Function](../how-tos/azure-function-token-provider.md). Note that the `id` and `name` fields are arbitrary.
azure-fluid-relay Deploy Fluid Static Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/deploy-fluid-static-web-apps.md
If you don't have an Azure subscription, [create a free trial account](https://a
## Connect to Azure Fluid Relay
-You can connect to Azure Fluid Relay by providing the tenant ID and key that is uniquely generated for you when creating the Azure resource. You can build your own token provider implementation or you can use the two token provider implementations that the Fluid Framework provides: [InsecureTokenProvider](https://fluidframework.com/docs/apis/test-client-utils/insecuretokenprovider-class) and [AzureFunctionTokenProvider](https://fluidframework.com/docs/apis/azure-client/azurefunctiontokenprovider-class).
+You can connect to Azure Fluid Relay by providing the tenant ID and key that is uniquely generated for you when creating the Azure resource. You can build your own token provider implementation, or you can use a token provider implementation that the Fluid Framework provides, such as [AzureFunctionTokenProvider](https://fluidframework.com/docs/apis/azure-client/azurefunctiontokenprovider-class).
To learn more about using InsecureTokenProvider for local development, see [Connecting to the service](connect-fluid-azure-service.md#connecting-to-the-service) and [Authentication and authorization in your app](../concepts/authentication-authorization.md#the-token-provider).
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
This section shows how to work with the underlying HTTP request and response obj
> [!NOTE] > Not all features of ASP.NET Core are exposed by this model. Specifically, the ASP.NET Core middleware pipeline and routing capabilities are not available.
-1. Add a reference to the [Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore NuGet package, version 1.0.0-preview2 or later](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore/1.0.0-preview2) to your project.
+1. Add a reference to the [Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore NuGet package, version 1.0.0-preview4 or later](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore/1.0.0-preview4) to your project.
You must also update your project to use [version 1.11.0 or later of Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/1.11.0) and [version 1.16.0 or later of Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/1.16.0).
azure-functions Azfd0002 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/errors-diagnostics/diagnostic-events/azfd0002.md
description: "AZFD0002: Value of AzureWebJobsStorage app setting is invalid." - Previously updated : 09/03/2022+ Last updated : 08/10/2023 # AZFD0002: Value of AzureWebJobsStorage app setting is invalid.
The `AzureWebJobsStorage` app setting is used to store the connection string of
For more information, see [AzureWebJobsStorage](../../functions-app-settings.md#azurewebjobsstorage). ## How to resolve the event
-Update the value of the `AzureWebJobsStorage` app setting on your function app with a valid storage account connection string.
+Update the value of the `AzureWebJobsStorage` app setting on your function app with a valid storage account connection string. For more information, see [Troubleshoot error: "Azure Functions Runtime is unreachable"](../../functions-recover-storage-account.md).
## When to suppress the event
-You should suppress this event when your function app uses an Azure Key Vault reference in the `AzureWebjobsStorage` app setting instead of a connection string. For more information, see [Source application settings from Key Vault](../../../app-service/app-service-key-vault-references.md?toc=%2Fazure%2Fazure-functions%2Ftoc.json#source-app-settings-from-key-vault)
+You should suppress this event when your function app uses an Azure Key Vault reference in the `AzureWebjobsStorage` app setting instead of a connection string. For more information, see [Source application settings from Key Vault](../../../app-service/app-service-key-vault-references.md?toc=%2Fazure%2Fazure-functions%2Ftoc.json#source-app-settings-from-key-vault).
azure-functions Functions Bindings Cache Trigger Redislist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redislist.md
+
+ Title: Using RedisListTrigger Azure Function (preview)
+description: Learn how to use RedisListTrigger Azure Functions
+
+zone_pivot_groups: programming-languages-set-functions-lang-workers
++++ Last updated : 08/07/2023+++
+# RedisListTrigger Azure Function (preview)
+
+The `RedisListTrigger` pops new elements from a list and surfaces those entries to the function.
+
+## Scope of availability for functions triggers
+
+|Tier | Basic | Standard, Premium | Enterprise, Enterprise Flash |
+||::|::|::|
+| Lists | Yes | Yes | Yes |
+
+> [!IMPORTANT]
+> Redis triggers are not currently supported on Azure Functions Consumption plan.
+>
+
+## Example
++
+The following sample polls the key `listTest` at a localhost Redis instance at `127.0.0.1:6379`:
+
+### [In-process](#tab/in-process)
+
+```csharp
+[FunctionName(nameof(ListsTrigger))]
+public static void ListsTrigger(
+ [RedisListTrigger("Redis", "listTest")] string entry,
+ ILogger logger)
+{
+ logger.LogInformation($"The entry pushed to the list listTest: '{entry}'");
+}
+```
+
+### [Isolated process](#tab/isolated-process)
+
+The isolated process examples aren't available in preview.
++++
+The following sample polls the key `listTest`, using the connection string stored in the app setting named `redisLocalhost`:
+
+```java
+ @FunctionName("ListTrigger")
+ public void ListTrigger(
+ @RedisListTrigger(
+ name = "entry",
+ connectionStringSetting = "redisLocalhost",
+ key = "listTest",
+ pollingIntervalInMs = 100,
+ messagesPerWorker = 10,
+ count = 1,
+ listPopFromBeginning = false)
+ String entry,
+ final ExecutionContext context) {
+ context.getLogger().info(entry);
+ }
+```
++
+### [v3](#tab/javascript-v1)
+
+Each sample uses the same `index.js` file, with binding data in the `function.json` file.
+
+Here's the `index.js` file:
+
+```javascript
+module.exports = async function (context, entry) {
+ context.log(entry);
+}
+```
+
+From `function.json`, here's the binding data:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisListTrigger",
+ "listPopFromBeginning": true,
+ "connectionStringSetting": "redisLocalhost",
+ "key": "listTest",
+ "pollingIntervalInMs": 1000,
+ "messagesPerWorker": 100,
+ "count": 10,
+ "name": "entry",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "index.js"
+}
+```
+
+### [v4](#tab/javascript-v2)
+
+The JavaScript v4 programming model example isn't available in preview.
++++
+Each sample uses the same `run.ps1` file, with binding data in the `function.json` file.
+
+Here's the `run.ps1` file:
+
+```powershell
+param($entry, $TriggerMetadata)
+Write-Host $entry
+
+```
+
+From `function.json`, here's the binding data:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisListTrigger",
+ "listPopFromBeginning": true,
+ "connectionStringSetting": "redisLocalhost",
+ "key": "listTest",
+ "pollingIntervalInMs": 1000,
+ "messagesPerWorker": 100,
+ "count": 10,
+ "name": "entry",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "run.ps1"
+}
+```
++
+Each sample uses the same `__init__.py` file, with binding data in the `function.json` file.
+
+### [v1](#tab/python-v1)
+
+The Python v1 programming model requires you to define bindings in a separate _function.json_ file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+
+Here's the `__init__.py` file:
+
+```python
+import logging
+
+def main(entry: str):
+ logging.info(entry)
+```
+
+From `function.json`, here's the binding data:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisListTrigger",
+ "listPopFromBeginning": true,
+ "connectionStringSetting": "redisLocalhost",
+ "key": "listTest",
+ "pollingIntervalInMs": 1000,
+ "messagesPerWorker": 100,
+ "count": 10,
+ "name": "entry",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "__init__.py"
+}
+```
+
+### [v2](#tab/python-v2)
+
+The Python v2 programming model example isn't available in preview.
++++
+## Attributes
+
+| Parameter | Description | Required | Default |
+|||:--:|--:|
+| `ConnectionStringSetting` | Name of the setting in the `appsettings` that holds the cache connection string (for example, `<cacheName>.redis.cache.windows.net:6380,password=...`). | Yes | |
+| `Key` | Key to read from. This field can be resolved using `INameResolver`. | Yes | |
+| `PollingIntervalInMs` | How often to poll Redis in milliseconds. | Optional | `1000` |
+| `MessagesPerWorker` | How many messages each functions instance should process. Used to determine how many instances the function should scale to. | Optional | `100` |
+| `Count` | Number of entries to pop from Redis at one time. These are processed in parallel. Only supported on Redis 6.2+ using the `COUNT` argument in [`LPOP`](https://redis.io/commands/lpop/) and [`RPOP`](https://redis.io/commands/rpop/). | Optional | `10` |
+| `ListPopFromBeginning` | Determines whether to pop entries from the beginning using [`LPOP`](https://redis.io/commands/lpop/), or to pop entries from the end using [`RPOP`](https://redis.io/commands/rpop/). | Optional | `true` |
++
+## Annotations
+
+| Parameter | Description | Required | Default |
+||-|:--:|--:|
+| `name` | "entry" | | |
+| `connectionStringSetting` | The name of the setting in the `appsettings` that contains the cache connection string. For example: `<cacheName>.redis.cache.windows.net:6380,password...` | Yes | |
+| `key` | Key to read from. This field can be resolved using `INameResolver`. | Yes | |
+| `pollingIntervalInMs` | How often to poll Redis in milliseconds. | Optional | `1000` |
+| `messagesPerWorker` | How many messages each functions instance should process. Used to determine how many instances the function should scale to. | Optional | `100` |
+| `count` | Number of entries to read from Redis at one time. These are processed in parallel. | Optional | `10` |
+| `listPopFromBeginning` | Determines whether to pop entries from the beginning using [`LPOP`](https://redis.io/commands/lpop/), or to pop entries from the end using [`RPOP`](https://redis.io/commands/rpop/). | Optional | `true` |
++
+## Configuration
+
+The following table explains the binding configuration properties that you set in the function.json file.
+
+| function.json Property | Description | Optional | Default |
+||-|:--:|--:|
+| `type` | Trigger type. For the list trigger, this is `redisListTrigger`. | No | |
+| `listPopFromBeginning` | Determines whether to pop entries from the beginning using [`LPOP`](https://redis.io/commands/lpop/), or to pop entries from the end using [`RPOP`](https://redis.io/commands/rpop/). | Yes | `true` |
+| `connectionStringSetting` | The name of the setting in the `appsettings` that contains the cache connection string. For example: `<cacheName>.redis.cache.windows.net:6380,password...` | No | |
+| `key` | The key to read from. This field can be resolved using `INameResolver`. | No | |
+| `pollingIntervalInMs` | How often to poll Redis in milliseconds. | Yes | `1000` |
+| `messagesPerWorker` | How many messages each functions instance should process. Used to determine how many instances the function should scale to. | Yes | `100` |
+| `count` | Number of entries to read from the cache at one time. These are processed in parallel. | Yes | `10` |
+| `name` | Name of the variable holding the entry from the list. | No | |
+| `direction` | Set to `in`. | No | |
++
+See the Example section for complete examples.
+
+## Usage
+
+The `RedisListTrigger` pops new elements from a list and surfaces those entries to the function. The trigger polls Redis at a configurable fixed interval, and uses [`LPOP`](https://redis.io/commands/lpop/) and [`RPOP`](https://redis.io/commands/rpop/) to pop entries from the lists.
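+
+The following is a minimal client-side sketch (not part of the trigger itself) showing how entries might be pushed onto `listTest` so the samples above fire. It assumes the [StackExchange.Redis](https://www.nuget.org/packages/StackExchange.Redis/) client and a placeholder connection string. With the default `listPopFromBeginning` setting of `true`, the trigger pops from the head of the list, so pushing to the tail with `RPUSH` gives first-in, first-out ordering.
+
+```csharp
+// Illustrative console-app sketch: push an entry that the RedisListTrigger samples can consume.
+using StackExchange.Redis;
+
+ConnectionMultiplexer muxer = await ConnectionMultiplexer.ConnectAsync("<cacheName>.redis.cache.windows.net:6380,password=...");
+IDatabase db = muxer.GetDatabase();
+
+// RPUSH listTest "hello world" -- the trigger LPOPs this entry and passes it to the function as `entry`.
+await db.ListRightPushAsync("listTest", "hello world");
+```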
+
+### Output
++
+> [!NOTE]
+> Once the `RedisListTrigger` becomes generally available, the following information will be moved to a dedicated Output page.
+
+
+| Output Type | Description |
+|||
+| [`StackExchange.Redis.RedisValue`](https://github.com/StackExchange/StackExchange.Redis/blob/main/src/StackExchange.Redis/RedisValue.cs) | `string`, `byte[]`, `ReadOnlyMemory<byte>`: The entry from the list. |
+| `Custom` | The trigger uses Json.NET serialization to map the entry from the list from a `string` to a custom type. |
+++
+> [!NOTE]
+> Once the `RedisListTrigger` becomes generally available, the following information will be moved to a dedicated Output page.
+
+| Output Type | Description |
+|-|--|
+| `byte[]` | The entry from the list. |
+| `string` | The entry from the list. |
+| `Custom` | The trigger uses Json.NET serialization to map the entry from the list from a `string` into a custom type. |
++++++
+## Related content
+
+- [Introduction to Azure Functions](functions-overview.md)
+- [Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started)
+- [Tutorial: Create a write-behind cache by using Azure Functions and Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-write-behind)
+- [Redis lists](https://redis.io/docs/data-types/lists/)
azure-functions Functions Bindings Cache Trigger Redispubsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redispubsub.md
+
+ Title: Using RedisPubSubTrigger Azure Function (preview)
+description: Learn how to use RedisPubSubTrigger Azure Function
+
+zone_pivot_groups: programming-languages-set-functions-lang-workers
++++ Last updated : 08/07/2023+++
+# RedisPubSubTrigger Azure Function (preview)
+
+Redis features [publish/subscribe functionality](https://redis.io/docs/interact/pubsub/) that enables messages to be sent to Redis and broadcast to subscribers.
+
+## Scope of availability for functions triggers
+
+|Tier | Basic | Standard, Premium | Enterprise, Enterprise Flash |
+||::|::|::|
+|Pub/Sub Trigger | Yes | Yes | Yes |
+
+> [!WARNING]
+> This trigger isn't supported on a [consumption plan](/azure/azure-functions/consumption-plan) because Redis PubSub requires clients to always be actively listening to receive all messages. For consumption plans, your function might miss certain messages published to the channel.
+>
+
+## Examples
+++
+### [In-process](#tab/in-process)
+
+This sample listens to the channel `pubsubTest`.
+
+```csharp
+[FunctionName(nameof(PubSubTrigger))]
+public static void PubSubTrigger(
+ [RedisPubSubTrigger("redisConnectionString", "pubsubTest")] string message,
+ ILogger logger)
+{
+ logger.LogInformation(message);
+}
+```
+
+This sample listens to any keyspace notifications for the key `myKey`.
+
+```csharp
+
+[FunctionName(nameof(KeyspaceTrigger))]
+public static void KeyspaceTrigger(
+ [RedisPubSubTrigger("redisConnectionString", "__keyspace@0__:myKey")] string message,
+ ILogger logger)
+{
+ logger.LogInformation(message);
+}
+```
+
+This sample listens to any `keyevent` notifications for the delete command [`DEL`](https://redis.io/commands/del/).
+
+```csharp
+[FunctionName(nameof(KeyeventTrigger))]
+public static void KeyeventTrigger(
+ [RedisPubSubTrigger("redisConnectionString", "__keyevent@0__:del")] string message,
+ ILogger logger)
+{
+ logger.LogInformation(message);
+}
+```
+
+### [Isolated process](#tab/isolated-process)
+
+The isolated process examples aren't available in preview.
+
+```csharp
+//TBD
+```
++++
+This sample listens to the channel `pubsubTest`.
+
+```java
+@FunctionName("PubSubTrigger")
+ public void PubSubTrigger(
+ @RedisPubSubTrigger(
+ name = "message",
+ connectionStringSetting = "redisConnectionString",
+ channel = "pubsubTest")
+ String message,
+ final ExecutionContext context) {
+ context.getLogger().info(message);
+ }
+```
+
+This sample listens to any keyspace notifications for the key `myKey`.
+
+```java
+@FunctionName("KeyspaceTrigger")
+ public void KeyspaceTrigger(
+ @RedisPubSubTrigger(
+ name = "message",
+ connectionStringSetting = "redisConnectionString",
+ channel = "__keyspace@0__:myKey")
+ String message,
+ final ExecutionContext context) {
+ context.getLogger().info(message);
+ }
+```
+
+This sample listens to any `keyevent` notifications for the delete command [`DEL`](https://redis.io/commands/del/).
+
+```java
+ @FunctionName("KeyeventTrigger")
+ public void KeyeventTrigger(
+ @RedisPubSubTrigger(
+ name = "message",
+ connectionStringSetting = "redisConnectionString",
+ channel = "__keyevent@0__:del")
+ String message,
+ final ExecutionContext context) {
+ context.getLogger().info(message);
+ }
+```
++
+### [v3](#tab/javascript-v1)
+
+Each sample uses the same `index.js` file, with binding data in the `function.json` file determining on which channel the trigger occurs.
+
+Here's the `index.js` file:
+
+```javascript
+module.exports = async function (context, message) {
+ context.log(message);
+}
+```
+
+From `function.json`:
+
+Here's binding data to listen to the channel `pubsubTest`.
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisPubSubTrigger",
+ "connectionStringSetting": "redisConnectionString",
+ "channel": "pubsubTest",
+ "name": "message",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "index.js"
+}
+```
+
+Here's binding data to listen to keyspace notifications for the key `myKey`.
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisPubSubTrigger",
+ "connectionStringSetting": "redisConnectionString",
+ "channel": "__keyspace@0__:myKey",
+ "name": "message",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "index.js"
+}
+```
+
+Here's binding data to listen to `keyevent` notifications for the delete command [`DEL`](https://redis.io/commands/del/).
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisPubSubTrigger",
+ "connectionStringSetting": "redisConnectionString",
+ "channel": "__keyevent@0__:del",
+ "name": "message",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "index.js"
+}
+```
+### [v4](#tab/javascript-v2)
+
+The JavaScript v4 programming model example isn't available in preview.
+++
+Each sample uses the same `run.ps1` file, with binding data in the `function.json` file determining on which channel the trigger occurs.
+
+Here's the `run.ps1` file:
+
+```powershell
+param($message, $TriggerMetadata)
+Write-Host $message
+```
+
+From `function.json`:
+
+Here's binding data to listen to the channel `pubsubTest`.
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisPubSubTrigger",
+ "connectionStringSetting": "redisConnectionString",
+ "channel": "pubsubTest",
+ "name": "message",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "run.ps1"
+}
+```
+
+Here's binding data to listen to keyspace notifications for the key `myKey`.
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisPubSubTrigger",
+ "connectionStringSetting": "redisConnectionString",
+ "channel": "__keyspace@0__:myKey",
+ "name": "message",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "run.ps1"
+}
+```
+
+Here's binding data to listen to `keyevent` notifications for the delete command [`DEL`](https://redis.io/commands/del/).
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisPubSubTrigger",
+ "connectionStringSetting": "redisConnectionString",
+ "channel": "__keyevent@0__:del",
+ "name": "message",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "run.ps1"
+}
+```
++
+### [v1](#tab/python-v1)
+
+The Python v1 programming model requires you to define bindings in a separate _function.json_ file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+
+Each sample uses the same `__init__.py` file, with binding data in the `function.json` file determining on which channel the trigger occurs.
+
+Here's the `__init__.py` file:
+
+```python
+import logging
+
+def main(message: str):
+ logging.info(message)
+```
+
+From `function.json`:
+
+Here's binding data to listen to the channel `pubsubTest`.
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisPubSubTrigger",
+ "connectionStringSetting": "redisConnectionString",
+ "channel": "pubsubTest",
+ "name": "message",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "__init__.py"
+}
+```
+
+Here's binding data to listen to keyspace notifications for the key `myKey`.
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisPubSubTrigger",
+ "connectionStringSetting": "redisConnectionString",
+ "channel": "__keyspace@0__:myKey",
+ "name": "message",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "__init__.py"
+}
+```
+
+Here's binding data to listen to `keyevent` notifications for the delete command [`DEL`](https://redis.io/commands/del/).
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisPubSubTrigger",
+ "connectionStringSetting": "redisConnectionString",
+ "channel": "__keyevent@0__:del",
+ "name": "message",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "__init__.py"
+}
+```
+
+### [v2](#tab/python-v2)
+
+The Python v2 programming model example isn't available in preview.
++++
+## Attributes
+
+| Parameter | Description | Required | Default |
+||--|:--:| --:|
+| `ConnectionStringSetting` | Name of the setting in the `appsettings` that holds the cache connection string. For example,`<cacheName>.redis.cache.windows.net:6380,password=...`. | Yes | |
+| `Channel` | The pub sub channel that the trigger should listen to. Supports glob-style channel patterns. This field can be resolved using `INameResolver`. | Yes | |
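+
+Because the channel supports glob-style patterns, a single function can listen to a family of channels. The following in-process C# sketch mirrors the earlier `pubsubTest` example; the `user:*` pattern and the function name are illustrative only:
+
+```csharp
+[FunctionName(nameof(UserChannelsTrigger))]
+public static void UserChannelsTrigger(
+    [RedisPubSubTrigger("redisConnectionString", "user:*")] string message, // matches user:created, user:deleted, ...
+    ILogger logger)
+{
+    logger.LogInformation(message);
+}
+```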
++
+## Annotations
+
+| Parameter | Description | Required | Default |
+||--|:--:|--:|
+| `name` | Name of the variable holding the value returned by the function. | Yes | |
+| `connectionStringSetting` | Name of the setting in the `appsettings` that holds the cache connection string (for example, `<cacheName>.redis.cache.windows.net:6380,password=...`) | Yes | |
+| `channel` | The pub sub channel that the trigger should listen to. Supports glob-style channel patterns. | Yes | |
++
+## Configuration
+
+| function.json property | Description | Required | Default |
+||--| :--:| --:|
+| `type` | Trigger type. For the pub sub trigger, this is `redisPubSubTrigger`. | Yes | |
+| `connectionStringSetting` | Name of the setting in the `appsettings` that holds the cache connection string (for example, `<cacheName>.redis.cache.windows.net:6380,password=...`) | Yes | |
+| `channel` | Name of the pub sub channel that is being subscribed to | Yes | |
+| `name` | Name of the variable holding the value returned by the function. | Yes | |
+| `direction` | Must be set to `in`. | Yes | |
++
+>[!IMPORTANT]
+>The `connectionStringSetting` parameter does not hold the Redis cache connection string itself. Instead, it points to the name of the environment variable that holds the connection string. This makes the application more secure. For more information, see [Redis connection string](functions-bindings-cache.md#redis-connection-string).
+>
+
+## Usage
+
+Redis features [publish/subscribe functionality](https://redis.io/docs/interact/pubsub/) that enables messages to be sent to Redis and broadcast to subscribers. The `RedisPubSubTrigger` enables Azure Functions to be triggered on pub/sub activity. The `RedisPubSubTrigger` subscribes to a specific channel pattern using [`PSUBSCRIBE`](https://redis.io/commands/psubscribe/), and surfaces messages received on those channels to the function.
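+
+For example, a client publishing to a channel that the trigger subscribes to causes the function to run. The following is a minimal client-side sketch (not part of the trigger itself); it assumes the [StackExchange.Redis](https://www.nuget.org/packages/StackExchange.Redis/) client and a placeholder connection string:
+
+```csharp
+// Illustrative console-app sketch: publish a message that a RedisPubSubTrigger function listening to "pubsubTest" receives.
+using StackExchange.Redis;
+
+ConnectionMultiplexer muxer = await ConnectionMultiplexer.ConnectAsync("<cacheName>.redis.cache.windows.net:6380,password=...");
+ISubscriber subscriber = muxer.GetSubscriber();
+
+// PUBLISH pubsubTest "Hello from a publisher"
+await subscriber.PublishAsync("pubsubTest", "Hello from a publisher");
+```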
+
+### Prerequisites and limitations
+
+- The `RedisPubSubTrigger` isn't capable of listening to [keyspace notifications](https://redis.io/docs/manual/keyspace-notifications/) on clustered caches.
+- Basic tier functions don't support triggering on `keyspace` or `keyevent` notifications through the `RedisPubSubTrigger`.
+- The `RedisPubSubTrigger` isn't supported on a [consumption plan](/azure/azure-functions/consumption-plan) because Redis PubSub requires clients to always be actively listening to receive all messages. For consumption plans, your function might miss certain messages published to the channel.
+- Functions with the `RedisPubSubTrigger` shouldn't be scaled out to multiple instances. Each instance listens and processes each pub sub message, resulting in duplicate processing.
+
+> [!WARNING]
+> This trigger isn't supported on a [consumption plan](/azure/azure-functions/consumption-plan) because Redis PubSub requires clients to always be actively listening to receive all messages. For consumption plans, your function might miss certain messages published to the channel.
+>
+
+## Triggering on keyspace notifications
+
+Redis offers a built-in concept called [keyspace notifications](https://redis.io/docs/manual/keyspace-notifications/). When enabled, this feature publishes notifications of a wide range of cache actions to a dedicated pub/sub channel. Supported actions include actions that affect specific keys, called _keyspace notifications_, and specific commands, called _keyevent notifications_. A huge range of Redis actions are supported, such as `SET`, `DEL`, and `EXPIRE`. The full list can be found in the [keyspace notification documentation](https://redis.io/docs/manual/keyspace-notifications/).
+
+The `keyspace` and `keyevent` notifications are published with the following syntax:
+
+```
+PUBLISH __keyspace@0__:<affectedKey> <command>
+PUBLISH __keyevent@0__:<affectedCommand> <key>
+```
+
+Because these events are published on pub/sub channels, the `RedisPubSubTrigger` is able to pick them up. See the [RedisPubSubTrigger](functions-bindings-cache-trigger-redispubsub.md) section for more examples.
+
+> [!IMPORTANT]
+> In Azure Cache for Redis, `keyspace` events must be enabled before notifications are published. For more information, see [Advanced Settings](/azure/azure-cache-for-redis/cache-configure#keyspace-notifications-advanced-settings).
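+
+As an illustration, once keyspace notifications are enabled on the cache, ordinary key operations fire the notifications that the earlier `__keyspace@0__:myKey` and `__keyevent@0__:del` samples listen to. The following client-side sketch assumes the [StackExchange.Redis](https://www.nuget.org/packages/StackExchange.Redis/) client and a placeholder connection string:
+
+```csharp
+// Illustrative console-app sketch: these key operations cause Redis to publish keyspace/keyevent notifications.
+using StackExchange.Redis;
+
+ConnectionMultiplexer muxer = await ConnectionMultiplexer.ConnectAsync("<cacheName>.redis.cache.windows.net:6380,password=...");
+IDatabase db = muxer.GetDatabase();
+
+await db.StringSetAsync("myKey", "some value"); // publishes "set" on __keyspace@0__:myKey
+await db.KeyDeleteAsync("myKey");               // publishes "del" on __keyspace@0__:myKey and "myKey" on __keyevent@0__:del
+```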
+
+## Output
++
+> [!NOTE]
+> Once the `RedisPubSubTrigger` becomes generally available, the following information will be moved to a dedicated Output page.
++
+| Output Type | Description|
+|||
+| [`StackExchange.Redis.ChannelMessage`](https://github.com/StackExchange/StackExchange.Redis/blob/main/src/StackExchange.Redis/ChannelMessageQueue.cs)| The value returned by `StackExchange.Redis`. |
+| [`StackExchange.Redis.RedisValue`](https://github.com/StackExchange/StackExchange.Redis/blob/main/src/StackExchange.Redis/RedisValue.cs)| `string`, `byte[]`, `ReadOnlyMemory<byte>`: The message from the channel. |
+| `Custom`| The trigger uses Json.NET serialization to map the message from the channel from a `string` into a custom type. |
+++
+> [!NOTE]
+> Once the `RedisPubSubTrigger` becomes generally available, the following information will be moved to a dedicated Output page.
+
+| Output Type | Description |
+|-|--|
+| `byte[]` | The message from the channel. |
+| `string` | The message from the channel. |
+| `Custom` | The trigger uses Json.NET serialization to map the message from the channel from a `string` into a custom type. |
++++++
+## Related content
+
+- [Introduction to Azure Functions](functions-overview.md)
+- [Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started)
+- [Tutorial: Create a write-behind cache by using Azure Functions and Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-write-behind)
+- [Redis pub sub messages](https://redis.io/docs/manual/pubsub/)
azure-functions Functions Bindings Cache Trigger Redisstream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redisstream.md
+
+ Title: Using RedisStreamTrigger Azure Function (preview)
+description: Learn how to use RedisStreamTrigger Azure Function
+
+zone_pivot_groups: programming-languages-set-functions-lang-workers
++++ Last updated : 08/07/2023+++
+# RedisStreamTrigger Azure Function (preview)
+
+The `RedisStreamTrigger` reads new entries from a stream and surfaces those elements to the function.
+
+| Tier | Basic | Standard, Premium | Enterprise, Enterprise Flash |
+||:--:|:--:|:-:|
+| Streams | Yes | Yes | Yes |
+
+> [!IMPORTANT]
+> Redis triggers are not currently supported on Azure Functions Consumption plan.
+>
+
+## Example
+++
+### [In-process](#tab/in-process)
+
+```csharp
+
+[FunctionName(nameof(StreamsTrigger))]
+public static void StreamsTrigger(
+ [RedisStreamTrigger("Redis", "streamTest")] string entry,
+ ILogger logger)
+{
+    logger.LogInformation($"The entry pushed to the stream streamTest: '{entry}'");
+}
+```
+
+### [Isolated process](#tab/isolated-process)
+
+The isolated process examples aren't available in preview.
+
+```csharp
+//TBD
+```
++++
+```java
+
+ @FunctionName("StreamTrigger")
+ public void StreamTrigger(
+ @RedisStreamTrigger(
+ name = "entry",
+ connectionStringSetting = "redisLocalhost",
+ key = "streamTest",
+ pollingIntervalInMs = 100,
+ messagesPerWorker = 10,
+ count = 1,
+ deleteAfterProcess = true)
+ String entry,
+ final ExecutionContext context) {
+ context.getLogger().info(entry);
+ }
+
+```
++
+### [v3](#tab/javascript-v1)
+
+Each sample uses the same `index.js` file, with binding data in the `function.json` file.
+
+Here's the `index.js` file:
+
+```javascript
+module.exports = async function (context, entry) {
+ context.log(entry);
+}
+```
+
+From `function.json`, here's the binding data:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisStreamTrigger",
+ "deleteAfterProcess": false,
+ "connectionStringSetting": "redisLocalhost",
+ "key": "streamTest",
+ "pollingIntervalInMs": 1000,
+ "messagesPerWorker": 100,
+ "count": 10,
+ "name": "entry",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "index.js"
+}
+```
+
+### [v4](#tab/javascript-v2)
+
+The JavaScript v4 programming model example isn't available in preview.
++++
+Each sample uses the same `run.ps1` file, with binding data in the `function.json` file.
+
+Here's the `run.ps1` file:
+
+```powershell
+param($entry, $TriggerMetadata)
+Write-Host ($entry | ConvertTo-Json)
+```
+
+From `function.json`, here's the binding data:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisStreamTrigger",
+ "deleteAfterProcess": false,
+ "connectionStringSetting": "redisLocalhost",
+ "key": "streamTest",
+ "pollingIntervalInMs": 1000,
+ "messagesPerWorker": 100,
+ "count": 10,
+ "name": "entry",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "run.ps1"
+}
+```
++
+### [v1](#tab/python-v1)
+
+The Python v1 programming model requires you to define bindings in a separate _function.json_ file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+
+Each sample uses the same `__init__.py` file, with binding data in the `function.json` file.
+
+Here's the `__init__.py` file:
+
+```python
+import logging
+
+def main(entry: str):
+ logging.info(entry)
+```
+
+From `function.json`, here's the binding data:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisStreamTrigger",
+ "deleteAfterProcess": false,
+ "connectionStringSetting": "redisLocalhost",
+ "key": "streamTest",
+ "pollingIntervalInMs": 1000,
+ "messagesPerWorker": 100,
+ "count": 10,
+ "name": "entry",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "__init__.py"
+}
+```
+
+### [v2](#tab/python-v2)
+
+The Python v2 programming model example isn't available in preview.
++++
+## Attributes
+
+| Parameters | Description | Required | Default |
+||-|:--:|--:|
+| `ConnectionStringSetting` | The name of the setting in the `appsettings` that contains the cache connection string. For example: `<cacheName>.redis.cache.windows.net:6380,password=...` | Yes | |
+| `Key` | Key to read from. | Yes | |
+| `PollingIntervalInMs` | How often to poll the Redis server in milliseconds. | Optional | `1000` |
+| `MessagesPerWorker` | The number of messages each functions worker should process. Used to determine how many workers the function should scale to. | Optional | `100` |
+| `Count` | Number of elements to pull from Redis at one time. | Optional | `10` |
+| `DeleteAfterProcess` | Indicates if the function deletes the stream entries after processing. | Optional | `false` |
++
+## Annotations
+
+| Parameter | Description | Required | Default |
+||-|:--:|--:|
+| `name` | Name of the variable holding the entry from the stream. | Yes | |
+| `connectionStringSetting` | The name of the setting in the `appsettings` that contains the cache connection string. For example: `<cacheName>.redis.cache.windows.net:6380,password=...` | Yes | |
+| `key` | Key to read from. | Yes | |
+| `pollingIntervalInMs` | How frequently to poll Redis, in milliseconds. | Optional | `1000` |
+| `messagesPerWorker` | The number of messages each functions worker should process. It's used to determine how many workers the function should scale to | Optional | `100` |
+| `count` | Number of entries to read from Redis at one time. These are processed in parallel. | Optional | `10` |
+| `deleteAfterProcess` | Whether to delete the stream entries after the function has run. | Optional | `false` |
++
+## Configuration
+
+The following table explains the binding configuration properties that you set in the function.json file.
+
+| function.json Properties | Description | Required | Default |
+||-|:--:|--:|
+| `type` | Trigger type. For the stream trigger, this is `redisStreamTrigger`. | Yes | |
+| `deleteAfterProcess` | Whether to delete the stream entries after the function has run. | Optional | `false` |
+| `connectionStringSetting` | The name of the setting in the `appsettings` that contains the cache connection string. For example: `<cacheName>.redis.cache.windows.net:6380,password=...` | Yes | |
+| `key` | The key to read from. | Yes | |
+| `pollingIntervalInMs` | How often to poll Redis in milliseconds. | Optional | `1000` |
+| `messagesPerWorker` | The number of messages each functions worker should process. Used to determine how many workers the function should scale to. | Optional | `100` |
+| `count` | Number of entries to read from Redis at one time. These are processed in parallel. | Optional | `10` |
+| `name` | Name of the variable holding the entry from the stream. | Yes | |
+| `direction` | Must be set to `in`. | Yes | |
++
+See the Example section for complete examples.
+
+## Usage
+
+The `RedisStreamTrigger` Azure Function reads new entries from a stream and surfaces those entries to the function.
+
+The trigger polls Redis at a configurable fixed interval, and uses [`XREADGROUP`](https://redis.io/commands/xreadgroup/) to read elements from the stream.
+
+The consumer group for all function instances is the `ID` of the function. For example, `Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisSamples.StreamTrigger` for the `StreamTrigger` sample. Each function creates a new random GUID to use as its consumer name within the group to ensure that scaled out instances of the function don't read the same messages from the stream.
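+
+To make the polling behavior concrete, the following sketch uses the third-party `redis` Python package (not the Functions extension itself) to approximate what the trigger does on each polling interval. The stream key, count, and consumer group name are taken from the samples above; the consumer group creation step is an illustrative assumption about setup, not something your function code needs to do:
+
+```python
+import uuid
+import redis
+
+r = redis.Redis(host="localhost", port=6379, decode_responses=True)
+
+# The consumer group name corresponds to the function ID (illustrative value).
+group = "Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisSamples.StreamTrigger"
+# Each function instance uses a random GUID as its consumer name within the group.
+consumer = str(uuid.uuid4())
+
+# Create the consumer group if it doesn't already exist.
+try:
+    r.xgroup_create("streamTest", group, id="0", mkstream=True)
+except redis.ResponseError:
+    pass  # group already exists
+
+# Read up to `count` new entries for this consumer, similar to one trigger poll.
+for stream, messages in r.xreadgroup(group, consumer, {"streamTest": ">"}, count=10):
+    for entry_id, values in messages:
+        print(entry_id, values)
+```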
+
+### Output
++
+> [!NOTE]
+> Once the `RedisStreamTrigger` becomes generally available, the following information will be moved to a dedicated Output page.
+
+| Output Type | Description |
+|-|--|
+| [`StackExchange.Redis.ChannelMessage`](https://github.com/StackExchange/StackExchange.Redis/blob/main/src/StackExchange.Redis/ChannelMessageQueue.cs) | The value returned by `StackExchange.Redis`. |
+| `StackExchange.Redis.NameValueEntry[]`, `Dictionary<string, string>` | The values contained within the entry. |
+| `string, byte[], ReadOnlyMemory<byte>` | The stream entry serialized as JSON (UTF-8 encoded for byte types) in the following format: `{"Id":"1658354934941-0","Values":{"field1":"value1","field2":"value2","field3":"value3"}}` |
+| `Custom` | The trigger uses Json.NET serialization to map the message from the channel from a `string` into a custom type. |
+++
+> [!NOTE]
+> Once the `RedisStreamTrigger` becomes generally available, the following information will be moved to a dedicated Output page.
+
+| Output Type | Description |
+|-|--|
+| `byte[]` | The message from the channel. |
+| `string` | The message from the channel. |
+| `Custom` | The trigger uses Json.NET serialization to map the message from the channel from a `string` into a custom type. |
+++++
+## Related content
+
+- [Introduction to Azure Functions](functions-overview.md)
+- [Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started)
+- [Using Azure Functions and Azure Cache for Redis to create a write-behind cache](/azure/azure-cache-for-redis/cache-tutorial-write-behind)
+- [Redis streams](https://redis.io/docs/data-types/streams/)
azure-functions Functions Bindings Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache.md
+
+ Title: Using Azure Functions for Azure Cache for Redis (preview)
+description: Learn how to use Azure Functions with Azure Cache for Redis.
+
+zone_pivot_groups: programming-languages-set-functions-lang-workers
++++ Last updated : 07/26/2023+++
+# Overview of Azure Functions for Azure Cache for Redis (preview)
+
+This article describes how to use Azure Cache for Redis with Azure Functions to create optimized serverless and event-driven architectures.
+
+Azure Functions provides an event-driven programming model where triggers and bindings are key features. With Azure Functions, you can easily build event-driven serverless applications. Azure Cache for Redis provides a set of building blocks and best practices for building distributed applications, including microservices, state management, pub/sub messaging, and more.
+
+Azure Cache for Redis can be used as a trigger for Azure Functions, allowing you to initiate a serverless workflow. This functionality can be highly useful in data architectures like a write-behind cache, or in other event-based architectures.
+
+You can integrate Azure Cache for Redis and Azure Functions to build functions that react to events from Azure Cache for Redis or external systems.
+
+| Action | Direction | Type | Preview |
+||--|||
+| Triggers on Redis pub/sub messages | N/A | [RedisPubSubTrigger](functions-bindings-cache-trigger-redispubsub.md) | Yes|
+| Triggers on Redis lists | N/A | [RedisListTrigger](functions-bindings-cache-trigger-redislist.md) | Yes |
+| Triggers on Redis streams | N/A | [RedisStreamTrigger](functions-bindings-cache-trigger-redisstream.md) | Yes |
+
+## Scope of availability for functions triggers
+
+|Tier | Basic | Standard, Premium | Enterprise, Enterprise Flash |
+||::|::|::|
+|Pub/Sub | Yes | Yes | Yes |
+|Lists | Yes | Yes | Yes |
+|Streams | Yes | Yes | Yes |
+
+> [!IMPORTANT]
+> Redis triggers aren't currently supported for functions running on the Consumption plan.
+>
++
+## Install extension
+
+### [In-process](#tab/in-process)
+
+Functions run in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+
+Add the extension to your project by installing [this NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Redis).
+
+```bash
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease
+```
+
+### [Isolated process](#tab/isolated-process)
+
+Functions run in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
+
+Add the extension to your project by installing [this NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Redis).
+
+```bash
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Redis --prerelease
+```
+++
+## Install bundle
++
+1. Create a Java function project. You could use Maven:
+ `mvn archetype:generate -DarchetypeGroupId=com.microsoft.azure -DarchetypeArtifactId=azure-functions-archetype -DjavaVersion=8`
+
+1. Add the extension bundle by adding or replacing the following code in your _host.json_ file:
+
+ ```json
+ {
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
+ "version": "[4.11.*, 5.0.0)"
+ }
+ }
+ ```
+
+ >[!WARNING]
+ >The Redis extension is currently only available in a preview bundle release.
+ >
+
+1. Add the Java library for Redis bindings to the `pom.xml` file:
+
+ ```xml
+ <dependency>
+ <groupId>com.microsoft.azure.functions</groupId>
+ <artifactId>azure-functions-java-library-redis</artifactId>
+ <version>${azure.functions.java.library.redis.version}</version>
+ </dependency>
+ ```
++
+1. Add the extension bundle by adding or replacing the following code in your _host.json_ file:
+
+ ```json
+ {
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
+ "version": "[4.11.*, 5.0.0)"
+ }
+ }
+ ```
+
+ >[!WARNING]
+ >The Redis extension is currently only available in a preview bundle release.
+ >
++
+## Redis connection string
+
+Azure Cache for Redis triggers and bindings have a required property for the cache connection string. The connection string can be found on the [**Access keys**](/azure/azure-cache-for-redis/cache-configure#access-keys) menu in the Azure Cache for Redis portal. The Redis trigger or binding looks for an environment variable holding the connection string with the name passed to the `ConnectionStringSetting` parameter. In local development, the `ConnectionStringSetting` can be defined using the [local.settings.json](/azure/azure-functions/functions-develop-local#local-settings-file) file. When deployed to Azure, [application settings](/azure/azure-functions/functions-how-to-use-azure-function-app-settings) can be used.
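+
+For example, a `local.settings.json` file that defines the `redisLocalhost` setting name used in the samples might look like the following sketch; the connection string value is a placeholder that you replace with your cache's connection string:
+
+```json
+{
+  "IsEncrypted": false,
+  "Values": {
+    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+    "redisLocalhost": "<cacheName>.redis.cache.windows.net:6380,password=..."
+  }
+}
+```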
+
+## Related content
+
+- [Introduction to Azure Functions](/azure/azure-functions/functions-overview)
+- [Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started)
+- [Tutorial: Create a write-behind cache by using Azure Functions and Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-write-behind)
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-trigger.md
The following example shows an HTTP trigger that returns a "hello world" respons
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Http/HttpFunction.cs" id="docsnippet_http_trigger":::
-The following example shows an HTTP trigger that returns a "hello, world" response as an [IActionResult], using [ASP.NET Core integration in .NET Isolated](./dotnet-isolated-process-guide.md#aspnet-core-integration-preview):
+The following example shows an HTTP trigger that returns a "hello, world" response as an [IActionResult], using [ASP.NET Core integration in .NET Isolated]:
```csharp [Function("HttpFunction")]
The [HttpTrigger](/java/api/com.microsoft.azure.functions.annotation.httptrigger
+ Any plain-old Java object (POJO) type. ::: zone-end + ### Payload
+# [In-process](#tab/in-process)
+ The trigger input type is declared as either `HttpRequest` or a custom type. If you choose `HttpRequest`, you get full access to the request object. For a custom type, the runtime tries to parse the JSON request body to set the object properties.
+# [Isolated process](#tab/isolated-process)
+
+The trigger input type is declared as one of the following types:
+
+| Type | Description |
+|-|-|
+| [HttpRequestData] | A projection of the full request object. |
+| [HttpRequest] | _Use of this type requires that the app is configured with [ASP.NET Core integration in .NET Isolated]._<br/>This gives you full access to the request object and overall HttpContext. |
+| A custom type | When the body of the request is JSON, the runtime will try to parse it to set the object properties. |
+
+When using `HttpRequestData` or `HttpRequest`, custom types can also be bound to additional parameters using `Microsoft.Azure.Functions.Worker.Http.FromBodyAttribute`. Use of this attribute requires [`Microsoft.Azure.Functions.Worker.Extensions.Http` version 3.1.0 or later](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Http). Note that this is a different type from the similar attribute in `Microsoft.AspNetCore.Mvc`, and when using ASP.NET Core integration, you need a fully qualified reference or `using` statement. The following example shows how to use the attribute to get just the body contents while still having access to the full `HttpRequest`, using the ASP.NET Core integration:
+
+```csharp
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.Functions.Worker;
+using FromBodyAttribute = Microsoft.Azure.Functions.Worker.Http.FromBodyAttribute;
+
+namespace AspNetIntegration
+{
+    public class BodyBindingHttpTrigger
+    {
+        [Function(nameof(BodyBindingHttpTrigger))]
+        public IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
+            [FromBody] Person person)
+        {
+            return new OkObjectResult(person);
+        }
+    }
+
+    public record Person(string Name, int Age);
+}
+```
++++ ### Customize the HTTP endpoint By default when you create a function for an HTTP trigger, the function is addressable with a route of the form:
If a function that uses the HTTP trigger doesn't complete within 230 seconds, th
- [Return an HTTP response from a function](./functions-bindings-http-webhook-output.md) [ClaimsPrincipal]: /dotnet/api/system.security.claims.claimsprincipal
+[ASP.NET Core integration in .NET Isolated]: ./dotnet-isolated-process-guide.md#aspnet-core-integration-preview
+[HttpRequestData]: /dotnet/api/microsoft.azure.functions.worker.http.httprequestdata
+[HttpResponseData]: /dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata
+[HttpRequest]: /dotnet/api/microsoft.aspnetcore.http.httprequest
+[HttpResponse]: /dotnet/api/microsoft.aspnetcore.http.httpresponse
+[IActionResult]: /dotnet/api/microsoft.aspnetcore.mvc.iactionresult
azure-functions Functions Bindings Signalr Service Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-input.md
zone_pivot_groups: programming-languages-set-functions-lang-workers
# SignalR Service input binding for Azure Functions
-Before a client can connect to Azure SignalR Service, it must retrieve the service endpoint URL and a valid access token. The *SignalRConnectionInfo* input binding produces the SignalR Service endpoint URL and a valid token that are used to connect to the service. Because the token is time-limited and can be used to authenticate a specific user to a connection, you should not cache the token or share it between clients. An HTTP trigger using this binding can be used by clients to retrieve the connection information.
-
-For more information on how this binding is used to create a "negotiate" function that can be consumed by a SignalR client SDK, see the [Azure Functions development and configuration article](../azure-signalr/signalr-concept-serverless-development-config.md) in the SignalR Service concepts documentation.
+Before a client can connect to Azure SignalR Service, it must retrieve the service endpoint URL and a valid access token. The *SignalRConnectionInfo* input binding produces the SignalR Service endpoint URL and a valid token that are used to connect to the service. The token is time-limited and can be used to authenticate a specific user to a connection. Therefore, you shouldn't cache the token or share it between clients. You typically use *SignalRConnectionInfo* with an HTTP trigger so that clients can retrieve the connection information.
+For more information on how to use this binding to create a "negotiate" function that is compatible with a SignalR client SDK, see [Azure Functions development and configuration with Azure SignalR Service](../azure-signalr/signalr-concept-serverless-development-config.md).
For information on setup and configuration details, see the [overview](functions-bindings-signalr-service.md). ## Example
public static SignalRConnectionInfo Negotiate(
# [Isolated process](#tab/isolated-process)
-The following example shows a SignalR trigger that reads a message string from one hub using a SignalR trigger and writes it to a second hub using an output binding. The data required to connect to the output binding is obtained as a `MyConnectionInfo` object from an input binding defined using a `SignalRConnectionInfo` attribute.
+The following example shows a [C# function](dotnet-isolated-process-guide.md) that acquires SignalR connection information using the input binding and returns it over HTTP.
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/SignalR/SignalRNegotiationFunctions.cs" id="snippet_negotiate":::
public SignalRConnectionInfo negotiate(
### Authenticated tokens
-When the function is triggered by an authenticated client, you can add a user ID claim to the generated token. You can easily add authentication to a function app using [App Service Authentication](../app-service/overview-authentication-authorization.md).
+When an authenticated client triggers the function, you can add a user ID claim to the generated token. You can easily add authentication to a function app using [App Service Authentication](../app-service/overview-authentication-authorization.md).
App Service authentication sets HTTP headers named `x-ms-client-principal-id` and `x-ms-client-principal-name` that contain the authenticated user's client principal ID and name, respectively.
App Service authentication sets HTTP headers named `x-ms-client-principal-id` an
# [In-process](#tab/in-process)
-You can set the `UserId` property of the binding to the value from either header using a [binding expression](./functions-bindings-expressions-patterns.md): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
+You can set the `UserId` property of the binding to the value from either header using a [binding expression](#binding-expressions-for-http-trigger): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
```cs [FunctionName("negotiate")]
public static string Negotiate([HttpTrigger(AuthorizationLevel.Anonymous)] HttpR
# [C# Script](#tab/csharp-script)
-You can set the `userId` property of the binding to the value from either header using a [binding expression](./functions-bindings-expressions-patterns.md): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
+You can set the `userId` property of the binding to the value from either header using a [binding expression](#binding-expressions-for-http-trigger): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
Example function.json:
public SignalRConnectionInfo negotiate(
::: zone pivot="programming-language-javascript,programming-language-python,programming-language-powershell"
-You can set the `userId` property of the binding to the value from either header using a [binding expression](./functions-bindings-expressions-patterns.md): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
+You can set the `userId` property of the binding to the value from either header using a [binding expression](#binding-expressions-for-http-trigger): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
Here's binding data in the *function.json* file:
def main(req: func.HttpRequest, connectionInfo: str) -> func.HttpResponse:
::: zone-end ::: zone pivot="programming-language-java"
-You can set the `userId` property of the binding to the value from either header using a [binding expression](./functions-bindings-expressions-patterns.md): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
+You can set the `userId` property of the binding to the value from either header using a [binding expression](#binding-expressions-for-http-trigger): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
```java @FunctionName("negotiate")
The following table explains the properties of the `SignalRConnectionInfo` attri
| Attribute property |Description| ||-|
-**HubName**| This value must be set to the name of the SignalR hub for which the connection information is generated. |
-|**UserId**| Optional: The value of the user identifier claim to be set in the access key token. |
+|**HubName**| Required. The hub name. |
|**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
+|**UserId**| Optional. The user identifier of a SignalR connection. You can use a [binding expression](#binding-expressions-for-http-trigger) to bind the value to an HTTP request header or query. |
+|**IdToken**| Optional. A JWT token whose claims will be added to the user claims. It should be used together with **ClaimTypeList**. You can use a [binding expression](#binding-expressions-for-http-trigger) to bind the value to an HTTP request header or query. |
+|**ClaimTypeList**| Optional. A list of claim types that filter the claims in **IdToken**. |
# [Isolated process](#tab/isolated-process)
The following table explains the properties of the `SignalRConnectionInfoInput`
| Attribute property |Description| ||-|
-**HubName**| This value must be set to the name of the SignalR hub for which the connection information is generated. |
-|**UserId**| Optional: The value of the user identifier claim to be set in the access key token. |
+|**HubName**| Required. The hub name. |
|**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
+|**UserId**| Optional. The user identifier of a SignalR connection. You can use a [binding expression](#binding-expressions-for-http-trigger) to bind the value to an HTTP request header or query. |
+|**IdToken**| Optional. A JWT token whose claims will be added to the user claims. It should be used together with **ClaimTypeList**. You can use a [binding expression](#binding-expressions-for-http-trigger) to bind the value to an HTTP request header or query. |
+|**ClaimTypeList**| Optional. A list of claim types that filter the claims in **IdToken**. |
# [C# Script](#tab/csharp-script)
The following table explains the binding configuration properties that you set i
|**type**| Must be set to `signalRConnectionInfo`.| |**direction**| Must be set to `in`.| |**name**| Variable name used in function code for connection info object. |
-|**hubName**| This value must be set to the name of the SignalR hub for which the connection information is generated.|
-|**userId**| Optional: The value of the user identifier claim to be set in the access key token. |
+|**hubName**| Required. The hub name. |
|**connectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
+|**userId**| Optional. The user identifier of a SignalR connection. You can use a [binding expression](#binding-expressions-for-http-trigger) to bind the value to an HTTP request header or query. |
+|**idToken**| Optional. A JWT token whose claims will be added to the user claims. It should be used together with **claimTypeList**. You can use a [binding expression](#binding-expressions-for-http-trigger) to bind the value to an HTTP request header or query. |
+|**claimTypeList**| Optional. A list of claim types that filter the claims in **idToken**. |
::: zone-end ::: zone pivot="programming-language-java" + ## Annotations The following table explains the supported settings for the `SignalRConnectionInfoInput` annotation.
The following table explains the supported settings for the `SignalRConnectionIn
|Setting | Description| ||--| |**name**| Variable name used in function code for connection info object. |
-|**hubName**| This value must be set to the name of the SignalR hub for which the connection information is generated.|
-|**userId**| Optional: The value of the user identifier claim to be set in the access key token. |
+|**hubName**| Required. The hub name. |
|**connectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
+|**userId**| Optional. The user identifier of a SignalR connection. You can use a [binding expression](#binding-expressions-for-http-trigger) to bind the value to an HTTP request header or query. |
+|**idToken**| Optional. A JWT token whose claims will be added to the user claims. It should be used together with **claimTypeList**. You can use a [binding expression](#binding-expressions-for-http-trigger) to bind the value to an HTTP request header or query. |
+|**claimTypeList**| Optional. A list of claim types that filter the claims in **idToken**. |
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python"
The following table explains the binding configuration properties that you set i
||--| |**type**| Must be set to `signalRConnectionInfo`.| |**direction**| Must be set to `in`.|
-|**name**| Variable name used in function code for connection info object. |
-|**hubName**| This value must be set to the name of the SignalR hub for which the connection information is generated.|
-|**userId**| Optional: The value of the user identifier claim to be set in the access key token. |
+|**hubName**| Required. The hub name. |
|**connectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
+|**userId**| Optional. The user identifier of a SignalR connection. You can use a [binding expression](#binding-expressions-for-http-trigger) to bind the value to an HTTP request header or query. |
+|**idToken**| Optional. A JWT token whose claims will be added to the user claims. It should be used together with **claimTypeList**. You can use a [binding expression](#binding-expressions-for-http-trigger) to bind the value to an HTTP request header or query. |
+|**claimTypeList**| Optional. A list of claim types that filter the claims in **idToken**. |
::: zone-end
+### Binding expressions for HTTP trigger
+<a name="binding-expressions-for-http-trigger"></a>
+The values of some SignalR input binding attributes often come from HTTP requests. This section shows how to bind values from HTTP requests to SignalR input binding attributes by using [binding expressions](./functions-bindings-expressions-patterns.md#trigger-metadata).
+
+| HTTP metadata type | Binding expression format | Description | Example |
+||--||--|
+| HTTP request query | `{query.QUERY_PARAMETER_NAME}` | Binds the value of the corresponding query parameter to an attribute | `{query.userName}` |
+| HTTP request header | `{headers.HEADER_NAME}` | Binds the value of a header to an attribute | `{headers.token}` |
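+
+For example, a *function.json* binding that takes the user ID from a query parameter might look like the following sketch (the hub name `chat` and the query parameter name `userName` are illustrative assumptions):
+
+```json
+{
+  "type": "signalRConnectionInfo",
+  "direction": "in",
+  "name": "connectionInfo",
+  "hubName": "chat",
+  "userId": "{query.userName}",
+  "connectionStringSetting": "AzureSignalRConnectionString"
+}
+```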
+ ## Next steps - [Handle messages from SignalR Service (Trigger binding)](./functions-bindings-signalr-service-trigger.md)
azure-functions Functions Deploy Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deploy-container-apps.md
az functionapp create --name <APP_NAME> --storage-account <STORAGE_NAME> --envir
::: zone-end ::: zone pivot="programming-language-typescript" ```console
-az functionapp create --name <APP_NAME> --storage-account <STORAGE_NAME> --environment MyContainerappEnvironment --resource-group AzureFunctionsContainers-rg --functions-version 4 --runtime node --image <LOGIN_SERVER>/azurefunctionsimage:v1.0.0 --registry-username <REGISTRY_NAME> --registry-password <ADMIN_PASSWORD>
+az functionapp create --name <APP_NAME> --storage-account <STORAGE_NAME> --environment MyContainerappEnvironment --resource-group AzureFunctionsContainers-rg --functions-version 4 --runtime node --image <LOGIN_SERVER>/azurefunctionsimage:v1.0.0 --registry-server <LOGIN_SERVER> --registry-username <REGISTRY_NAME> --registry-password <ADMIN_PASSWORD>
``` ::: zone-end
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/language-support-policy.md
There are few exceptions to the retirement policy outlined above. Here is a list
|Language Versions |EOL Date |Retirement Date| |--|--|-|
+|Python 3.7|27 June 2023|30 September 2023|
|Node 14|30 April 2023|30 June 2024| |Node 16|11 September 2023|30 June 2024| + ## Language version support timeline To learn more about specific language version support policy timeline, visit the following external resources:
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
The [Get Map Tile V2 API] allows you to request past, current, and future radar
![Example of map with real-time weather radar tiles](media/about-azure-maps/intro_weather.png)
-### Maps Creator service
-
-Maps Creator service is a suite of web services that developers can use to create applications with map features based on indoor map data.
-
-Maps Creator provides the following
-
-* [Dataset service]. Use the Dataset service to create a dataset from a converted drawing package data. For information about drawing package requirements, see drawing package requirements.
-
-* [Conversion service]. Use the Conversion service to convert a DWG design file into drawing package data for indoor maps.
-
-* [Tileset service]. Use the Tileset service to create a vector-based representation of a dataset. Applications can use a tileset to present a visual tile-based view of the dataset.
-
-* [Custom styling service] (preview). Use the [style service] or [visual style editor] to customize the visual elements of an indoor map.
-
-* [Feature State service]. Use the Feature State service to support dynamic map styling. Dynamic map styling allows applications to reflect real-time events on spaces provided by IoT systems.
-
-* [WFS service]. Use the WFS service to query your indoor map data. The WFS service follows the [Open Geospatial Consortium API] standards for querying a single dataset.
-
-* [Wayfinding service] (preview). Use the [wayfinding API] to generate a path between two points within a facility. Use the [routeset API] to create the data that the wayfinding service needs to generate paths.
- ## Programming model Azure Maps is built for mobility and can help you develop cross-platform applications. It uses a programming model that's language agnostic and supports JSON output through [REST APIs].
Verify that the location of your current IP address is in a supported country/re
## Next steps
+Learn about indoor maps:
+
+[What is Azure Maps Creator?]
+ Try a sample app that showcases Azure Maps: [Quickstart: Create a web app]
Stay up to date on Azure Maps:
[Azure Maps blog] <! learn.microsoft.com links >
-[Conversion service]: creator-indoor-maps.md#convert-a-drawing-package
-[Custom styling service]: creator-indoor-maps.md#custom-styling-preview
-[Dataset service]: creator-indoor-maps.md#datasets
-[Feature State service]: creator-indoor-maps.md#feature-statesets
[Get started with Azure Maps Power BI visual]: power-bi-visual-get-started.md [How to use the Get Map Attribution API]: how-to-show-attribution.md [Quickstart: Create a web app]: quick-demo-map-app.md
-[Tileset service]: creator-indoor-maps.md#tilesets
-[Wayfinding service]: creator-indoor-maps.md#wayfinding-preview
-[WFS service]: creator-indoor-maps.md#web-feature-service-api
+[What is Azure Maps Creator?]: about-creator.md
<! REST API Links > [Data service]: /rest/api/maps/data-v2 [Geolocation service]: /rest/api/maps/geolocation
Stay up to date on Azure Maps:
[Render V2 service]: /rest/api/maps/render-v2 [REST APIs]: /rest/api/maps/ [Route service]: /rest/api/maps/route
-[routeset API]: /rest/api/maps/v20220901preview/routeset
[Search service]: /rest/api/maps/search [Spatial service]: /rest/api/maps/spatial
-[style service]: /rest/api/maps/v20220901preview/style
[TilesetID]: /rest/api/maps/render-v2/get-map-tile#tilesetid [Time zone service]: /rest/api/maps/timezone [Traffic service]: /rest/api/maps/traffic
-[wayfinding API]: /rest/api/maps/v20220901preview/wayfinding
<! JavaScript API Links > [JavaScript map control]: /javascript/api/azure-maps-control <! External Links >
Stay up to date on Azure Maps:
[Azure portal]: https://portal.azure.com [IANA ID]: https://www.iana.org/ [Microsoft Trust Center]: https://www.microsoft.com/trust-center/privacy
-[Open Geospatial Consortium API]: https://docs.opengeospatial.org/is/17-069r3/17-069r3.html
[reverse geocode]: https://en.wikipedia.org/wiki/Reverse_geocoding [Subprocessor List]: https://servicetrust.microsoft.com/DocumentPage/aead9e68-1190-4d90-ad93-36418de5c594
-[visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor
azure-maps About Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-creator.md
+
+ Title: Overview for Microsoft Azure Maps Creator
+
+description: Learn about services and capabilities in Microsoft Azure Maps Creator and how to use them in your applications.
++ Last updated : 08/03/2023+++++
+# What is Azure Maps Creator?
+
+Azure Maps Creator is a first-party geospatial platform that enables you to create and render maps, based on indoor map data, on top of the outdoor map in your web and mobile applications.
+
+## Services in Azure Maps Creator
+
+Creator is a platform for building indoor mapping solutions for all your needs. As an extension of Azure Maps, Creator allows blending of both indoor and outdoor maps for a seamless visual experience. Creator supports generating indoor maps from CAD drawings (DWG) or GeoJSON and enables custom styling of the map. You can also provide directions within your indoor map using wayfinding.
++
+### Conversion
+
+An [onboarding tool] is provided to prepare your facility's DWGs by identifying the data to use and positioning your facility on the map. The Conversion service then converts the geometry and data from your DWG files into a digital indoor map.
+
+The first step in creating your indoor map is to upload a drawing package into your Azure Maps account. A drawing package contains one or more CAD (computer-aided design) drawings of your facility along with a manifest describing the drawings. The drawings define the elements of the facility while the manifest tells the Azure Maps [Conversion service] how to read the facility drawing files and metadata. For more
+information about manifest properties, see [Manifest file requirements] and for more information on creating and uploading a drawing package, see the [Drawing package guide].
+
+### Dataset
+
+A collection of the indoor map [features] of a facility. Update your facility dataset through a visual editor and query for features in real time using the [Features API]. For more information, see [Work with datasets using the QGIS plugin].
+
+### Rendering
+
+[Tilesets], created from your data, are used to render maps on mobile devices or in the browser.
+
+### Styling
+
+[Custom styling] enables you to customize your indoor maps to meet your needs. You can customize your facility's look and feel to reflect your brand colors or emphasize different rooms or specific areas of interest. Everything is configurable, from the color of a feature or the icon that renders to the zoom level at which a feature appears, resizes, or disappears. You can define how your data should be styled in the [visual style editor]. For more information, see [Create custom styles for indoor maps].
+
+### Wayfinding
+
+A [Routeset] is automatically created for your facility. [Wayfinding] uses that routeset to provide your customers with the shortest path between two points using the [Wayfinding service].
+
+### SDK
+
+Use the Azure Maps Web SDK to develop applications that provide a customized indoor map experience. For more information, see [Use the Azure Maps Indoor Maps module].
+
+## The indoor maps workflow
+
+This section provides a high-level overview of the indoor map creation workflow.
+
+1. **Create**. You first must create a drawing package containing one or more CAD
+ (computer-aided design) drawings of your facility along with a [manifest]
+ describing the drawings. You can use the [Azure Maps Creator onboarding tool] to
+ create new and edit existing [manifest files].
+
+1. **Upload**. Upload your drawing packages into your Azure Maps
+ account. Upload drawing packages using the [Data Upload API].
+
+1. **Convert**. Once the drawing package is uploaded into your Azure Maps account,
+ use the [Conversion service] to validate the data in the uploaded drawing
+ package and convert it into map data.
+
+1. **Dataset**. Create a [dataset] from the map data. A dataset is a collection
+ of indoor map [features] that are stored in your Azure Maps account.
+ For more information, see [Work with datasets using the QGIS plugin].
+
+1. **Tileset**. Converting your data into a [tileset] allows
+ you to add it to an Azure Maps map and apply custom styling.
+
+1. **Styles**. Styles drive the visual appearance of spatial features on the map.
+ When a new tileset is created, default styles are automatically associated with the
+ features it contains. These default styles can be modified to suit your needs
+ using the [visual style editor]. For more information, see
+ [Create custom styles for indoor maps].
+
+1. **Wayfinding**. Provide your customers with the shortest path between two points
+ within a facility. For more information, see [Wayfinding].
+
+## Azure Maps Creator documentation
+
+### ![Concept articles](./media/creator-indoor-maps/about-creator/Concepts.png) Concepts
+
+- [Indoor map concepts]
+
+### ![Creator tutorial](./media/creator-indoor-maps/about-creator/tutorials.png) Tutorials
+
+- [Use Azure Maps Creator to create indoor maps]
+
+### ![How-to articles](./media/creator-indoor-maps/about-creator/how-to-guides.png) How-to guides
+
+- [Manage Creator]
+- [Implement Dynamic styling for indoor maps]
+- [Query datasets with WFS API]
+- [Custom styling for indoor maps]
+- [Indoor maps wayfinding service]
+- [Edit indoor maps using the QGIS plugin]
+- [Create dataset using GeoJson package]
+- [Create a feature stateset]
+
+### ![Reference articles](./media/creator-indoor-maps/about-creator/reference.png) Reference
+
+- [Drawing package requirements]
+- [Facility Ontology]
+- [Dynamic maps StylesObject]
+- [Drawing error visualizer]
+- [Azure Maps Creator REST API]
+
+[Azure Maps Creator onboarding tool]: https://azure.github.io/azure-maps-creator-onboarding-tool
+[Azure Maps Creator REST API]: /rest/api/maps-creator
+[Conversion service]: /rest/api/maps/v2/conversion
+[Create a feature stateset]: how-to-creator-feature-stateset.md
+[Create custom styles for indoor maps]: how-to-create-custom-styles.md
+[Create dataset using GeoJson package]: how-to-dataset-geojson.md
+[Custom styling for indoor maps]: how-to-create-custom-styles.md
+[custom styling]: creator-indoor-maps.md#custom-styling-preview
+[Data Upload API]: /rest/api/maps/data-v2/upload
+[dataset]: creator-indoor-maps.md#datasets
+[Drawing error visualizer]: drawing-error-visualizer.md
+[Drawing package guide]: drawing-package-guide.md?pivots=drawing-package-v2
+[Drawing package requirements]: drawing-requirements.md
+[Dynamic maps StylesObject]: schema-stateset-stylesobject.md
+[Edit indoor maps using the QGIS plugin]: creator-qgis-plugin.md
+[Facility Ontology]: creator-facility-ontology.md
+[Features API]: /rest/api/maps/2023-03-01-preview/features
+[features]: glossary.md#feature
+[Implement Dynamic styling for indoor maps]: indoor-map-dynamic-styling.md
+[Indoor map concepts]: creator-indoor-maps.md
+[Indoor maps wayfinding service]: how-to-creator-wayfinding.md
+[Manage Creator]: how-to-manage-creator.md
+[Manifest file requirements]: drawing-requirements.md#manifest-file-requirements-1
+[manifest files]: drawing-requirements.md#manifest-file-1
+[manifest]: drawing-requirements.md#manifest-file-requirements
+[onboarding tool]: https://azure.github.io/azure-maps-creator-onboarding-tool
+[Query datasets with WFS API]: how-to-creator-wfs.md
+[Routeset]: /rest/api/maps/2023-03-01-preview/routeset/create
+[tileset]: creator-indoor-maps.md#tilesets
+[Tilesets]: creator-indoor-maps.md#tilesets
+[Use Azure Maps Creator to create indoor maps]: tutorial-creator-indoor-maps.md
+[Use the Azure Maps Indoor Maps module]: how-to-use-indoor-module.md
+[visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor
+[Wayfinding service]: /rest/api/maps/2023-03-01-preview/wayfinding
+[Wayfinding]: creator-indoor-maps.md#wayfinding-preview
+[Work with datasets using the QGIS plugin]: creator-qgis-plugin.md
azure-maps Choose Map Style https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/choose-map-style.md
For a fully functional sample that shows how the different styles affect how the
<!-- <br/>
-<iframe height="700" scrolling="no" title="Map style options" src="https://codepen.io/azuremaps/embed/eYNMjPb?height=700&theme-id=0&default-tab=result" frameborder="no" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/eYNMjPb'>Map style options</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO https://codepen.io/azuremaps/embed/eYNMjPb?height=700&theme-id=0&default-tab=result]
--> ## Set a base map style
var map = new atlas.Map('map', {
<!-- <br/>
-<iframe height='500' scrolling='no' title='Setting the style on map load' src='//codepen.io/azuremaps/embed/WKOQRq/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/WKOQRq/'>Setting the style on map load</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/WKOQRq/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> ### Update the base map style
map.setStyle({ style: 'satellite' });
<!-- <br/>
-<iframe height='500' scrolling='no' title='Updating the style' src='//codepen.io/azuremaps/embed/yqXYzY/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/yqXYzY/'>Updating the style</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/yqXYzY/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> ## Add the style picker control
map.controls.add(new atlas.control.StyleControl({
<!-- <br/>
-<iframe height='500' scrolling='no' title='Adding the style picker' src='//codepen.io/azuremaps/embed/OwgyvG/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/OwgyvG/'>Adding the style picker</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/OwgyvG/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> ## Next steps
azure-maps Clustering Point Data Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-web-sdk.md
For a complete working sample of how to implement displaying clusters using a bu
<!- <br/>
-<iframe height="500" scrolling="no" title="Basic bubble layer clustering" src="//codepen.io/azuremaps/embed/qvzRZY/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder="no" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/qvzRZY/'>Basic bubble layer clustering</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/qvzRZY/?height=500&theme-id=0&default-tab=js,result&editable=true]
-> ## Display clusters using a symbol layer
For a complete working sample of how to implement displaying clusters using a sy
<!- <br/>
-<iframe height="500" scrolling="no" title="Clustered Symbol layer" src="//codepen.io/azuremaps/embed/Wmqpzz/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder="no" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/Wmqpzz/'>Clustered Symbol layer</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/Wmqpzz/?height=500&theme-id=0&default-tab=js,result&editable=true]
--> ## Clustering and the heat maps layer
For a complete working sample that demonstrates how to create a heat map that us
<!- <br/>
-<iframe height="500" scrolling="no" title="Cluster weighted Heat Map" src="//codepen.io/azuremaps/embed/VRJrgO/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder="no" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/VRJrgO/'>Cluster weighted Heat Map</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/VRJrgO/?height=500&theme-id=0&default-tab=js,result&editable=true]
--> ## Mouse events on clustered data points
function clusterClicked(e) {
<!- <br/>
-<iframe height="500" scrolling="no" title="Cluster getClusterExpansionZoom" src="//codepen.io/azuremaps/embed/moZWeV/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder="no" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/moZWeV/'>Cluster getClusterExpansionZoom</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/moZWeV/?height=500&theme-id=0&default-tab=js,result&editable=true]
> ## Display cluster area
For a complete working sample that demonstrates how to do this, see [Display clu
<!- <br/>
- <iframe height="500" scrolling="no" title="Cluster area convex hull" src="//codepen.io/azuremaps/embed/QoXqWJ/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder="no" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/QoXqWJ/'>Cluster area convex hull</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+ > [!VIDEO //codepen.io/azuremaps/embed/QoXqWJ/?height=500&theme-id=0&default-tab=js,result&editable=true]
> ## Aggregating data in clusters
The [Cluster aggregates] sample uses an aggregate expression. The code calculate
:::image type="content" source="./media/cluster-point-data-web-sdk/cluster-aggregates.png" alt-text="Screenshot showing a map that uses clustering defined using data-driven style expression calculation. These calculations aggregate values across all points contained within the cluster."::: <!-
-<iframe height="500" scrolling="no" title="Cluster aggregates" src="//codepen.io/azuremaps/embed/jgYyRL/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder="no" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/jgYyRL/'>Cluster aggregates</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/jgYyRL/?height=500&theme-id=0&default-tab=js,result&editable=true]
> ## Next steps
azure-maps Create Data Source Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-web-sdk.md
For a complete working sample of how to display data from a vector tile source o
<! <br/>
-<iframe height="500" scrolling="no" title="Vector tile line layer" src="https://codepen.io/azuremaps/embed/wvMXJYJ?height=500&theme-id=default&default-tab=js,result&editable=true" frameborder="no" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/wvMXJYJ'>Vector tile line layer</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO https://codepen.io/azuremaps/embed/wvMXJYJ?height=500&theme-id=default&default-tab=js,result&editable=true]
>
azure-maps Drawing Conversion Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-conversion-error-codes.md
To fix a **verticalPenetrationError** error, read about how to use a vertical pe
> [Creator for indoor mapping] [Conversion service]: /rest/api/maps/v2/conversion
-[Drawing package requirements]: drawing-requirements.md
+[Creator for indoor mapping]: creator-indoor-maps.md
[Drawing files requirements]: drawing-requirements.md#drawing-package-requirements
-[The JavaScript Object Notation (JSON) Data Interchange Format]: https://tools.ietf.org/html/rfc7159
-[manifest section in the Drawing package requirements]: drawing-requirements.md#manifest-file-requirements
-[How to use Azure Maps Drawing error visualizer]: drawing-error-visualizer.md
[Drawing Package Guide]: drawing-package-guide.md
-[Creator for indoor mapping]: creator-indoor-maps.md
+[Drawing package requirements]: drawing-requirements.md
+[How to use Azure Maps Drawing error visualizer]: drawing-error-visualizer.md
+[manifest section in the Drawing package requirements]: drawing-requirements.md#manifest-file-requirements
+[The JavaScript Object Notation (JSON) Data Interchange Format]: https://tools.ietf.org/html/rfc7159
azure-maps Drawing Package Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md
Defining text properties enables you to associate text entities that fall inside
:::image type="content" source="./media/creator-indoor-maps/onboarding-tool/dwg-layers.png" alt-text="Screenshot showing the 'create a new manifest' screen of the onboarding tool."::: > [!IMPORTANT]
-> The following feature class should be defined (not case sensitive) in order to use [wayfinding]. `Wall` will be treated as an obstruction for a given path request. `Stair` and `Elevator` will be treated as level connectors to navigate across floors:
+> The following feature classes should be defined (not case sensitive) in order to use [wayfinding]. `Wall` will be treated as an obstruction for a given path request. `Stair` and `Elevator` will be treated as level connectors to navigate across floors:
>
-> 1. Wall
-> 2. Stair
-> 3. Elevator
+> * Wall
+> * Stair
+> * Elevator
### Download
azure-maps Drawing Tools Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-tools-events.md
The following image shows a screenshot of the complete working sample that demon
<! <br/>
-<iframe height="500" scrolling="no" title="Drawing tools events" src="https://codepen.io/azuremaps/embed/dyPMRWo?height=500&theme-id=default&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/dyPMRWo'>Drawing tools events</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO https://codepen.io/azuremaps/embed/dyPMRWo?height=500&theme-id=default&default-tab=js,result&editable=true]
-->
Let's see some common scenarios that use the drawing tools events.
### Select points in polygon area
-This code demonstrates how to monitor an event of a user drawing shapes. For this example, the code monitors shapes of polygons, rectangles, and circles. Then, it determines which data points on the map are within the drawn area. The `drawingcomplete` event is used to trigger the select logic. In the select logic, the code loops through all the data points on the map. It checks if there's an intersection of the point and the area of the drawn shape. This example makes use of the open-source [Turf.js](https://turfjs.org/) library to perform a spatial intersection calculation.
+This code demonstrates how to monitor an event of a user drawing shapes. For this example, the code monitors shapes of polygons, rectangles, and circles. Then, it determines which data points on the map are within the drawn area. The `drawingcomplete` event is used to trigger the select logic. In the select logic, the code loops through all the data points on the map. It checks if there's an intersection of the point and the area of the drawn shape. This example makes use of the open-source [Turf.js] library to perform a spatial intersection calculation.
For a complete working sample of how to use the drawing tools to draw polygon areas on the map with points within them that can be selected, see [Select data in drawn polygon area] in the [Azure Maps Samples]. For the source code for this sample, see [Select data in drawn polygon area sample code].
For a complete working sample of how to use the drawing tools to draw polygon ar
<!- <br/>
-<iframe height="500" scrolling="no" title="Select data in drawn polygon area" src="https://codepen.io/azuremaps/embed/XWJdeja?height=500&theme-id=default&default-tab=result" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/XWJdeja'>Select data in drawn polygon area</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO https://codepen.io/azuremaps/embed/XWJdeja?height=500&theme-id=default&default-tab=result]
-> ### Draw and search in polygon area
For a complete working sample of how to use the drawing tools to search for poin
<!- <br/>
-<iframe height="500" scrolling="no" title="Draw and search in polygon area" src="https://codepen.io/azuremaps/embed/eYmZGNv?height=500&theme-id=default&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/eYmZGNv'>Draw and search in polygon area</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO https://codepen.io/azuremaps/embed/eYmZGNv?height=500&theme-id=default&default-tab=js,result&editable=true]
-> ### Create a measuring tool
For a complete working sample of how to use the drawing tools to measure distanc
:::image type="content" source="./media/drawing-tools-events/create-a-measuring-tool.png" alt-text="Screenshot showing a map displaying the measuring tool sample."::: <!-
-<iframe height="500" scrolling="no" title="Measuring tool" src="https://codepen.io/azuremaps/embed/RwNaZXe?height=500&theme-id=default&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/RwNaZXe'>Measuring tool</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO https://codepen.io/azuremaps/embed/RwNaZXe?height=500&theme-id=default&default-tab=js,result&editable=true]
-> ## Next steps
For a complete working sample of how to use the drawing tools to measure distanc
Learn how to use other features of the drawing tools module: > [!div class="nextstepaction"]
-> [Get shape data](map-get-shape-data.md)
+> [Get shape data]
> [!div class="nextstepaction"]
-> [Interaction types and keyboard shortcuts](drawing-tools-interactions-keyboard-shortcuts.md)
+> [Interaction types and keyboard shortcuts]
Learn more about the services module: > [!div class="nextstepaction"]
-> [Services module](how-to-use-services-module.md)
+> [Services module]
Check out more code samples: > [!div class="nextstepaction"]
-> [Code sample page](https://aka.ms/AzureMapsSamples)
+> [Code sample page]
[Azure Maps Samples]:https://samples.azuremaps.com
-[Drawing tools events]: https://samples.azuremaps.com/drawing-tools-module/drawing-tools-events
+[Code sample page]: https://aka.ms/AzureMapsSamples
+[Create a measuring tool sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Create%20a%20measuring%20tool/Create%20a%20measuring%20tool.html
+[Create a measuring tool]: https://samples.azuremaps.com/drawing-tools-module/create-a-measuring-tool
+[Draw and search polygon area sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Draw%20and%20search%20polygon%20area/Draw%20and%20search%20polygon%20area.html
+[Draw and search polygon area]: https://samples.azuremaps.com/drawing-tools-module/draw-and-search-polygon-area
[Drawing tools events sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Drawing%20tools%20events/Drawing%20tools%20events.html
-[Select data in drawn polygon area]: https://samples.azuremaps.com/drawing-tools-module/select-data-in-drawn-polygon-area
+[Drawing tools events]: https://samples.azuremaps.com/drawing-tools-module/drawing-tools-events
+[Get shape data]: map-get-shape-data.md
+[Interaction types and keyboard shortcuts]: drawing-tools-interactions-keyboard-shortcuts.md
[Select data in drawn polygon area sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Select%20data%20in%20drawn%20polygon%20area/Select%20data%20in%20drawn%20polygon%20area.html
-[Draw and search polygon area]: https://samples.azuremaps.com/drawing-tools-module/draw-and-search-polygon-area
-[Draw and search polygon area sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Draw%20and%20search%20polygon%20area/Draw%20and%20search%20polygon%20area.html
-[Create a measuring tool]: https://samples.azuremaps.com/drawing-tools-module/create-a-measuring-tool
-[Create a measuring tool sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Create%20a%20measuring%20tool/Create%20a%20measuring%20tool.html
+[Select data in drawn polygon area]: https://samples.azuremaps.com/drawing-tools-module/select-data-in-drawn-polygon-area
+[Services module]: how-to-use-services-module.md
+[Turf.js]: https://turfjs.org
azure-maps Geocoding Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geocoding-coverage.md
The ability to geocode in a country/region is dependent upon the road data cover
Learn more about Azure Maps geocoding: > [!div class="nextstepaction"]
-> [Azure Maps Search service](/rest/api/maps/search)
+> [Azure Maps Search service]
[Search service]: /rest/api/maps/search
-[Get Search Address API]: /rest/api/maps/search/getsearchaddress
+[Azure Maps Search service]: /rest/api/maps/search
+[Get Search Address]: /rest/api/maps/search/get-search-address
azure-maps Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/glossary.md
The following list describes common words used with the Azure Maps services.
<a name="zip-code"></a> **Zip code**: See [Postal code].
-<a name="Zoom level"></a> **Zoom level**: Specifies the level of detail and how much of the map is visible. When zoomed all the way to level 0, the full world map is often visible. But, the map shows limited details such as country/region names, borders, and ocean names. When zoomed in closer to level 17, the map displays an area of a few city blocks with detailed road information. In Azure Maps, the highest zoom level is 22. For more information, see [Zoom levels and tile grid].
+<a name="Zoom level"></a> **Zoom level**: Specifies the level of detail and how much of the map is visible. When zoomed all the way to level 0, the full world map is often visible. But, the map shows limited details such as country/region names, borders, and ocean names. When zoomed in closer to level 17, the map displays an area of a few city blocks with detailed road information. In Azure Maps, the highest zoom level is 22. For more information, see the [Zoom levels and tile grid] documentation.
-[Satellite imagery]: #satellite-imagery
-[Shared key authentication]: #shared-key-authentication
+[Altitude]: #altitude
[Azure Maps and Azure AD]: azure-maps-authentication.md
-[Manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Bearing]: #heading
[Bounding box]: #bounding-box
-[Parcel]: #parcel
[consumption model documentation]: consumption-model.md
+[EPSG:3857]: https://epsg.io/3857
[Extended geojson]: extend-geojson.md
-[Bearing]: #bearing
-[Reachable Range]: #reachable-range
-[Zoom levels and tile grid]: zoom-levels-and-tile-grid.md
-[Postal code]: #postal-code
[Isochrone]: #isochrone
[Isodistance]: #isodistance
-[Transformation]: #transformation
+[Manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Parcel]: #parcel
+[Postal code]: #postal-code
[Queries Per Second (QPS)]: #queries-per-second-qps
-[EPSG:3857]: https://epsg.io/3857
+[Reachable Range]: #reachable-range
+[Satellite imagery]: #satellite-imagery
+[Shared key authentication]: #shared-key-authentication
[Spatial Data (SQL Server)]: /sql/relational-databases/spatial/spatial-data-sql-server
[Tile layer]: #tile-layer
+[Transformation]: #transformation
[Traveling Salesmen Problem]: #traveling-salesmen-problem-tsp
[Vehicle Routing Problem]: #vehicle-routing-problem-vrp
-[Altitude]: #altitude
[Web Mercator]: #web-mercator
+[Zoom levels and tile grid]: zoom-levels-and-tile-grid.md
azure-maps How To Create Custom Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-custom-styles.md
Now when you select that unit in the map, the pop-up menu has the new layer ID,
## Next steps > [!div class="nextstepaction"]
-> [Use the Azure Maps Indoor Maps module](how-to-use-indoor-module.md)
+> [Use the Azure Maps Indoor Maps module]
+[categories]: https://atlas.microsoft.com/sdk/javascript/indoor/0.2/categories.json
[Creator concepts]: creator-indoor-maps.md
-[tileset]: /rest/api/maps/v20220901preview/tileset
-[tileset get]: /rest/api/maps/v20220901preview/tileset/get
-[Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md
[Creators Rest API]: /rest/api/maps-creator/
+[Instantiate the Indoor Manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager
+[manifest]: drawing-requirements.md#manifest-file-requirements
+[map configuration]: creator-indoor-maps.md#map-configuration
[style editor]: https://azure.github.io/Azure-Maps-Style-Editor
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[manifest]: drawing-requirements.md#manifest-file-requirements
+[tileset get]: /rest/api/maps/v20220901preview/tileset/get
+[tileset]: /rest/api/maps/v20220901preview/tileset
[unitProperties]: drawing-requirements.md#unitproperties
-[categories]: https://atlas.microsoft.com/sdk/javascript/indoor/0.2/categories.json
-[Instantiate the Indoor Manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager
-[map configuration]: creator-indoor-maps.md#map-configuration
+[Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md
+[Use the Azure Maps Indoor Maps module]: how-to-use-indoor-module.md
azure-maps How To Create Data Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-data-registries.md
The [data registry] service enables you to register data content in an Azure Sto
## Prerequisites
-- [Azure Maps account]
-- [Subscription key]
-- An [Azure storage account][create storage account]
+- An [Azure Maps account]
+- A [Subscription key]
+- An [Azure storage account]
>[!IMPORTANT] >
-> - This article uses the `us.atlas.microsoft.com` geographical URL. If your account wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator services](how-to-manage-creator.md#access-to-creator-services).
+> - This article uses the `us.atlas.microsoft.com` geographical URL. If your account wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator services].
> - In the URL examples in this article you will need to replace: > - `{Azure-Maps-Subscription-key}` with your Azure Maps [subscription key].
-> - `{udid}` with the user data ID of your data registry. For more information, see [The user data ID](#the-user-data-id).
+> - `{udid}` with the user data ID of your data registry. For more information, see [The user data ID].
## Prepare to register data in Azure Maps
Before you can register data in Azure Maps, you need to create an environment co
### Create managed identities
-There are two types of managed identities: **system-assigned** and **user-assigned**. System-assigned managed identities have their lifecycle tied to the resource that created them. User-assigned managed identities can be used on multiple resources. For more information, see [managed identities for Azure resources][managed identity].
+There are two types of managed identities: **system-assigned** and **user-assigned**. System-assigned managed identities have their lifecycle tied to the resource that created them. User-assigned managed identities can be used on multiple resources. For more information, see [managed identities for Azure resources].
Use the following steps to create a managed identity and add it to your Azure Maps account.
The user defined managed identity should now be added to your Azure Maps account
-For more information, see [managed identities for Azure resources][managed identity].
+For more information, see [managed identities for Azure resources].
### Create a container and upload data files
To create a container in the [Azure portal], follow these steps:
Once you've created an Azure storage account with files uploaded into one or more containers, you're ready to create the datastore that links the storage accounts to your Azure Maps account.
> [!IMPORTANT]
-> All storage accounts linked to an Azure Maps account must be in the same geographic location. For more information, see [Azure Maps service geographic scope][geographic scope].
+> All storage accounts linked to an Azure Maps account must be in the same geographic location. For more information, see [Azure Maps service geographic scope].
+ > [!NOTE]
-> If you do not have a storage account see [Create a storage account][create storage account].
+> If you do not have a storage account, see [Create a storage account].
1. Select **Datastore** from the left menu in your Azure Maps account.
1. Select the **Add** button. An **Add datastore** screen appears on the right side.
To assign roles to your managed identities and associate them with a datastore:
With a datastore created in your Azure Maps account, you're ready to gather the properties required to create the data registry.
-There are the AzureBlob properties that you pass in the body of the HTTP request, and [The user data ID](#the-user-data-id) passed in the URL.
+There are the `AzureBlob` properties that you pass in the body of the HTTP request, and [The user data ID] that you pass in the URL.
### The AzureBlob
The `AzureBlob` is a JSON object that defines properties required to create the
|`linkedResource`| The ID of the datastore registered in the Azure Maps account.<BR>The datastore contains a link to the file being registered. |
| `blobUrl` | A URL pointing to the location of the AzureBlob, the file imported into your container. |
-The following two sections provide you with details how to get the values to use for the [msiClientId](#the-msiclientid-property), [blobUrl](#the-bloburl-property) properties.
+The following two sections provide details on how to get the values to use for the [msiClientId] and [blobUrl] properties.
#### The msiClientId property
-The `msiClientId` property is the ID of the managed identity used to create the data registry. There are two types of managed identities: **system-assigned** and **user-assigned**. System-assigned managed identities have their lifecycle tied to the resource that created them. User-assigned managed identities can be used on multiple resources. For more information, see [What are managed identities for Azure resources?][managed identity].
+The `msiClientId` property is the ID of the managed identity used to create the data registry. There are two types of managed identities: **system-assigned** and **user-assigned**. System-assigned managed identities have their lifecycle tied to the resource that created them. User-assigned managed identities can be used on multiple resources. For more information, see [managed identities for Azure resources].
# [system-assigned](#tab/System-assigned)
The user data ID (`udid`) of the data registry is a user-defined GUID that must
``` > [!TIP]
-> The `udid` is a user-defined GUID that must be supplied when creating a data registry. If you want to be certain you have a globally unique identifier (GUID), consider creating it by running a GUID generating tool such as the Guidgen.exe command line program (Available with [Visual Studio][Visual Studio]).
+> The `udid` is a user-defined GUID that must be supplied when creating a data registry. If you want to be certain you have a globally unique identifier (GUID), consider creating it by running a GUID generating tool such as the Guidgen.exe command line program (Available with [Visual Studio]).
## Create a data registry
To create a data registry:
> [!NOTE] > When using System-assigned managed identities, you will get an error if you provide a value for the msiClientId property in your HTTP request.
- For more information on the properties required in the HTTP request body, see [Data registry properties](#data-registry-properties).
+ For more information on the properties required in the HTTP request body, see [Data registry properties].
1. Once you have the body of your HTTP request ready, execute the following **HTTP PUT request**:
To create a data registry:
```
- For more information on the `udid` property, see [The user data ID](#the-user-data-id).
+ For more information on the `udid` property, see [The user data ID].
1. Copy the value of the **Operation-Location** key from the response header.
To create a data registry:
> [!NOTE] > When using User-assigned managed identities, you will get an error if you don't provide a value for the msiClientId property in your HTTP request.
- For more information on the properties required in the HTTP request body, see [Data registry properties](#data-registry-properties).
+ For more information on the properties required in the HTTP request body, see [Data registry properties].
1. Once you have the body of your HTTP request ready, execute the following **HTTP PUT request**:
To create a data registry:
```
- For more information on the `udid` property, see [The user data ID](#the-user-data-id).
+ For more information on the `udid` property, see [The user data ID].
1. Copy the value of the **Operation-Location** key from the response header. > [!TIP]
-> If the contents of a previously registered file is modified, it will fail its [data validation](#data-validation) and won't be usable in Azure Maps until it's re-registered. To re-register a file, rerun the register request, passing in the same [AzureBlob](#the-azureblob) used to create the original registration.
-The value of the **Operation-Location** key is the status URL that you'll use to check the status of the data registry creation in the next section, it contains the operation ID used by the [Get operation][Get operation] API.
+> If the content of a previously registered file is modified, it will fail its [data validation] and won't be usable in Azure Maps until it's re-registered. To re-register a file, rerun the register request, passing in the same [AzureBlob] used to create the original registration.
+The value of the **Operation-Location** key is the status URL that you'll use to check the status of the data registry creation in the next section; it contains the operation ID used by the [Get operation] API.
> [!NOTE]
> The value of the **Operation-Location** key will not contain the `subscription-key`; you will need to add that to the request URL when using it to check the data registry creation status.
### Check the data registry creation status
-To (optionally) check the status of the data registry creation process, enter the status URL you copied in the [Create a data registry](#create-a-data-registry) section, and add your subscription key as a query string parameter. The request should look similar to the following URL:
+To (optionally) check the status of the data registry creation process, enter the status URL you copied in the [Create a data registry] section, and add your subscription key as a query string parameter. The request should look similar to the following URL:
```http https://us.atlas.microsoft.com/dataRegistries/operations/{udid}?api-version=2023-06-01&subscription-key={Your-Azure-Maps-Primary-Subscription-key}
https://us.atlas.microsoft.com/dataRegistries/operations/{udid}?api-version=2023
## Get a list of all files in the data registry
-Use the [List][list] request to get a list of all files registered in an Azure Maps account:
+Use the [List] request to get a list of all files registered in an Azure Maps account:
```http https://us.atlas.microsoft.com/dataRegistries?api-version=2023-06-01&subscription-key={Azure-Maps-Subscription-key}
The data returned when running the list request is similar to the data provided
| property | description |
|-|--|
-| contentMD5 | MD5 hash created from the contents of the file being registered. For more information, see [Data validation](#data-validation) |
+| contentMD5 | MD5 hash created from the contents of the file being registered. For more information, see [Data validation] |
| sizeInBytes | The size of the content in bytes. |
## Replace a data registry
-If you need to replace a previously registered file with another file, rerun the register request, passing in the same [AzureBlob](#the-azureblob) used to create the original registration, except for the [blobUrl](#the-bloburl-property). The `BlobUrl` needs to be modified to point to the new file.
+If you need to replace a previously registered file with another file, rerun the register request, passing in the same [AzureBlob] used to create the original registration, except for the [blobUrl]. The `blobUrl` needs to be modified to point to the new file.
## Data validation
When you register a file in Azure Maps using the data registry API, an MD5 hash is created from the contents of the file, encoding it into a 128-bit fingerprint and saving it in the `AzureBlob` as the `contentMD5` property. The MD5 hash stored in the `contentMD5` property is used to ensure the data integrity of the file. Since the MD5 hash algorithm always produces the same output given the same input, the data validation process can compare the `contentMD5` property of the file when it was registered against a hash of the file in the Azure storage account to check that it's intact and unmodified. If the hash isn't the same, the validation fails. If the file in the underlying storage account changes, the validation fails. If you need to modify the contents of a file that has been registered in Azure Maps, you need to register it again.
<!- end-style links ->
+[Access to Creator services]: how-to-manage-creator.md#access-to-creator-services
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Azure Maps service geographic scope]: geographic-scope.md
[Azure portal]: https://portal.azure.com/
-[create storage account]: /azure/storage/common/storage-account-create?tabs=azure-portal
-[geographic scope]: geographic-scope.md
-[managed identity]: /azure/active-directory/managed-identities-azure-resources/overview
-[storage account overview]: /azure/storage/common/storage-account-overview
-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[Visual Studio]: https://visualstudio.microsoft.com/downloads/
-<!- REST API Links >
+[Azure storage account]: /azure/storage/common/storage-account-create?tabs=azure-portal
+[AzureBlob]: #the-azureblob
+[blobUrl]: #the-bloburl-property
+[Create a data registry]: #create-a-data-registry
+[Create a storage account]: /azure/storage/common/storage-account-create?tabs=azure-portal
+[Data registry properties]: #data-registry-properties
[data registry]: /rest/api/maps/data-registry
+[Data validation]: #data-validation
[Get operation]: /rest/api/maps/data-registry/get-operation
[list]: /rest/api/maps/data-registry/list
-[Register]: /rest/api/maps/data-registry/register-or-replace
+[managed identities for Azure resources]: /azure/active-directory/managed-identities-azure-resources/overview
+[msiClientId]: #the-msiclientid-property
+[storage account overview]: /azure/storage/common/storage-account-overview
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[The user data ID]: #the-user-data-id
+[Visual Studio]: https://visualstudio.microsoft.com/downloads/
azure-maps How To Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-template.md
The template used in this quickstart is from [Azure Quickstart Templates].
The Azure Maps account resource is defined in this template:
-* [**Microsoft.Maps/accounts**](/azure/templates/microsoft.maps/accounts): create an Azure Maps account.
+* [**Microsoft.Maps/accounts**]: create an Azure Maps account.
## Deploy the template
To learn more about Azure Maps and Azure Resource Manager, see the following art
* Create an Azure Maps [demo application] * Learn more about [ARM templates]
-[free account]: https://azure.microsoft.com/free/?WT.mc_id=A261C142F
+[**Microsoft.Maps/accounts**]: /azure/templates/microsoft.maps/accounts
+[ARM templates]: ../azure-resource-manager/templates/overview.md
[Azure Quickstart Templates]: https://azure.microsoft.com/resources/templates/maps-create
[demo application]: quick-demo-map-app.md
-[ARM templates]: ../azure-resource-manager/templates/overview.md
[Deploy templates]: ../azure-resource-manager/templates/deploy-powershell.md
+[free account]: https://azure.microsoft.com/free/?WT.mc_id=A261C142F
azure-maps How To Dataset Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md
Azure Maps Creator enables users to import their indoor map data in GeoJSON format with [Facility Ontology 2.0], which can then be used to create a [dataset]. > [!NOTE]
-> This article explains how to create a dataset from a GeoJSON package. For information on additional steps required to complete an indoor map, see [Next steps](#next-steps).
+> This article explains how to create a dataset from a GeoJSON package. For information on additional steps required to complete an indoor map, see [Next steps].
## Prerequisites
-- Basic understanding of [Creator for indoor maps](creator-indoor-maps.md).
-- Basic understanding of [Facility Ontology 2.0].
-- [Azure Maps account]
+- An [Azure Maps account]
+- A [Subscription key]
+- An Azure Maps [Creator resource]
- Basic understanding of [Creator for indoor maps]
- Basic understanding of [Facility Ontology 2.0]
-- An [Azure Maps account]
-- An Azure Maps [Creator resource].
-- A [Subscription key].
- Zip package containing all required GeoJSON files. If you don't have GeoJSON files, you can download the [Contoso building sample].
>[!IMPORTANT]
Azure Maps Creator enables users to import their indoor map data in GeoJSON form
## Create dataset using the GeoJSON package
-For more information on the GeoJSON package, see the [Geojson zip package requirements](#geojson-zip-package-requirements) section.
+For more information on the GeoJSON package, see the [Geojson zip package requirements] section.
### Upload the GeoJSON package
https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0&
A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset from your GeoJSON, use the new [Dataset Create API]. The Dataset Create API takes the `udid` you got in the previous section and returns the `datasetId` of the new dataset. > [!IMPORTANT]
-> This is different from the [previous version][Dataset Create] in that it doesn't require a `conversionId` from a converted drawing package.
+> This is different from the previous version of the [Dataset Create] API in that it doesn't require a `conversionId` from a converted drawing package.
To create a dataset:
-1. Enter the following URL to the dataset service. The request should look like the following URL (replace {udid} with the `udid` obtained in [Check the GeoJSON package upload status](#check-the-geojson-package-upload-status) section):
+1. Enter the following URL to the dataset service. The request should look like the following URL (replace {udid} with the `udid` obtained in the [Check the GeoJSON package upload status] section):
```http https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&udid={udid}&subscription-key={Your-Azure-Maps-Subscription-key}
To create a dataset:
To check the status of the dataset creation process and retrieve the `datasetId`:
-1. Enter the status URL you copied in [Create a dataset](#create-a-dataset). The request should look like the following URL:
+1. Enter the status URL you copied in [Create a dataset]. The request should look like the following URL:
```http https://us.atlas.microsoft.com/datasets/operations/{operationId}?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
To check the status of the dataset creation process and retrieve the `datasetId`
> `https://us.atlas.microsoft.com/datasets/**c9c15957-646c-13f2-611a-1ea7adc75174**?api-version=2022-09-01-preview`
-See [Next steps](#next-steps) for links to articles to help you complete your indoor map.
+See [Next steps] for links to articles to help you complete your indoor map.
## Add data to an existing dataset
Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.)
### Facility ontology 2.0 validations in the Dataset
-[Facility ontology] defines how Azure Maps Creator internally stores facility data, divided into feature classes, in a Creator dataset. When importing a GeoJSON package, anytime a feature is added or modified, a series of validations run. This includes referential integrity checks and geometry and attribute validations. These validations are described in more detail in the following list.
+[Facility Ontology 2.0] defines how Azure Maps Creator internally stores facility data, divided into feature classes, in a Creator dataset. When importing a GeoJSON package, anytime a feature is added or modified, a series of validations runs. This includes referential integrity checks and geometry and attribute validations. These validations are described in more detail in the following list.
- The maximum number of features that can be imported into a dataset at a time is 150,000.
- The facility area can be between 4 and 4,000 Sq Km.
Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.)
> [!div class="nextstepaction"] > [Create a tileset]
-<! learn.microsoft.com links >
[Access to Creator services]: how-to-manage-creator.md#access-to-creator-services
[area]: creator-facility-ontology.md?pivots=facility-ontology-v2#areaelement
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Check the GeoJSON package upload status]: #check-the-geojson-package-upload-status
+[Contoso building sample]: https://github.com/Azure-Samples/am-creator-indoor-data-examples
[Convert a drawing package]: tutorial-creator-indoor-maps.md#convert-a-drawing-package
+[Create a dataset]: #create-a-dataset
[Create a tileset]: tutorial-creator-indoor-maps.md#create-a-tileset
[Creator for indoor maps]: creator-indoor-maps.md
[Creator Long-Running Operation API V2]: creator-long-running-operation-v2.md
[Creator resource]: how-to-manage-creator.md
+[Data Upload API]: /rest/api/maps/data-v2/upload
+[Dataset Create API]: /rest/api/maps/v20220901preview/dataset/create
+[Dataset Create]: /rest/api/maps/v2/dataset/create
[dataset]: creator-indoor-maps.md#datasets
[Facility Ontology 2.0]: creator-facility-ontology.md?pivots=facility-ontology-v2
[facility]: creator-facility-ontology.md?pivots=facility-ontology-v2#facility
+[Geojson zip package requirements]: #geojson-zip-package-requirements
[level]: creator-facility-ontology.md?pivots=facility-ontology-v2#level
[line]: creator-facility-ontology.md?pivots=facility-ontology-v2#lineelement
+[Next steps]: #next-steps
[openings]: creator-facility-ontology.md?pivots=facility-ontology-v2#opening
[point]: creator-facility-ontology.md?pivots=facility-ontology-v2#pointelement
+[RFC 7946]: https://www.rfc-editor.org/rfc/rfc7946.html
[structures]: creator-facility-ontology.md?pivots=facility-ontology-v2#structure
[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[units]: creator-facility-ontology.md?pivots=facility-ontology-v2#unit
[verticalPenetrations]: creator-facility-ontology.md?pivots=facility-ontology-v2#verticalpenetration
-<! REST API Links >
-[Data Upload API]: /rest/api/maps/data-v2/upload
-[Dataset Create API]: /rest/api/maps/v20220901preview/dataset/create
-[Dataset Create]: /rest/api/maps/v2/dataset/create
-<! External Links >
-[Contoso building sample]: https://github.com/Azure-Samples/am-creator-indoor-data-examples
-[RFC 7946]: https://www.rfc-editor.org/rfc/rfc7946.html
[Visual Studio]: https://visualstudio.microsoft.com/downloads/
azure-maps How To Dev Guide Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-java-sdk.md
The Azure Maps Java SDK can be integrated with Java applications and libraries t
## Prerequisites
-- [Azure Maps account].
-- [Subscription key] or other form of [authentication].
+- An [Azure Maps account]
+- A [Subscription key] or other form of [authentication]
- [Java Version 8] or above
- Maven (any version). For more information, see [Get started with Azure SDK and Apache Maven][maven].
The client object used to access the Azure Maps Search APIs require either an `A
### Using an Azure AD credential
-You can authenticate with Azure AD using the [Azure Identity library][Identity library]. To use the [DefaultAzureCredential] provider, you need to add the mvn dependency in the `pom.xml` file:
+You can authenticate with Azure AD using the [Azure Identity library]. To use the [DefaultAzureCredential] provider, you need to add the mvn dependency in the `pom.xml` file:
```xml <dependency>
You can authenticate with Azure AD using the [Azure Identity library][Identity l
</dependency> ```
-You need to register the new Azure AD application and grant access to Azure Maps by assigning the required role to your service principal. For more information, see [Host a daemon on non-Azure resources][Host daemon]. The Application (client) ID, a Directory (tenant) ID, and a client secret are returned. Copy these values and store them in a secure place. You need them in the following steps.
+You need to register the new Azure AD application and grant access to Azure Maps by assigning the required role to your service principal. For more information, see [Host a daemon on non-Azure resources]. The Application (client) ID, a Directory (tenant) ID, and a client secret are returned. Copy these values and store them in a secure place. You need them in the following steps.
Set the values of the Application (client) ID, Directory (tenant) ID, and client secret of your Azure AD application, and the map resource's client ID as environment variables:
public class Demo{
[authentication]: azure-maps-authentication.md
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
[defaultazurecredential]: /azure/developer/java/sdk/identity-azure-hosted-auth#default-azure-credential
-[Host daemon]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources
-[Identity library]: /java/api/overview/azure/identity-readme?source=recommendations&view=azure-java-stable
+[Host a daemon on non-Azure resources]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources
+[Azure Identity library]: /java/api/overview/azure/identity-readme?source=recommendations&view=azure-java-stable
[Java Standard Versions]: https://www.oracle.com/java/technologies/downloads/
[Java Version 8]: /azure/developer/java/fundamentals/?view=azure-java-stable
[maven]: /azure/developer/java/sdk/get-started-maven
azure-maps How To Use Image Templates Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-image-templates-web-sdk.md
For the source code for this sample, see [Symbol layer with built-in icon templa
<!-- <br/>
-<iframe height="500" scrolling="no" title="Symbol layer with built-in icon template" src="//codepen.io/azuremaps/embed/VoQMPp/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/VoQMPp/'>Symbol layer with built-in icon template</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/VoQMPp/?height=500&theme-id=0&default-tab=js,result&editable=true]
--> ## Use an image template along a lines path
The [Line layer with built-in icon template] demonstrates how to do this. As sho
<!-- <br/>
-<iframe height="500" scrolling="no" title="Line layer with built-in icon template" src="//codepen.io/azuremaps/embed/KOQvJe/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/KOQvJe/'>Line layer with built-in icon template</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/KOQvJe/?height=500&theme-id=0&default-tab=js,result&editable=true]
--> > [!TIP]
The [Fill polygon with built-in icon template] sample demonstrates how to render
<!-- <br/>
-<iframe height="500" scrolling="no" title="Fill polygon with built-in icon template" src="//codepen.io/azuremaps/embed/WVMEmz/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/WVMEmz/'>Fill polygon with built-in icon template</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/WVMEmz/?height=500&theme-id=0&default-tab=js,result&editable=true]
--> > [!TIP]
The [HTML Marker with built-in icon template] sample demonstrates this using the
<!-- <br/>
-<iframe height="500" scrolling="no" title="HTML Marker with built-in icon template" src="//codepen.io/azuremaps/embed/EqQvzq/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/EqQvzq/'>HTML Marker with built-in icon template</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/EqQvzq/?height=500&theme-id=0&default-tab=js,result&editable=true]
--> > [!TIP]
The [Add custom icon template to atlas namespace] sample demonstrates how to tak
<!-- <br/>
-<iframe height="500" scrolling="no" title="Add custom icon template to atlas namespace" src="//codepen.io/azuremaps/embed/NQyvEX/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/NQyvEX/'>Add custom icon template to atlas namespace</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/NQyvEX/?height=500&theme-id=0&default-tab=js,result&editable=true]
-> ## List of image templates
With the following tool, you can render the different built-in image templates i
<br/>
-<iframe height="500" scrolling="no" title="Icon template options" src="//codepen.io/azuremaps/embed/NQyaaO/?height=500&theme-id=0&default-tab=result" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/NQyaaO/'>Icon template options</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/NQyaaO/?height=500&theme-id=0&default-tab=result]
## Next steps
azure-maps How To Use Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-map-control.md
atlas.setDomain('atlas.azure.us');
Be sure to use Azure Maps authentication details from the Azure Government cloud platform when authenticating the map and services.
-The domain for the services needs to be set when creating an instance of an API URL endpoint, when using the services module. For example, the following code creates an instance of the `SearchURL` class and points the domain to the Azure Government cloud.
-
-```javascript
-var searchURL = new atlas.service.SearchURL(pipeline, 'atlas.azure.us');
-```
-
-If directly accessing the Azure Maps REST services, change the URL domain to `atlas.azure.us`. For example, if using the search API service, change the URL domain from `https://atlas.microsoft.com/search/` to `https://atlas.azure.us/search/`.
- ## JavaScript frameworks If developing using a JavaScript framework, one of the following open-source projects may be useful:
azure-maps How To Use Services Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-services-module.md
The following image is a screenshot showing the results of this sample code, a t
:::image type="content" source="./media/how-to-use-services-module/services-module-in-webpage.png"alt-text="A screenshot of an HTML table showing the address searched and the resulting coordinates."::: <!-
-<iframe height="500" scrolling="no" title="Using the Services Module" src="//codepen.io/azuremaps/embed/zbXGMR/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/zbXGMR/'>Using the Services Module</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/zbXGMR/?height=500&theme-id=0&default-tab=js,result&editable=true]
> ## Azure Government cloud support
azure-maps Map Accessibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-accessibility.md
The [Accessible popups] example loads points of interests on the map using a sym
<! <br/>
-<iframe height='500' scrolling='no' title='Make an accessible application' src='//codepen.io/azuremaps/embed/ZoVyZQ/?height=504&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/ZoVyZQ/'>Make an accessible application</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/ZoVyZQ/?height=504&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
<br/> >
azure-maps Map Add Bubble Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-bubble-layer.md
map.events.add("load", function () {
<! <br/>
-<iframe height='500' scrolling='no' title='BubbleLayer DataSource' src='//codepen.io/azuremaps/embed/mzqaKB/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/mzqaKB/'>BubbleLayer DataSource</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/mzqaKB/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
> ## Show labels with a bubble layer
This code shows you how to use a bubble layer to render a point on the map and a
<! <br/>
-<iframe height='500' scrolling='no' title='MultiLayer DataSource' src='//codepen.io/azuremaps/embed/rqbQXy/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/rqbQXy/'>MultiLayer DataSource</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/rqbQXy/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
> ## Customize a bubble layer
The Bubble layer only has a few styling options. Use the [Bubble Layer Options]
<!- <br/>
-<iframe height='700' scrolling='no' title='Bubble Layer Options' src='//codepen.io/azuremaps/embed/eQxbGm/?height=700&theme-id=0&default-tab=result' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/eQxbGm/'>Bubble Layer Options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/eQxbGm/?height=700&theme-id=0&default-tab=result]
> ## Next steps
azure-maps Map Add Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-controls.md
map.controls.add(new atlas.control.ZoomControl(), {
<!- <br/>
-<iframe height='500' scrolling='no' title='Adding a zoom control' src='//codepen.io/azuremaps/embed/WKOQyN/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/WKOQyN/'>Adding a zoom control</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/WKOQyN/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
-> ## Add pitch control
map.controls.add(new atlas.control.PitchControl(), {
<!- <br/>
-<iframe height='500' scrolling='no' title='Adding a pitch control' src='//codepen.io/azuremaps/embed/xJrwaP/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/xJrwaP/'>Adding a pitch control</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/xJrwaP/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
-> ## Add compass control
map.controls.add(new atlas.control.CompassControl(), {
<!- <br/>
-<iframe height='500' scrolling='no' title='Adding a rotate control' src='//codepen.io/azuremaps/embed/GBEoRb/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/GBEoRb/'>Adding a rotate control</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/GBEoRb/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
-> ## A Map with all controls
The following image shows a map with the zoom, compass, pitch, and style picker
<!- <br/>
-<iframe height='500' scrolling='no' title='A map with all the controls' src='//codepen.io/azuremaps/embed/qyjbOM/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/qyjbOM/'>A map with all the controls</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/qyjbOM/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
-> The style picker control is defined by the [StyleControl] class. For more information on using the style picker control, see [choose a map style].
The [Navigation Control Options] sample is a tool to test out the various option
<!- <br/>
-<iframe height="700" scrolling="no" title="Navigation control options" src="//codepen.io/azuremaps/embed/LwBZMx/?height=700&theme-id=0&default-tab=result" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/LwBZMx/'>Navigation control options</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/LwBZMx/?height=700&theme-id=0&default-tab=result]
-> If you want to create customized navigation controls, create a class that extends the `atlas.Control` class, or create an HTML element and position it above the map div. Have this UI control call the map's `setCamera` function to move the map.
azure-maps Map Add Custom Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-custom-html.md
For a complete working sample of how to add an HTML marker, see [Simple HTML Mar
:::image type="content" source="./media/map-add-custom-html/simple-html-marker.png" alt-text="Screenshot showing a map of the world with a simple HtmlMarker."::: <!-
-<iframe height='500' scrolling='no' title='Add an HTML Marker to a map' src='//codepen.io/azuremaps/embed/MVoeVw/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/MVoeVw/'>Add an HTML Marker to a map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/MVoeVw/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> ## Create SVG templated HTML marker
For a complete working sample of how to create a custom SVG template and use it
:::image type="content" source="./media/map-add-custom-html/html-marker-with-custom-svg-template.png" alt-text="Screenshot showing a map of the world with a custom SVG template used with the HtmlMarker class. It includes a button labeled update marker options, that when selected changes the color and text options from the SVG template used in the HtmlMarker. "::: <!-
-<iframe height='500' scrolling='no' title='HTML Marker with Custom SVG Template' src='//codepen.io/azuremaps/embed/LXqMWx/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/LXqMWx/'>HTML Marker with Custom SVG Template</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/LXqMWx/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> > [!TIP]
For a complete working sample of how to use CSS and HTML to create a marker on t
:::image type="content" source="./media/map-add-custom-html/css-styled-html-marker.gif" alt-text="Screenshot showing a CSS styled HTML marker. "::: <!-
-<iframe height='500' scrolling='no' title='HTML DataSource' src='//codepen.io/azuremaps/embed/qJVgMx/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/qJVgMx/'>HTML DataSource</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/qJVgMx/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> ## Draggable HTML markers
For a complete working sample of how to use CSS and HTML to create a marker on t
:::image type="content" source="./media/map-add-custom-html/draggable-html-marker.gif" alt-text="Screenshot showing a map of the United States with a yellow thumb tack being dragged to demonstrate a draggable HTML marker. "::: <!U-
-<iframe height='500' scrolling='no' title='Draggable HTML Marker' src='//codepen.io/azuremaps/embed/wQZoEV/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/wQZoEV/'>Draggable HTML Marker</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/wQZoEV/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> ## Add mouse events to HTML markers
For a complete working sample of how to add mouse and drag events to an HTML mar
<!- <br/>
-<iframe height='500' scrolling='no' title='Adding Mouse Events to HTML Markers' src='//codepen.io/azuremaps/embed/RqOKRz/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/RqOKRz/'>Adding Mouse Events to HTML Markers</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/RqOKRz/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> ## Next steps
azure-maps Map Add Drawing Toolbar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-drawing-toolbar.md
For a complete working sample that demonstrates how to add a drawing toolbar to
:::image type="content" source="./media/map-add-drawing-toolbar/add-drawing-toolbar.png" alt-text="Screenshot showing the drawing toolbar on a map."::: <!
-<iframe height="500" scrolling="no" title="Add drawing toolbar" src="//codepen.io/azuremaps/embed/ZEzLeRg/?height=265&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/ZEzLeRg/'>Add drawing toolbar</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/ZEzLeRg/?height=265&theme-id=0&default-tab=js,result&editable=true]
> ## Limit displayed toolbar options
The following screenshot shows a sample of an instance of the drawing manager th
:::image type="content" source="./media/map-add-drawing-toolbar/limit-displayed-toolbar-options.png" alt-text="Screenshot that demonstrates an instance of the drawing manager that displays the toolbar with just a polygon drawing tool on the map."::: <!
-<iframe height="500" scrolling="no" title="Add a polygon drawing tool" src="//codepen.io/azuremaps/embed/OJLWWMy/?height=265&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/OJLWWMy/'>Add a polygon drawing tool</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/OJLWWMy/?height=265&theme-id=0&default-tab=js,result&editable=true]
> ## Change drawing rendering style
For a complete working sample that demonstrates how to customize the rendering o
:::image type="content" source="./media/map-add-drawing-toolbar/change-drawing-rendering-style.png" alt-text="Screenshot showing different drawing shaped rendered on a map."::: <!
-<iframe height="500" scrolling="no" title="Change drawing rendering style" src="//codepen.io/azuremaps/embed/OJLWpyj/?height=265&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/OJLWpyj/'>Change drawing rendering style</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/OJLWpyj/?height=265&theme-id=0&default-tab=js,result&editable=true]
> > [!NOTE]
azure-maps Map Add Heat Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer.md
The [Simple Heat Map Layer] sample demonstrates how to create a simple heat map
:::image type="content" source="./media/map-add-heat-map-layer/add-a-heat-map-layer.png" alt-text="Screenshot showing a map displaying a heat map."::: <!
-<iframe height='500' scrolling='no' title='Simple Heat Map Layer' src='//codepen.io/azuremaps/embed/gQqdQB/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/gQqdQB/'>Simple Heat Map Layer</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/gQqdQB/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
> ## Customize the heat map layer
The [Heat Map Layer Options] sample shows how the different options of the heat
:::image type="content" source="./media/map-add-heat-map-layer/heat-map-layer-options.png" alt-text="Screenshot showing a map displaying a heat map, and a panel with editable settings that show how the different options of the heat map layer affect rendering."::: <!
-<iframe height='700' scrolling='no' title='Heat Map Layer Options' src='//codepen.io/azuremaps/embed/WYPaXr/?height=700&theme-id=0&default-tab=result' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/WYPaXr/'>Heat Map Layer Options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/WYPaXr/?height=700&theme-id=0&default-tab=result]
> ## Consistent zoomable heat map
The [Consistent zoomable Heat Map] sample shows how to create a heat map where t
:::image type="content" source="./media/map-add-heat-map-layer/consistent-zoomable-heat-map.png" alt-text="Screenshot showing a map displaying a heat map that uses a zoom expression that scales the radius for each zoom level."::: <!
-<iframe height="500" scrolling="no" title="Consistent zoomable heat map" src="//codepen.io/azuremaps/embed/OGyMZr/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/OGyMZr/'>Consistent zoomable heat map</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/OGyMZr/?height=500&theme-id=0&default-tab=js,result&editable=true]
> The `zoom` expression can only be used in `step` and `interpolate` expressions. The following expression can be used to approximate a radius in meters. This expression uses a placeholder `radiusMeters`, which you should replace with your desired radius. This expression calculates the approximate pixel radius for a zoom level at the equator for zoom levels 0 and 24, and uses an `exponential interpolation` expression to scale between these values the same way the tiling system in the map works.
azure-maps Map Add Image Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-image-layer.md
For a fully functional sample that shows how to overlay an image of a map of New
:::image type="content" source="./media/map-add-image-layer/simple-image-layer.png" alt-text="A screenshot showing a map with an image of a map of Newark New Jersey from 1922 as an Image layer."::: <!--
-<iframe height='500' scrolling='no' title='Simple Image Layer' src='//codepen.io/azuremaps/embed/eQodRo/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/eQodRo/'>Simple Image Layer</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/eQodRo/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> ## Import a KML file as ground overlay
For a fully functional sample that shows how to use a KML Ground Overlay as Imag
:::image type="content" source="./media/map-add-image-layer/kml-ground-overlay-as-image-layer.png" alt-text="A screenshot showing a map with a KML Ground Overlay appearing as Image Layer."::: <!--
-<iframe height='500' scrolling='no' title='KML Ground Overlay as Image Layer' src='//codepen.io/azuremaps/embed/EOJgpj/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/EOJgpj/'>KML Ground Overlay as Image Layer</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/EOJgpj/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> > [!TIP]
The image layer has many styling options. For a fully functional sample that sho
:::image type="content" source="./media/map-add-image-layer/image-layer-options.png" alt-text="A screenshot showing a map with a panel that has the different options of the image layer that affect rendering. In this sample, you can change styling options and see the effect it has on the map."::: <!--
-<iframe height='700' scrolling='no' title='Image Layer Options' src='//codepen.io/azuremaps/embed/RqOGzx/?height=700&theme-id=0&default-tab=result' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/RqOGzx/'>Image Layer Options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/RqOGzx/?height=700&theme-id=0&default-tab=result]
--> ## Next steps
azure-maps Map Add Line Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-line-layer.md
The following screenshot shows a sample of the above functionality.
:::image type="content" source="./media/map-add-line-layer/add-line-layer.png"alt-text="A screenshot showing a line layer on an Azure Maps map."::: <!--
-<iframe height='500' scrolling='no' title='Add a line to a map' src='//codepen.io/azuremaps/embed/qomaKv/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/qomaKv/'>Add a line to a map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/qomaKv/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> Line layers can be styled using [LineLayerOptions] and [Use data-driven style expressions].
This code creates a map that appears as follows:
:::image type="content" source="./media/map-add-line-layer/add-symbols-along-a-line.png"alt-text="A screenshot showing a line layer on an Azure Maps map with arrow symbols along the line."::: <!--
-<iframe height="500" scrolling="no" title="Show arrow along line" src="//codepen.io/azuremaps/embed/drBJwX/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/drBJwX/'>Show arrow along line</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/drBJwX/?height=500&theme-id=0&default-tab=js,result&editable=true]
--> > [!TIP]
For a fully functional sample that shows how to apply a stroke gradient to a lin
:::image type="content" source="./media/map-add-line-layer/line-with-stroke-gradient.png"alt-text="A screenshot showing a line with a stroke gradient on the map."::: <!--
-<iframe height="500" scrolling="no" title="Line with Stroke Gradient" src="//codepen.io/azuremaps/embed/wZwWJZ/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/wZwWJZ/'>Line with Stroke Gradient</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/wZwWJZ/?height=500&theme-id=0&default-tab=js,result&editable=true]
--> ## Customize a line layer
The Line layer has several styling options. For a fully functional sample that i
:::image type="content" source="./media/map-add-line-layer/line-layer-options.png"alt-text="A screenshot showing the Line Layer Options sample that shows how the different options of the line layer affect rendering."::: <!--
-<iframe height='700' scrolling='no' title='Line Layer Options' src='//codepen.io/azuremaps/embed/GwLrgb/?height=700&theme-id=0&default-tab=result' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/GwLrgb/'>Line Layer Options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/GwLrgb/?height=700&theme-id=0&default-tab=result]
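As a hedged illustration of a few of those styling options (not code taken from the article), a styled line layer might be created like this, assuming an existing `map` instance:

```javascript
// Sketch: render a LineString with a styled line layer.
var datasource = new atlas.source.DataSource();
map.sources.add(datasource);

datasource.add(new atlas.data.LineString([
    [-73.972340, 40.743270],
    [-74.004420, 40.756800]
]));

// strokeColor and strokeWidth are part of LineLayerOptions.
map.layers.add(new atlas.layer.LineLayer(datasource, null, {
    strokeColor: 'blue',
    strokeWidth: 4
}));
```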
--> ## Next steps
azure-maps Map Add Pin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-pin.md
function InitMap()
:::image type="content" source="./media/map-add-pin/add-symbol-layer.png" alt-text="A screenshot of map with a pin added using the symbol layer."::: <!-
-<iframe height='500' scrolling='no' title='Switch pin location' src='//codepen.io/azuremaps/embed/ZqJjRP/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/ZqJjRP/'>Switch pin location</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/ZqJjRP/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
-> > [!TIP]
function InitMap()
:::image type="content" source="./media/map-add-pin/add-custom-icon-to-symbol-layer.png" alt-text="A screenshot of map with a pin added using the symbol layer with a custom icon."::: <!-
-<iframe height='500' scrolling='no' title='Custom Symbol Image Icon' src='//codepen.io/azuremaps/embed/WYWRWZ/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/WYWRWZ/'>Custom Symbol Image Icon</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/WYWRWZ/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
-> > [!TIP]
The symbol layer has many styling options available. The [Symbol Layer Options]
:::image type="content" source="./media/map-add-pin/symbol-layer-options.png" alt-text="A screenshot of map with a panel on the left side of the map with the various symbol options that can be interactively set."::: <!-
-<iframe height='700' scrolling='no' title='Symbol Layer Options' src='//codepen.io/azuremaps/embed/PxVXje/?height=700&theme-id=0&default-tab=result' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/PxVXje/'>Symbol Layer Options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/PxVXje/?height=700&theme-id=0&default-tab=result]
-> > [!TIP]
azure-maps Map Add Popup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-popup.md
For a fully functional sample that shows how to create one popup and reuse it ra
:::image type="content" source="./media/map-add-popup/reusing-popup-with-multiple-pins.png"alt-text="A screenshot of map with three blue pins."::: <!--
-<iframe height='500' scrolling='no' title='Reusing Popup with Multiple Pins' src='//codepen.io/azuremaps/embed/rQbjvK/?height=500&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/rQbjvK/'>Reusing Popup with Multiple Pins</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/rQbjvK/?height=500&theme-id=0&default-tab=result&embed-version=2&editable=true]
--> ## Customizing a popup
For a fully functional sample that shows how to customize the look of a popup, s
:::image type="content" source="./media/map-add-popup/customize-popup.png"alt-text="A screenshot of map with a custom popup in the center of the map with the caption 'hello world'."::: <!--
-<iframe height="500" scrolling="no" title="Customized Popup" src="//codepen.io/azuremaps/embed/ymKgdg/?height=500&theme-id=0&default-tab=result" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/ymKgdg/'>Customized Popup</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/ymKgdg/?height=500&theme-id=0&default-tab=result]
--> ## Add popup templates to the map
function InitMap()
:::image type="content" source="./media/map-add-popup/points-without-defined-template.png"alt-text="A screenshot of map with six blue dots."::: <!--
-<iframe height='500' scrolling='no' title='PopupTemplates' src='//codepen.io/azuremaps/embed/dyovrzL/?height=500&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/dyovrzL/'>PopupTemplates</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/dyovrzL/?height=500&theme-id=0&default-tab=result&embed-version=2&editable=true]
--> ## Reuse popup template
For a fully functional sample that shows how to reuse a single popup template wi
:::image type="content" source="./media/map-add-popup/reuse-popup-template.png"alt-text="A screenshot of a map showing Seattle with three blue pins to demonstrating how to reuse popup templates."::: <!--
-<iframe height='500' scrolling='no' title='ReusePopupTemplate' src='//codepen.io/azuremaps/embed/WNvjxGw/?height=500&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/WNvjxGw/'>ReusePopupTemplate</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/WNvjxGw/?height=500&theme-id=0&default-tab=result&embed-version=2&editable=true]
--> ## Popup events
For a fully functional sample that shows how to add events to popups, see [Popup
:::image type="content" source="./media/map-add-popup/popup-events.png" alt-text="A screenshot of a map of the world with a popup in the center and a list of events in the upper left that are highlighted when the user opens, closes, or drags the popup."::: <!--
-<iframe height="500" scrolling="no" title="Popup events" src="//codepen.io/azuremaps/embed/BXrpvB/?height=500&theme-id=0&default-tab=result" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/BXrpvB/'>Popup events</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/BXrpvB/?height=500&theme-id=0&default-tab=result]
--> ## Next steps
azure-maps Map Add Shape https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-shape.md
function InitMap()
:::image type="content" source="./media/map-add-shape/polygon-layer.png" alt-text="A screenshot of map of New York City demonstrating the polygon layer that is covering Central Park with fill Color set to red and fill Opacity set to 0.7."::: <!--
-<iframe height='500' scrolling='no' title='Add a polygon to a map ' src='//codepen.io/azuremaps/embed/yKbOvZ/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/yKbOvZ/'>Add a polygon to a map </a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/yKbOvZ/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
-> ## Use a polygon and line layer together
function InitMap()
:::image type="content" source="./media/map-add-shape/polygon-line-layer.png" alt-text="A screenshot of a map of New York City demonstrating a mostly transparent polygon layer covering all of Central Park, bordered with a red line."::: <!
-<iframe height='500' scrolling='no' title='Polygon and line layer to add polygon' src='//codepen.io/azuremaps/embed/aRyEPy/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/aRyEPy/'>Polygon and line layer to add polygon</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/aRyEPy/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
> ## Fill a polygon with a pattern
For a fully functional sample that shows how to use an image template as a fill
:::image type="content" source="./media/map-add-shape/fill-polygon-with-built-in-icon-template.png" alt-text="A screenshot of a map of the world with red dots forming a triangle in the center of the map."::: <!
-<iframe height="500" scrolling="no" title="Polygon fill pattern" src="//codepen.io/azuremaps/embed/JzQpYX/?height=500&theme-id=0&default-tab=js,result" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/JzQpYX/'>Polygon fill pattern</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/JzQpYX/?height=500&theme-id=0&default-tab=js,result]
> > [!TIP]
The Polygon layer only has a few styling options. See the [Polygon Layer Options
:::image type="content" source="./media/map-add-shape/polygon-layer-options.png" alt-text="A screenshot of the Polygon Layer Options tool."::: <!
-<iframe height='700' scrolling='no' title='LXvxpg' src='//codepen.io/azuremaps/embed/LXvxpg/?height=700&theme-id=0&default-tab=result' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/LXvxpg/'>LXvxpg</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/LXvxpg/?height=700&theme-id=0&default-tab=result]
> <a id="addACircle"></a>
function InitMap()
:::image type="content" source="./media/map-add-shape/add-circle-to-map.png" alt-text="A screenshot of a map showing a partially transparent green circle in New York City. This demonstrates adding a circle to a map."::: <!
-<iframe height='500' scrolling='no' title='Add a circle to a map' src='//codepen.io/azuremaps/embed/PRmzJX/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/PRmzJX/'>Add a circle to a map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/PRmzJX/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
-> ## Make a geometry easy to update
The [Make a geometry easy to update] sample shows how to wrap a circle GeoJSON o
:::image type="content" source="./media/map-add-shape/easy-to-update-geometry.png" alt-text="A screenshot of a map showing a red circle in New York City with a slider bar titled Circle Radius and as you slide the bar to the right or left, the value of the radius changes and the circle size adjusts automatically on the map."::: <!
-<iframe height='500' scrolling='no' title='Update shape properties' src='//codepen.io/azuremaps/embed/ZqMeQY/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/ZqMeQY/'>Update shape properties</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/ZqMeQY/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
-> ## Next steps
azure-maps Map Add Snap Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-snap-grid.md
The [Use a snapping grid] sample snaps an HTML marker to a grid when it's dragge
:::image type="content" source="./media/map-add-snap-grid/use-snapping-grid.png"alt-text="A screenshot that shows the snap grid on map."::: <!--
-<iframe height="500" scrolling="no" title="Use a snapping grid" src="https://codepen.io/azuremaps/embed/rNmzvXO?default-tab=js%2Cresult" frameborder="no" loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href="https://codepen.io/azuremaps/pen/rNmzvXO">
- Use a snapping grid</a> by Azure Maps (<a href="https://codepen.io/azuremaps">@azuremaps</a>)
- on <a href="https://codepen.io">CodePen</a>.
-</iframe>
+> [!VIDEO https://codepen.io/azuremaps/embed/rNmzvXO?default-tab=js%2Cresult]
> ## Snap grid options
The [Snap grid options] sample shows the different customization options availab
:::image type="content" source="./media/map-add-snap-grid/snap-grid-options.png"alt-text="A screenshot of map with snap grid enabled and an options panel on the left where you can set various options and see the results in the map."::: <!--
-<iframe height="700" scrolling="no" title="Snap grid options" src="https://codepen.io/azuremaps/embed/RwVZJry?default-tab=result" frameborder="no" loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href="https://codepen.io/azuremaps/pen/RwVZJry">
- Snap grid options</a> by Azure Maps (<a href="https://codepen.io/azuremaps">@azuremaps</a>)
- on <a href="https://codepen.io">CodePen</a>.
-</iframe>
+> [!VIDEO https://codepen.io/azuremaps/embed/RwVZJry?default-tab=result]
> ## Next steps
azure-maps Map Add Tile Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-tile-layer.md
For a fully functional sample that shows how to create a tile layer that points
:::image type="content" source="./media/map-add-tile-layer/tile-layer.png"alt-text="A screenshot of map with a tile layer that points to a set of tiles using the x, y, zoom tiling system. The source of this tile layer is the OpenSeaMap project."::: <!--
-<iframe height='500' scrolling='no' title='Tile Layer using X, Y, and Z' src='//codepen.io/azuremaps/embed/BGEQjG/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/BGEQjG/'>Tile Layer using X, Y, and Z</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/BGEQjG/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
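A minimal sketch of such a tile layer follows (assuming an existing `map`; the OpenSeaMap tile URL is illustrative, based on the screenshot description above):

```javascript
// Sketch: a tile layer whose tileUrl uses {x}, {y}, {z} placeholders.
// Inserting it before the 'labels' layer keeps map labels rendered on top.
map.layers.add(new atlas.layer.TileLayer({
    tileUrl: 'https://tiles.openseamap.org/seamark/{z}/{x}/{y}.png',
    opacity: 0.8,
    tileSize: 256
}), 'labels');
```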
-> ## Add an OGC web-mapping service (WMS)
The following screenshot shows the [WMS Tile Layer] sample that overlays a web-m
:::image type="content" source="./media/map-add-tile-layer/wms-tile-layer.png"alt-text="A screenshot of a world map with a tile layer that points to a Web Mapping Service (WMS)."::: <!--
-<iframe height="500" scrolling="no" title="WMS Tile Layer" src="https://codepen.io/azuremaps/embed/BapjZqr?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true" frameborder="no" loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/BapjZqr'>WMS Tile Layer</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO https://codepen.io/azuremaps/embed/BapjZqr?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> ## Add an OGC web-mapping tile service (WMTS)
The following screenshot shows the WMTS Tile Layer sample overlaying a web-mappi
:::image type="content" source="./media/map-add-tile-layer/wmts-tile-layer.png"alt-text="A screenshot of a map with a tile layer that points to a Web Mapping Tile Service (WMTS) overlay."::: <!--
-<iframe height="500" scrolling="no" title="WMTS tile layer" src="https://codepen.io/azuremaps/embed/BapjZVY?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true" frameborder="no" loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/BapjZVY'>WMTS tile layer</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO https://codepen.io/azuremaps/embed/BapjZVY?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> ## Customize a tile layer
The tile layer class has many styling options. The [Tile Layer Options] sample i
:::image type="content" source="./media/map-add-tile-layer/tile-layer-options.png"alt-text="A screenshot of Tile Layer Options sample."::: <!--
-<iframe height='700' scrolling='no' title='Tile Layer Options' src='//codepen.io/azuremaps/embed/xQeRWX/?height=700&theme-id=0&default-tab=result' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/xQeRWX/'>Tile Layer Options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/xQeRWX/?height=700&theme-id=0&default-tab=result]
--> ## Next steps
azure-maps Map Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-create.md
You can also load multiple maps on the same page, for sample code that demonstra
:::image type="content" source="./media/map-create/multiple-maps.png"alt-text="A screenshot that shows the snap grid on map."::: <!-
-<iframe height="500" scrolling="no" title="Basic map load" src="//codepen.io/azuremaps/embed/rXdBXx/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/rXdBXx/'>Basic map load</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/rXdBXx/?height=500&theme-id=0&default-tab=js,result&editable=true]
-> > [!TIP] > You can use the same or different authentication and language settings when using multiple maps on the same page.
renderWorldCopies: false
``` <!-
-<iframe height="500" scrolling="no" title="renderWorldCopies = false" src="//codepen.io/azuremaps/embed/eqMYpZ/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/eqMYpZ/'>renderWorldCopies = false</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/eqMYpZ/?height=500&theme-id=0&default-tab=js,result&editable=true]
-> ## Map options
map.setCamera({
Map properties, such as center and zoom level, are part of the [CameraOptions] properties. <!
-<iframe height='500' scrolling='no' title='Create a map via CameraOptions' src='//codepen.io/azuremaps/embed/qxKBMN/?height=543&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/qxKBMN/'>Create a map via `CameraOptions` </a>by Azure Location Based Services (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/qxKBMN/?height=543&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
> <a id="setCameraBoundsOptions"></a>
map.setCamera({
In the following code, a [Map object] is constructed via `new atlas.Map()`. Map properties such as `CameraBoundsOptions` can be defined via [setCamera] function of the Map class. Bounds and padding properties are set using `setCamera`. <!-
-<iframe height='500' scrolling='no' title='Create a map via CameraBoundsOptions' src='//codepen.io/azuremaps/embed/ZrRbPg/?height=543&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/ZrRbPg/'>Create a map via `CameraBoundsOptions` </a>by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/ZrRbPg/?height=543&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
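As a hedged sketch (not code from the article), both forms of `setCamera` might look like this, assuming an existing `map` instance:

```javascript
// Sketch: CameraOptions form - set center and zoom.
map.setCamera({
    center: [-122.33, 47.6],
    zoom: 12
});

// Sketch: CameraBoundsOptions form - set bounds [west, south, east, north] and padding.
map.setCamera({
    bounds: [-122.4, 47.5, -122.25, 47.7],
    padding: 20
});
```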
--> ### Animate map view
In the following code, the first code block creates a map and sets the enter and
:::image type="content" source="./media/map-create/animate-maps.png"alt-text="A screenshot showing a map with a button labeled Animate Maps that when pressed, causes the map to zoom in or out."::: <!
-<iframe height='500' scrolling='no' title='Animate Map View' src='//codepen.io/azuremaps/embed/WayvbO/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/WayvbO/'>Animate Map View</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/WayvbO/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
> ## Request transforms
azure-maps Map Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-events.md
The [Map Events] sample highlights the name of the events that are firing as you
:::image type="content" source="./media/map-events/map-events.png"alt-text="A screenshot showing a map with a list of map events that are highlighted anytime your actions on the map trigger that event."::: <!--
-<iframe height='600' scrolling='no' title='Interacting with the map – mouse events' src='//codepen.io/azuremaps/embed/bLZEWd/?height=600&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/bLZEWd/'>Interact with the map – mouse events</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/bLZEWd/?height=600&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> ## Interact with map layers
The [Layer Events] sample highlights the name of the events that are firing as y
:::image type="content" source="./media/map-events/layer-events.png"alt-text="A screenshot showing a map with a list of layer events that are highlighted anytime you interact with the Symbol Layer."::: <!--
-<iframe height='600' scrolling='no' title='Interacting with the map – Layer Events' src='//codepen.io/azuremaps/embed/bQRRPE/?height=600&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/bQRRPE/'>Interacting with the map – Layer Events</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/bQRRPE/?height=600&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> ## Interact with HTML Marker
The [HTML marker layer events] sample highlights the name of the events that are
:::image type="content" source="./media/map-events/html-marker-layer-events.png"alt-text="A screenshot showing a map with a list of HTML marker layer events that are highlighted anytime your actions on the map trigger that event."::: <!--
-<iframe height='500' scrolling='no' title='Interacting with the map - HTML Marker events' src='//codepen.io/azuremaps/embed/VVzKJY/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/VVzKJY/'>Interacting with the map - HTML Marker events</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/VVzKJY/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> The following table lists all supported map class events.
azure-maps Map Extruded Polygon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-extruded-polygon.md
function InitMap()
:::image type="content" source="./media/map-extruded-polygon/polygon-extrusion-layer.png"alt-text="A screenshot of a map showing New York City with a polygon extrusion layer covering central park with what looks like a rectangular red box. The maps angle is set to 45 degrees giving it a 3d appearance."::: <!
-<iframe height="500" scrolling="no" title="Extruded polygon" src="https://codepen.io/azuremaps/embed/wvvBpvE?height=265&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/wvvBpvE'>Extruded polygon</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO https://codepen.io/azuremaps/embed/wvvBpvE?height=265&theme-id=0&default-tab=js,result&editable=true]
> ## Add data driven polygons
The [Create a Choropleth Map] sample shows an extruded choropleth map of the Uni
:::image type="content" source="./media/map-extruded-polygon/choropleth-map.png"alt-text="A screenshot of a map showing a choropleth map rendered using the polygon extrusion layer."::: <!
-<iframe height="500" scrolling="no" title="Extruded choropleth map" src="https://codepen.io/azuremaps/embed/eYYYNox?height=265&theme-id=0&default-tab=result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/eYYYNox'>Extruded choropleth map</a> by Azure Maps(<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO https://codepen.io/azuremaps/embed/eYYYNox?height=265&theme-id=0&default-tab=result&editable=true]
> ## Add a circle to the map
function InitMap()
:::image type="content" source="./media/map-extruded-polygon/add-circle-to-map.png"alt-text="A screenshot of a map showing a green circle."::: <!
-<iframe height="500" scrolling="no" title="Drone airspace polygon" src="https://codepen.io/azuremaps/embed/zYYYrxo?height=265&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/zYYYrxo'>Drone airspace polygon</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO https://codepen.io/azuremaps/embed/zYYYrxo?height=265&theme-id=0&default-tab=js,result&editable=true]
> ## Customize a polygon extrusion layer
The Polygon Extrusion layer has several styling options. The [Polygon Extrusion
:::image type="content" source="./media/map-extruded-polygon/polygon-extrusion-layer-options.png"alt-text="A screenshot of the Azure Maps code sample that shows how the different options of the polygon extrusion layer affect rendering."::: <!
-<iframe height='700' scrolling='no' title='PoogBRJ' src='//codepen.io/azuremaps/embed/PoogBRJ/?height=700&theme-id=0&default-tab=result' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/PoogBRJ/'>PoogBRJ</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a></iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/PoogBRJ/?height=700&theme-id=0&default-tab=result]
> ## Next steps
azure-maps Map Get Information From Coordinate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-get-information-from-coordinate.md
document.body.onload = onload;
``` <!--
-<iframe height='500' scrolling='no' title='Get information from a coordinate (Service Module)' src='//codepen.io/azuremaps/embed/ejEYMZ/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/ejEYMZ/'>Get information from a coordinate (Service Module)</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/ejEYMZ/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
> In the previous code example, the first block constructs a map object and sets the authentication mechanism to use Azure Active Directory. For more information, see [Create a map].
document.body.onload = onload;
``` <!--
-<iframe height='500' scrolling='no' title='Get information from a coordinate' src='//codepen.io/azuremaps/embed/ddXzoB/?height=516&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/ddXzoB/'>Get information from a coordinate</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/ddXzoB/?height=516&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
> In the previous code example, the first block of code constructs a map object and sets the authentication mechanism to use Azure Active Directory. You can see [Create a map] for instructions.
azure-maps Map Get Shape Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-get-shape-data.md
The [Get drawn shapes from drawing manager] code sample allows you to draw a sha
:::image type="content" source="./media/map-get-shape-data/get-data-from-drawn-shape.png" alt-text="A screenshot of a map with a circle drawn around Seattle. Next to the map is the code used to create the circle."::: <!--
-<iframe height="686" title="Get shape data" src="//codepen.io/azuremaps/embed/xxKgBVz/?height=265&theme-id=0&default-tab=result" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">See the Pen <a href='https://codepen.io/azuremaps/pen/xxKgBVz/'>Get shape data</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/xxKgBVz/?height=265&theme-id=0&default-tab=result]
-> ## Next steps
azure-maps Map Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-route.md
document.body.onload = onload;
``` <!--
-<iframe height='500' scrolling='no' title='Show directions from A to B on a map (Service Module)' src='//codepen.io/azuremaps/embed/RBZbep/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/RBZbep/'>Show directions from A to B on a map (Service Module)</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/RBZbep/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
> In the previous code example, the first block constructs a map object and sets the authentication mechanism to use Azure Active Directory. You can see [Create a map] for instructions.
document.body.onload = onload;
``` <!--
-<iframe height='500' scrolling='no' title='Show directions from A to B on a map' src='//codepen.io/azuremaps/embed/zRyNmP/?height=469&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/zRyNmP/'>Show directions from A to B on a map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/zRyNmP/?height=469&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
> In the previous code example, the first block of code constructs a map object and sets the authentication mechanism to use Azure Active Directory. You can see [Create a map] for instructions.
azure-maps Map Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-search-location.md
document.body.onload = onload;
``` <!--
-<iframe height='500' scrolling='no' title='Show search results on a map (Service Module)' src='//codepen.io/azuremaps/embed/zLdYEB/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/zLdYEB/'>Show search results on a map (Service Module)</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/zLdYEB/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
> In the previous code example, the first block constructs a map object and sets the authentication mechanism to use Azure Active Directory. For more information, see [Create a map].
document.body.onload = onload;
``` <!--
-<iframe height='500' scrolling='no' title='Show search results on a map' src='//codepen.io/azuremaps/embed/KQbaeM/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/KQbaeM/'>Show search results on a map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/KQbaeM/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
> In the previous code example, the first block of code constructs a map object. It sets the authentication mechanism to use Azure Active Directory. For more information, see [Create a map].
azure-maps Map Show Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-show-traffic.md
The [Traffic Overlay] sample demonstrates how to display the traffic overlay on
:::image type="content" source="./media/map-show-traffic/traffic-overlay.png"alt-text="A screenshot of map with the traffic overlay, showing current traffic."::: <!--
-<iframe height='500' scrolling='no' title='Show traffic on a map' src='//codepen.io/azuremaps/embed/WMLRPw/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/WMLRPw/'>Show traffic on a map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/WMLRPw/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> ## Traffic overlay options
The [Traffic Overlay Options] tool lets you switch between the different traffic
:::image type="content" source="./media/map-show-traffic/traffic-overlay-options.png"alt-text="A screenshot of map showing the traffic overlay options."::: <!--
-<iframe height="700" scrolling="no" title="Traffic overlay options" src="//codepen.io/azuremaps/embed/RwbPqRY/?height=700&theme-id=0&default-tab=result" frameborder='no' loading="lazy" loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/RwbPqRY/'>Traffic overlay options</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/RwbPqRY/?height=700&theme-id=0&default-tab=result]
--> ## Add traffic controls
The [Traffic controls] sample is a fully functional map that shows how to displa
:::image type="content" source="./media/map-show-traffic/add-traffic-controls.png"alt-text="A screenshot of map with the traffic display button, showing current traffic."::: <!--
-<iframe height="500" scrolling="no" title="Traffic controls" src="https://codepen.io/azuremaps/embed/ZEWaeLJ?height500&theme-id=0&default-tab=js,result&embed-version=2&editable=true" frameborder='no' loading="lazy" loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/ZEWaeLJ'>Traffic controls</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO https://codepen.io/azuremaps/embed/ZEWaeLJ?height500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> ## Next steps
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
This document contains information about new features and other changes to the M
- Dynamic pixel ratio fixed in underlying maplibre-gl dependency. -- Fixed an issue where `sortKey`, `radialOffset`, `variableAnchor` is not applied when used in `SymbolLayer` options.
+- Fixed an issue where `sortKey`, `radialOffset`, `variableAnchor` isn't applied when used in `SymbolLayer` options.
#### Installation (3.0.0-preview.10)
This update is the first preview of the upcoming 3.0.0 release. The underlying [
## v2 (latest)
+### [2.3.2] (August 11, 2023)
+
+#### Bug fixes (2.3.2)
+
+- Fixed an issue where accessibility-related duplicated DOM elements may result when `map.setServiceOptions` is called.
+
+- Fixed zoom control to take into account the `maxBounds` [CameraOptions].
+
+#### Other changes (2.3.2)
+
+- Added the `mvc` parameter to encompass the map control version in both definitions and style requests.
+ ### [2.3.1] (June 27, 2023) #### Bug fixes (2.3.1) -- fix `ImageSpriteManager` icon images may get removed during style change
+- Fix `ImageSpriteManager` icon images may get removed during style change
#### Other changes (2.3.1) -- security: insecure-randomness fix in UUID generation.
+- Security: insecure-randomness fix in UUID generation.
### [2.3.0] (June 2, 2023)
This update is the first preview of the upcoming 3.0.0 release. The underlying [
#### Bug fixes (2.3.0) -- Fixed an exception that occurred while updating the property of a layout that that no longer exists.
+- Fixed an exception that occurred while updating the property of a layout that no longer exists.
- Fixed an issue where BubbleLayer's accessible indicators didn't update when the data source was modified.
Stay up to date on Azure Maps:
[3.0.0-preview.3]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.3 [3.0.0-preview.2]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.2 [3.0.0-preview.1]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.1
+[2.3.2]: https://www.npmjs.com/package/azure-maps-control/v/2.3.2
[2.3.1]: https://www.npmjs.com/package/azure-maps-control/v/2.3.1 [2.3.0]: https://www.npmjs.com/package/azure-maps-control/v/2.3.0 [2.2.7]: https://www.npmjs.com/package/azure-maps-control/v/2.2.7
Stay up to date on Azure Maps:
[adal-angular]: https://github.com/AzureAD/azure-activedirectory-library-for-js [@azure/msal-browser]: https://github.com/AzureAD/microsoft-authentication-library-for-js [migration guide]: ../active-directory/develop/msal-compare-msal-js-and-adal-js.md
+[CameraOptions]: /javascript/api/azure-maps-control/atlas.cameraoptions?view=azure-maps-typescript-latest
[CameraBoundsOptions]: /javascript/api/azure-maps-control/atlas.cameraboundsoptions?view=azure-maps-typescript-latest [Map.dispose()]: /javascript/api/azure-maps-control/atlas.map?view=azure-maps-typescript-latest#azure-maps-control-atlas-map-dispose [Map.setCamera(options)]: /javascript/api/azure-maps-control/atlas.map?view=azure-maps-typescript-latest#azure-maps-control-atlas-map-setcamera
azure-maps Set Drawing Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-drawing-options.md
The following image is an example of drawing mode of the `DrawingManager`. Selec
:::image type="content" source="./media/set-drawing-options/drawing-mode.gif"alt-text="A screenshot of a map showing central park in New York City where the drawing manager is demonstrated by drawing line."::: <!--
-<iframe height="500" scrolling="no" title="Draw a polygon" src="//codepen.io/azuremaps/embed/YzKVKRa/?height=265&theme-id=0&default-tab=js,result&editable=true" frameborder="no" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/YzKVKRa/'>Draw a polygon</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/YzKVKRa/?height=265&theme-id=0&default-tab=js,result&editable=true]
--> ### Set the interaction type
drawingManager = new atlas.drawing.DrawingManager(map,{
<br/>
-<iframe height="500" scrolling="no" title="Free-hand drawing" src="//codepen.io/azuremaps/embed/ZEzKoaj/?height=265&theme-id=0&default-tab=js,result&editable=true" frameborder="no" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/ZEzKoaj/'>Free-hand drawing</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/ZEzKoaj/?height=265&theme-id=0&default-tab=js,result&editable=true]
> ### Customizing drawing options
The [Drawing manager options] can be used to test out customization of all optio
:::image type="content" source="./media/set-drawing-options/drawing-manager-options.png"alt-text="A screenshot of a map of Seattle with a panel on the left showing the drawing manager options that can be selected to see the effects they make to the map."::: <!
-<iframe height="685" title="Customize drawing manager" src="//codepen.io/azuremaps/embed/LYPyrxR/?height=600&theme-id=0&default-tab=result" frameborder="no" allowtransparency="true" allowfullscreen="true">See the Pen <a href='https://codepen.io/azuremaps/pen/LYPyrxR/'>Get shape data</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/LYPyrxR/?height=600&theme-id=0&default-tab=result]
> ### Put a shape into edit mode
azure-maps Spatial Io Add Ogc Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-add-ogc-map-layer.md
The [OGC map layer] sample shows how to overlay an OGC map layer on the map. For
:::image type="content" source="./media/spatial-io-add-ogc-map-layer/ogc-map-layer.png"alt-text="A screenshot that shows the snap grid on map."::: <!-
-<iframe height='700' scrolling='no' title='OGC Map layer example' src='//codepen.io/azuremaps/embed/xxGLZWB/?height=700&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/xxGLZWB/'>OGC Map layer example</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/xxGLZWB/?height=700&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
-> ## OGC map layer options
The [OGC map layer options] sample demonstrates the different OGC map layer opti
:::image type="content" source="./media/spatial-io-add-ogc-map-layer/ogc-map-layer-options.png"alt-text="A screenshot that shows a map along with the OGC map layer options."::: <!-
-<iframe height='700' scrolling='no' title='OGC map layer options' src='//codepen.io/azuremaps/embed/abOyEVQ/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/abOyEVQ/'>OGC map layer options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/abOyEVQ/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true]
-> ## OGC Web Map Service explorer
The [OGC Web Map Service explorer] sample overlays imagery from the Web Map Serv
:::image type="content" source="./media/spatial-io-add-ogc-map-layer/ogc-web-map-service-explorer.png"alt-text="A screenshot that shows a map with a WMTS layer that comes from the world geology survey. Left of the map is a drop-down list showing the OGC services that can be selected."::: <!-
-<iframe height='750' scrolling='no' title='OGC Web Map Service explorer' src='//codepen.io/azuremaps/embed/YzXxYdX/?height=750&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/YzXxYdX/'>OGC Web Map Service explorer</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/YzXxYdX/?height=750&theme-id=0&default-tab=result&embed-version=2&editable=true]
-> You may also specify the map settings to use a proxy service. The proxy service lets you load resources that are hosted on domains that don't have CORS enabled.
azure-maps Spatial Io Add Simple Data Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-add-simple-data-layer.md
This sample code renders the point feature using the simple data layer, and appe
> &emsp; "coordinates": [0, 0] <!
-<iframe height="500" scrolling="no" title="Use the Simple data layer" src="//codepen.io/azuremaps/embed/zYGzpQV/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/zYGzpQV/'>Use the simple data layer</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/zYGzpQV/?height=500&theme-id=0&default-tab=js,result&editable=true]
> The real power of the simple data layer comes when:
For example, when parsing XML data feeds, you may not know the exact styles and g
:::image type="content" source="./media/spatial-io-add-simple-data-layer/simple-data-layer-options.png"alt-text="A screenshot of map with a panel on the left showing the different simple data layer options."::: <!
-<iframe height="700" scrolling="no" title="Simple data layer options" src="//codepen.io/azuremaps/embed/gOpRXgy/?height=700&theme-id=0&default-tab=result" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/gOpRXgy/'>Simple data layer options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/gOpRXgy/?height=700&theme-id=0&default-tab=result]
> > [!NOTE]
azure-maps Spatial Io Connect Wfs Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-connect-wfs-service.md
The [Simple WFS example] sample shows how to easily query a Web Feature Service
:::image type="content" source="./media/spatial-io-connect-wfs-service/simple-wfs-example.png"alt-text="A screenshot that shows the results of a WFS overlay on a map."::: <!--
-<iframe height='700' scrolling='no' title='Simple WFS example' src='//codepen.io/azuremaps/embed/MWwvVYY/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/MWwvVYY/'>Simple WFS example</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/MWwvVYY/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
-> ## Supported filters
The [WFS filter example] sample demonstrates the use of different filters with t
:::image type="content" source="./media/spatial-io-connect-wfs-service/wfs-filter-example.png"alt-text="A screenshot that shows The WFS filter sample that demonstrates the use of different filters with the WFS client."::: <!--
-<iframe height='500' scrolling='no' title= 'WFS filter examples' src='//codepen.io/azuremaps/embed/NWqvYrV/?height=500&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/NWqvYrV/'>WFS filter examples</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/NWqvYrV/?height=500&theme-id=0&default-tab=result&embed-version=2&editable=true]
--> ## WFS service explorer
The [WFS service explorer] sample is a simple tool for exploring WFS services on
:::image type="content" source="./media/spatial-io-connect-wfs-service/wfs-service-explorer.png"alt-text="A screenshot that shows a simple tool for exploring WFS services on Azure Maps."::: <!--
-<iframe height='700' scrolling='no' title= 'WFS service explorer' src='//codepen.io/azuremaps/embed/bGdrvmG/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/bGdrvmG/'>WFS service explorer</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/bGdrvmG/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true]
--> To access WFS services hosted on non-CORS enabled endpoints, a CORS enabled proxy service can be passed into the `proxyService` option of the WFS client as shown in the following example.
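The article's own example isn't reproduced in this digest; the following is a minimal sketch, where the `atlas.io.ogc.WfsClient` class name and both URLs are assumptions rather than values from the article:

```javascript
// Sketch: pass a CORS-enabled proxy service to the WFS client.
var client = new atlas.io.ogc.WfsClient({
    url: 'https://example.com/wfs',                      // hypothetical WFS endpoint
    proxyService: 'https://example.com/cors-proxy?url='  // hypothetical proxy endpoint
});
```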
azure-maps Spatial Io Read Write Spatial Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-read-write-spatial-data.md
The [Load spatial data] sample shows how to read a spatial data set, and renders
:::image type="content" source="./media/spatial-io-read-write-spatial-data/load-spatial-data.png"alt-text="A screenshot that shows the snap grid on map."::: <!--
-<iframe height='500' scrolling='no' title='Load Spatial Data Simple' src='//codepen.io/azuremaps/embed/yLNXrZx/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/yLNXrZx/'>Load Spatial Data Simple</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/yLNXrZx/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
> The next code demo shows how to read and load KML, or KMZ, to the map. KML can contain ground overlays, which come in the form of an `ImageLayer` or `OgcMapLayer`. These overlays must be added to the map separately from the features. Additionally, if the data set has custom icons, those icons need to be loaded into the map's resources before the features are loaded.
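A minimal sketch of that flow follows (assuming an existing `map` and `datasource`; the `icons` and `groundOverlays` properties on the parsed result are assumptions based on the description above):

```javascript
// Sketch: read a KML/KMZ file, load custom icons first, add ground overlays
// as layers, then add the remaining features to a data source.
atlas.io.read('/data/sample.kml').then(async r => {   // hypothetical file path
    if (r) {
        // Assumed: r.icons maps icon names to image URLs that must be loaded
        // into the map's image sprite before the features are added.
        if (r.icons) {
            await Promise.all(Object.keys(r.icons).map(
                key => map.imageSprite.add(key, r.icons[key])));
        }

        // Assumed: ground overlays are layers (ImageLayer/OgcMapLayer) added
        // to the map separately from the features.
        if (r.groundOverlays && r.groundOverlays.length > 0) {
            map.layers.add(r.groundOverlays);
        }

        // Add the parsed features to the data source rendered on the map.
        datasource.add(r);
    }
});
```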
The [Load KML onto map] sample shows how to load KML or KMZ files onto the map.
:::image type="content" source="./media/spatial-io-read-write-spatial-data/load-kml-onto-map.png"alt-text="A screenshot that shows a map with a KML ground overlay."::: <!--
-<iframe height='500' scrolling='no' title='Load KML Onto Map' src='//codepen.io/azuremaps/embed/XWbgwxX/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/XWbgwxX/'>Load KML Onto Map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/XWbgwxX/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
> You may optionally provide a proxy service for accessing cross-domain assets that don't have CORS enabled. The read function first tries to access files on another domain by using CORS. After it fails to access any resource on another domain by using CORS, it requests more files only if a proxy service has been provided. The read function appends the file URL to the end of the proxy URL provided. This snippet of code shows how to pass a proxy service into the read function:
function InitMap()
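A minimal sketch of that call, assuming a hypothetical data file URL, a hypothetical proxy endpoint, and an existing `datasource` that's already attached to the map:

```javascript
// Minimal sketch: read a cross-domain spatial file through a CORS-enabled proxy service.
// The data URL and proxy endpoint are placeholder assumptions; `datasource` is assumed to
// be an atlas.source.DataSource that has already been added to the map.
atlas.io.read('https://example.com/data/route.gpx', {
    proxyService: window.location.origin + '/YourCorsEnabledProxyService.ashx?url='
}).then(function (r) {
    if (r) {
        // Add the parsed features to the data source so they render on the map.
        datasource.add(r);
    }
});
```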
:::image type="content" source="./media/spatial-io-read-write-spatial-data/read-delimited-file.png" alt-text="A screenshot that shows a map created from a CSV file."::: <!--
-<iframe height='500' scrolling='no' title='Add a Delimited File' src='//codepen.io/azuremaps/embed/ExjXBEb/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/ExjXBEb/'>Add a Delimited File</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/ExjXBEb/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
> ## Write spatial data
The [Spatial data write options] sample is a tool that demonstrates most the wri
:::image type="content" source="./media/spatial-io-read-write-spatial-data/spatial-data-write-options.png" alt-text="A screenshot that shows The Spatial data write options sample that demonstrates most of the write options used with the atlas.io.write function."::: <!--
-<iframe height='700' scrolling='no' title='Spatial data write options' src='//codepen.io/azuremaps/embed/YzXxXPG/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/YzXxXPG/'>Spatial data write options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/YzXxXPG/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true]
> ## Example of writing spatial data
The [Drag and drop spatial files onto map] sample allows you to drag and drop on
:::image type="content" source="./media/spatial-io-read-write-spatial-data/drag-and-drop-spatial-files-onto-map.png" alt-text="A screenshot that shows a map with a panel to the left of the map that enables you to drag and drop one or more KML, KMZ, GeoRSS, GPX, GML, GeoJSON or CSV files onto the map."::: <!--
-<iframe height='700' scrolling='no' title='Drag and drop spatial files onto map' src='//codepen.io/azuremaps/embed/zYGdGoO/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/zYGdGoO/'>Drag and drop spatial files onto map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/zYGdGoO/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true]
--> You may optionally provide a proxy service for accessing cross-domain assets that may not have CORS enabled. This snippet of code shows how you could incorporate a proxy service:
The [Read Well Known Text] sample shows how to read the well-known text string `
:::image type="content" source="./media/spatial-io-read-write-spatial-data/read-well-known-text.png" alt-text="A screenshot that shows how to read Well Known Text (WKT) as GeoJSON and render it on a map using a bubble layer."::: <!--
-<iframe height='500' scrolling='no' title='Read Well-Known Text' src='//codepen.io/azuremaps/embed/XWbabLd/?height=500&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/XWbabLd/'>Read Well-Known Text</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/XWbabLd/?height=500&theme-id=0&default-tab=result&embed-version=2&editable=true]
--> The [Read and write Well Known Text] sample demonstrates how to read and write Well Known Text (WKT) strings as GeoJSON. For the source code of this sample, see [Read and write Well Known Text source code].
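As a rough illustration of that round trip, here's a hedged sketch; the WKT string is just an example value, and the exact return shapes of the read and write helpers should be confirmed against the spatial IO module reference:

```javascript
// Minimal sketch: parse a Well Known Text string into GeoJSON and write GeoJSON back out as WKT.
// The WKT value is an example; check the spatial IO reference for the exact return types.
var wkt = 'POINT(-122.34009 47.60995)';

// Read the WKT string into GeoJSON geometry that can be added to a data source.
var geometries = atlas.io.ogc.WKT.read(wkt);

// Write the GeoJSON data back out as Well Known Text.
var wktOutput = atlas.io.ogc.WKT.write(geometries);
console.log(wktOutput);
```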
The [Read and write Well Known Text] sample demonstrates how to read and write W
:::image type="content" source="./media/spatial-io-read-write-spatial-data/read-and-write-well-known-text.png" alt-text="A screenshot showing the sample that demonstrates how to read and write Well Known Text (WKT) strings as GeoJSON."::: <!--
-<iframe height='700' scrolling='no' title='Read and write Well-Known Text' src='//codepen.io/azuremaps/embed/JjdyYav/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/JjdyYav/'>Read and write Well-Known Text</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/JjdyYav/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true]
--> ## Read and write GML
azure-maps Supported Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-browsers.md
You might want to target older browsers that don't support WebGL or that have on
The [Render Azure Maps in Leaflet] Azure Maps sample shows how to render Azure Maps Raster Tiles in the Leaflet JS map control. This sample uses the open source [Azure Maps Leaflet plugin]. For the source code for this sample, see [Render Azure Maps in Leaflet sample source code]. <!-
-<iframe height="500" scrolling="no" title="Azure Maps + Leaflet" src="//codepen.io/azuremaps/embed/GeLgyx/?height=500&theme-id=0&default-tab=html,result" frameborder="no" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/GeLgyx/'>Azure Maps + Leaflet</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/GeLgyx/?height=500&theme-id=0&default-tab=html,result]
-> For more code samples using Azure Maps in Leaflet, see [Azure Maps Samples].
azure-maps Web Sdk Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-best-practices.md
If the map isn't needed right away, lazy load the Azure Maps Web SDK until it's
The [Lazy Load the Map] code sample shows how to delay loading the Azure Maps Web SDK until a button is pressed. For the source code, see [Lazy Load the Map sample code]. <!
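A rough sketch of one way to do this follows; the element IDs (`loadMapBtn`, `myMap`), the v3 CDN paths, and the subscription key auth are assumptions for illustration, not taken from the sample:

```javascript
// Minimal sketch: load the Azure Maps Web SDK only after the user presses a button.
// Element IDs, the CDN version, and the auth details are placeholder assumptions.
function loadAzureMapsSdk(onReady) {
    // Inject the Web SDK stylesheet.
    var css = document.createElement('link');
    css.rel = 'stylesheet';
    css.href = 'https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css';
    document.head.appendChild(css);

    // Inject the Web SDK script and signal when it's ready.
    var script = document.createElement('script');
    script.src = 'https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js';
    script.onload = onReady;
    document.head.appendChild(script);
}

document.getElementById('loadMapBtn').addEventListener('click', function () {
    loadAzureMapsSdk(function () {
        var map = new atlas.Map('myMap', {
            authOptions: {
                authType: 'subscriptionKey',
                subscriptionKey: '<Your Azure Maps key>'
            }
        });
    });
});
```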
-<iframe height="500" scrolling="no" title="Lazy load the map" src="https://codepen.io/azuremaps/embed/vYEeyOv?height=500&theme-id=default&default-tab=js,result" frameborder="no" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/vYEeyOv'>Lazy load the map</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO https://codepen.io/azuremaps/embed/vYEeyOv?height=500&theme-id=default&default-tab=js,result]
> ### Add a placeholder for the map
The [Reusing Popup with Multiple Pins] code sample shows how to create a single
:::image type="content" source="./media/web-sdk-best-practices/reusing-popup-with-multiple-pins.png" alt-text="A screenshot of a map of Seattle with three blue pins, demonstrating how to Reuse Popups with Multiple Pins."::: <!
-<iframe height='500' scrolling='no' title='Reusing Popup with Multiple Pins' src='//codepen.io/azuremaps/embed/rQbjvK/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/rQbjvK/'>Reusing Popup with Multiple Pins</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO //codepen.io/azuremaps/embed/rQbjvK/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
--> That said, if you only have a few points to render on the map, the simplicity of HTML markers may be preferred. Additionally, HTML markers can easily be made draggable if needed.
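For instance, here's a small sketch of a draggable HTML marker; the coordinates are placeholders and `map` is assumed to be an existing map instance:

```javascript
// Minimal sketch: add a draggable HTML marker and log its position when dragging ends.
// Assumes an existing `map` instance; the coordinates are placeholders.
var marker = new atlas.HtmlMarker({
    position: [-122.33, 47.6],
    draggable: true
});
map.markers.add(marker);

map.events.add('dragend', marker, function () {
    console.log('New position:', marker.getOptions().position);
});
```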
The [Simple Symbol Animation] code sample demonstrates a simple way to animate a
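The general idea, as a hedged sketch: keep a shape in a data source and update its coordinates on a timer. The circular path, the interval, and the existing `map` instance are assumptions:

```javascript
// Minimal sketch: animate a symbol by periodically updating the coordinates of its shape.
// Assumes an existing `map` instance; the circular path and timing are placeholders.
var datasource = new atlas.source.DataSource();
map.sources.add(datasource);

var pin = new atlas.Shape(new atlas.data.Point([0, 0]));
datasource.add(pin);
map.layers.add(new atlas.layer.SymbolLayer(datasource));

var angle = 0;
setInterval(function () {
    angle = (angle + 5) % 360;
    var radians = angle * Math.PI / 180;
    // Move the point around a rough circle of 30 degrees radius.
    pin.setCoordinates([30 * Math.cos(radians), 30 * Math.sin(radians)]);
}, 100);
```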
:::image type="content" source="./media/web-sdk-best-practices/simple-symbol-animation.gif" alt-text="A screenshot of a map of the world with a symbol going in a circle, demonstrating how to animate the position of a symbol on the map by updating the coordinates."::: <!-
-<iframe height="500" scrolling="no" title="Symbol layer animation" src="https://codepen.io/azuremaps/embed/oNgGzRd?height=500&theme-id=default&default-tab=js,result" frameborder="no" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/oNgGzRd'>Symbol layer animation</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+> [!VIDEO https://codepen.io/azuremaps/embed/oNgGzRd?height=500&theme-id=default&default-tab=js,result]
-> ### Specify zoom level range
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
Here is a comparison between client installer and VM extension for Azure Monitor
- `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opinsights.azure.com) (If using private links on the agent, you must also add the [data collection endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint)) 6. A data collection rule you want to associate with the devices. If it doesn't exist already, [create a data collection rule](./data-collection-rule-azure-monitor-agent.md#create-a-data-collection-rule). **Do not associate the rule to any resources yet**.
+7. Before you use any PowerShell cmdlet, ensure that the PowerShell module related to the cmdlet is installed and imported.
## Install the agent 1. Download the Windows MSI installer for the agent using [this link](https://go.microsoft.com/fwlink/?linkid=2192409). You can also download it from **Monitor** > **Data Collection Rules** > **Create** experience on Azure portal (shown below):
azure-monitor Solution Agenthealth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/solution-agenthealth.md
+
+ Title: Agent Health solution in Azure Monitor | Microsoft Docs
+description: Learn how to use this solution to monitor the health of your agents reporting directly to Log Analytics or System Center Operations Manager.
+ Last updated : 08/09/2023++++
+# Agent Health solution in Azure Monitor
+The Agent Health solution in Azure helps you understand which monitoring agents are unresponsive and which are submitting operational data. That includes all the agents that report directly to the Log Analytics workspace in Azure Monitor or to a System Center Operations Manager management group connected to Azure Monitor.
+
+You can also use the Agent Health solution to:
+
+* Keep track of how many agents are deployed and where they're distributed geographically.
+* Perform other queries to maintain awareness of the distribution of agents deployed in Azure, in other cloud environments, or on-premises.
+
+> [!IMPORTANT]
+> The Agent Health solution only monitors the health of the [Log Analytics agent](log-analytics-agent.md), which is on a deprecation path. This solution doesn't monitor the health of the [Azure Monitor agent](agents-overview.md).
+
+## Prerequisites
+Before you deploy this solution, confirm that you have supported [Windows agents](../agents/agent-windows.md) reporting to the Log Analytics workspace or reporting to an [Operations Manager management group](agents-overview.md) integrated with your workspace.
+
+## Management packs
+If your Operations Manager management group is connected to a Log Analytics workspace, the following management packs are installed in Operations Manager. These management packs are also installed on directly connected Windows computers after you add this solution:
+
+* Microsoft System Center Advisor HealthAssessment Direct Channel Intelligence Pack (Microsoft.IntelligencePacks.HealthAssessmentDirect)
+* Microsoft System Center Advisor HealthAssessment Server Channel Intelligence Pack (Microsoft.IntelligencePacks.HealthAssessmentViaServer)
+
+There's nothing to configure or manage with these management packs. For more information on how solution management packs are updated, see [Connect Operations Manager to Log Analytics](../agents/om-agents.md).
+
+## Configuration
+Add the Agent Health solution to your Log Analytics workspace by using the process described in [Add solutions](../insights/solutions.md). No further configuration is required.
+
+## Supported agents
+The following table describes the connected sources that this solution supports.
+
+| Connected source | Supported | Description |
+| | | |
+| Windows agents | Yes | Heartbeat events are collected from direct Windows agents.|
+| System Center Operations Manager management group | Yes | Heartbeat events are collected from agents that report to the management group every 60 seconds and are then forwarded to Azure Monitor. A direct connection from Operations Manager agents to Azure Monitor isn't required. Heartbeat event data is forwarded from the management group to the Log Analytics workspace.|
+
+## Use the solution
+When you add the solution to your Log Analytics workspace, the **Agent Health** tile is added to your dashboard. This tile shows the total number of agents and the number of unresponsive agents in the last 24 hours.
++
+Select the **Agent Health** tile to open the **Agent Health** dashboard. The dashboard includes the columns in the following table. Each column lists the top 10 events by count that match that column's criteria for the specified time range. You can run a log search that provides the entire list. Select **See all** beneath each column or select the column heading.
+
+| Column | Description |
+|--|-|
+| Agent count over time | A trend of your agent count over a period of seven days for both Linux and Windows agents|
+| Count of unresponsive agents | A list of agents that haven't sent a heartbeat in the past 24 hours|
+| Distribution by OS type | A partition of how many Windows and Linux agents you have in your environment|
+| Distribution by agent version | A partition of the agent versions installed in your environment and a count of each one|
+| Distribution by agent category | A partition of the categories of agents that are sending up heartbeat events: direct agents, Operations Manager agents, or the Operations Manager management server|
+| Distribution by management group | A partition of the Operations Manager management groups in your environment|
+| Geo-location of agents | A partition of the countries/regions where you have agents, and a total count of the number of agents that have been installed in each country/region|
+| Count of gateways installed | The number of servers that have the Log Analytics gateway installed, and a list of these servers|
++
+## Azure Monitor log records
+The solution creates one type of record in the Log Analytics workspace: heartbeat. Heartbeat records have the properties listed in the following table.
+
+| Property | Description |
+| | |
+| `Type` | `Heartbeat`|
+| `Category` | `Direct Agent`, `SCOM Agent`, or `SCOM Management Server`|
+| `Computer` | Computer name|
+| `OSType` | Windows or Linux operating system|
+| `OSMajorVersion` | Operating system major version|
+| `OSMinorVersion` | Operating system minor version|
+| `Version` | Log Analytics agent or Operations Manager agent version|
+| `SCAgentChannel` | `Direct` and/or `SCManagementServer`|
+| `IsGatewayInstalled` | `true` if the Log Analytics gateway is installed; otherwise `false`|
+| `ComputerIP` | Public IP address for an Azure virtual machine, if one is available; Azure SNAT address (not the private IP address) for a virtual machine that uses a private IP |
+| `ComputerPrivateIPs` | List of private IPs of the computer |
+| `RemoteIPCountry` | Geographic location where the computer is deployed|
+| `ManagementGroupName` | Name of the Operations Manager management group|
+| `SourceComputerId` | Unique ID of the computer|
+| `RemoteIPLongitude` | Longitude of the computer's geographic location|
+| `RemoteIPLatitude` | Latitude of the computer's geographic location|
+
+Each agent that reports to an Operations Manager management server will send two heartbeats. The `SCAgentChannel` property's value will include both `Direct` and `SCManagementServer`, depending on what data sources and monitoring solutions you've enabled in your subscription.
+
+If you recall, data from solutions is sent either:
+
+* Directly from an Operations Manager management server to Azure Monitor.
+* Directly from the agent to Azure Monitor, because of the volume of data collected on the agent.
+
+For heartbeat events that have the value `SCManagementServer`, the `ComputerIP` value is the IP address of the management server because it actually uploads the data. For heartbeats where `SCAgentChannel` is set to `Direct`, it's the public IP address of the agent.
+
+## Sample log searches
+The following table provides sample log searches for records that the solution collects.
+
+| Query | Description |
+|:|:|
+| Heartbeat &#124; distinct Computer |Total number of agents |
+| Heartbeat &#124; summarize LastCall = max(TimeGenerated) by Computer &#124; where LastCall < ago(24h) |Count of unresponsive agents in the last 24 hours |
+| Heartbeat &#124; summarize LastCall = max(TimeGenerated) by Computer &#124; where LastCall < ago(15m) |Count of unresponsive agents in the last 15 minutes |
+| Heartbeat &#124; where TimeGenerated > ago(24h) and Computer in ((Heartbeat &#124; where TimeGenerated > ago(24h) &#124; distinct Computer)) &#124; summarize LastCall = max(TimeGenerated) by Computer |Computers online in the last 24 hours |
+| Heartbeat &#124; where TimeGenerated > ago(24h) and Computer !in ((Heartbeat &#124; where TimeGenerated > ago(30m) &#124; distinct Computer)) &#124; summarize LastCall = max(TimeGenerated) by Computer |Total agents offline in the last 30 minutes (for the last 24 hours) |
+| Heartbeat &#124; summarize AggregatedValue = dcount(Computer) by bin(TimeGenerated, 1h), OSType |Trend of the number of agents over time by OS type|
+| Heartbeat &#124; summarize AggregatedValue = dcount(Computer) by OSType |Distribution by OS type |
+| Heartbeat &#124; summarize AggregatedValue = dcount(Computer) by Version |Distribution by agent version |
+| Heartbeat &#124; summarize AggregatedValue = count() by Category |Distribution by agent category |
+| Heartbeat &#124; summarize AggregatedValue = dcount(Computer) by ManagementGroupName | Distribution by management group |
+| Heartbeat &#124; summarize AggregatedValue = dcount(Computer) by RemoteIPCountry |Geo-location of agents |
+| Heartbeat &#124; where iff(isnotnull(toint(IsGatewayInstalled)), IsGatewayInstalled == true, IsGatewayInstalled == "true") == true &#124; distinct Computer |Number of Log Analytics gateways installed |
+
+## Next steps
+
+Learn about [generating alerts from log queries in Azure Monitor](../alerts/alerts-overview.md).
azure-monitor Vmext Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/vmext-troubleshoot.md
+
+ Title: Troubleshoot the Azure Log Analytics VM extension
+description: Describe the symptoms, causes, and resolution for the most common issues with the Log Analytics VM extension for Windows and Linux Azure VMs.
+ Last updated : 06/06/2019+++
+# Troubleshoot the Log Analytics VM extension in Azure Monitor
+This article helps you troubleshoot errors that you might experience with the Log Analytics VM extension for Windows and Linux virtual machines running on Azure, and it suggests possible solutions to resolve them.
+
+To verify the status of the extension:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the portal, select **All services**. In the list of resources, enter **virtual machines**. As you begin typing, the list filters based on your input. Select **Virtual machines**.
+1. In your list of virtual machines, find and select your virtual machine.
+1. On the virtual machine, select **Extensions**.
+1. From the list, check to see if the Log Analytics extension is enabled or not. For Linux, the agent is listed as **OMSAgentforLinux**. For Windows, the agent is listed as **MicrosoftMonitoringAgent**.
+
+ ![Screenshot that shows the VM Extensions view.](./media/vmext-troubleshoot/log-analytics-vmview-extensions.png)
+
+1. Select the extension to view details.
+
+ ![Screenshot that shows the VM extension details.](./media/vmext-troubleshoot/log-analytics-vmview-extensiondetails.png)
+
+## Troubleshoot the Azure Windows VM extension
+
+If the Microsoft Monitoring Agent VM extension isn't installing or reporting, perform the following steps to troubleshoot the issue:
+
+1. Check if the Azure VM agent is installed and working correctly by using the steps in [KB 2965986](https://support.microsoft.com/kb/2965986#mt1):
+ * You can also review the VM agent log file `C:\WindowsAzure\logs\WaAppAgent.log`.
+ * If the log doesn't exist, the VM agent isn't installed.
+ * [Install the Azure VM Agent](../../virtual-machines/extensions/agent-windows.md#install-the-azure-windows-vm-agent).
+1. Review the Microsoft Monitoring Agent VM extension log files in `C:\Packages\Plugins\Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent`.
+1. Ensure the virtual machine can run PowerShell scripts.
+1. Ensure permissions on `C:\Windows\temp` haven't been changed.
+1. View the status of the Microsoft Monitoring Agent by entering `(New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg').GetCloudWorkspaces() | Format-List` in an elevated PowerShell window on the virtual machine.
+1. Review the Microsoft Monitoring Agent setup log files in `C:\WindowsAzure\Logs\Plugins\Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent\1.0.18053.0\`. This path changes based on the version number of the agent.
+
+For more information, see [Troubleshooting Windows extensions](../../virtual-machines/extensions/oms-windows.md).
+
+## Troubleshoot the Linux VM extension
+If the Log Analytics agent for Linux VM extension isn't installing or reporting, perform the following steps to troubleshoot the issue:
+
+1. If the extension status is **Unknown**, check if the Azure VM agent is installed and working correctly by reviewing the VM agent log file `/var/log/waagent.log`.
+ * If the log doesn't exist, the VM agent isn't installed.
+ * [Install the Azure VM Agent on Linux VMs](../../virtual-machines/extensions/agent-linux.md#installation).
+1. For other unhealthy statuses, review the Log Analytics agent for Linux VM extension log files in `/var/log/azure/Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux/*/extension.log` and `/var/log/azure/Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux/*/CommandExecution.log`.
+1. If the extension status is healthy but data isn't being uploaded, review the Log Analytics agent for Linux log files in `/var/opt/microsoft/omsagent/log/omsagent.log`.
+
+## Next steps
+
+For more troubleshooting guidance related to the Log Analytics agent for Linux, see [Troubleshoot issues with the Log Analytics agent for Linux](../agents/agent-linux-troubleshoot.md).
azure-monitor Alerts Automatic Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-automatic-migration.md
Last updated 06/20/2023
As [previously announced](monitoring-classic-retirement.md), classic alerts in Azure Monitor are retired for public cloud users, though still in limited use until **31 May 2021**. Classic alerts for Azure Government cloud and Microsoft Azure operated by 21Vianet will retire on **29 February 2024**.
-[A migration tool](alerts-using-migration-tool.md) is available in the Azure portal for customers to trigger migration themselves. This article explains the automatic migration process in public cloud, that will start after 31 May 2021. It also details issues and solutions you might run into.
+A migration tool is available in the Azure portal for customers to trigger migration themselves. This article explains the automatic migration process in the public cloud, which will start after 31 May 2021. It also details issues and solutions you might run into.
## Important things to note
When the automatic migration process fails, subscription owners will receive an
## Next steps - [Prepare for the migration](alerts-prepare-migration.md)-- [Understand how the migration tool works](alerts-understand-migration.md)
+- [Understand how the migration tool works](alerts-understand-migration.md)
azure-monitor Alerts Prepare Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-prepare-migration.md
If you're using a partner integration that's not listed here, confirm with the p
## Next steps -- [How to use the migration tool](alerts-using-migration-tool.md) - [Understand how the migration tool works](alerts-understand-migration.md)
azure-monitor Alerts Understand Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-understand-migration.md
Any user who has the built-in role of Monitoring Contributor at the subscription
## Common problems and remedies
-After you [trigger the migration](alerts-using-migration-tool.md), you'll receive email at the addresses you provided to notify you that migration is complete or if any action is needed from you. This section describes some common problems and how to deal with them.
+After you trigger the migration, you'll receive email at the addresses you provided to notify you that migration is complete or if any action is needed from you. This section describes some common problems and how to deal with them.
### Validation failed
As part of the migration, new metric alerts and new action groups will be create
## Next steps -- [How to use the migration tool](alerts-using-migration-tool.md) - [Prepare for the migration](alerts-prepare-migration.md)
azure-monitor Azure Cli Metrics Alert Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/azure-cli-metrics-alert-sample.md
+
+ Title: Create metric alert monitors in Azure CLI
+description: Learn how to create metric alerts in Azure Monitor with Azure CLI commands. These samples create alerts for a virtual machine and an App Service Plan.
+ Last updated : 04/05/2022++++
+# Create metric alert monitors in Azure CLI
+
+These samples create metric alert monitors in Azure Monitor by using Azure CLI commands. The first sample creates an alert for a virtual machine. The second command creates an alert that includes a dimension for an App Service Plan.
++
+## Create an alert
+
+This alert monitors an existing virtual machine named `VM07` in the resource group named `ContosoVMRG`.
+
+You can create a resource group by using the [az group create](/cli/azure/group#az-group-create) command. For information about creating virtual machines, see [Create a Windows virtual machine with the Azure CLI](../../virtual-machines/windows/quick-create-cli.md), [Create a Linux virtual machine with the Azure CLI](../../virtual-machines/linux/quick-create-cli.md), and the [az vm create](/cli/azure/vm#az-vm-create) command.
+
+```azurecli
+# resource group name: ContosoVMRG
+# virtual machine name: VM07
+
+# Create scope
+scope=$(az vm show --resource-group ContosoVMRG --name VM07 --output tsv --query id)
+
+# Create action
+action=$(az monitor action-group create --name ContosoWebhookAction \
+ --resource-group ContosoVMRG --output tsv --query id \
+ --action webhook https://alerts.contoso.com usecommonalertschema)
+
+# Create condition
+condition=$(az monitor metrics alert condition create --aggregation Average \
+ --metric "Percentage CPU" --op GreaterThan --type static --threshold 90 --output tsv)
+
+# Create metrics alert
+az monitor metrics alert create --name alert-01 --resource-group ContosoVMRG \
+ --scopes $scope --action $action --condition $condition --description "Test High CPU"
+```
+
+This sample uses the `tsv` output type, which doesn't include unwanted symbols such as quotation marks. For more information, see [Use Azure CLI effectively](/cli/azure/use-cli-effectively).
+
+## Create an alert with a dimension
+
+This sample creates an App Service Plan and then creates a metrics alert for it. The example uses a dimension to specify that all instances of the App Service Plan will fall under this metric. The sample creates a resource group and application service plan.
+
+```azurecli
+# Create resource group
+az group create --name ContosoRG --location eastus2
+
+# Create application service plan
+az appservice plan create --resource-group ContosoRG --name ContosoAppServicePlan \
+ --is-linux --number-of-workers 4 --sku S1
+
+# Create scope
+scope=$(az appservice plan show --resource-group ContosoRG --name ContosoAppServicePlan \
+ --output tsv --query id)
+
+# Create dimension
+dim01=$(az monitor metrics alert dimension create --name Instance --value * --op Include --output tsv)
+
+# Create condition
+condition=$(az monitor metrics alert condition create --aggregation Average \
+ --metric CpuPercentage --op GreaterThan --type static --threshold 90 \
+ --dimension $dim01 --output tsv)
+```
+
+To see a list of the possible metrics, run the [az monitor metrics list-definitions](/cli/azure/monitor/metrics#az-monitor-metrics-list-definitions) command. The `--output` parameter displays the values in a readable format.
++
+```azurecli
+az monitor metrics list-definitions --resource $scope --output table
+
+# Create metrics alert
+az monitor metrics alert create --name alert-02 --resource-group ContosoRG \
+ --scopes $scope --condition $condition --description "Service Plan High CPU"
+```
+
+## Clean up deployment
+
+If you created resource groups to test these commands, you can remove a resource group and all its contents by using the [az group delete](/cli/azure/group#az-group-delete) command:
+
+```azurecli
+az group delete --name ContosoVMRG
+
+az group delete --name ContosoRG
+```
+
+If you used existing resources that you want to keep, use the [az monitor metrics alert delete](/cli/azure/monitor/metrics/alert#az-monitor-metrics-alert-delete) command to delete your practice alerts:
+
+```azurecli
+az monitor metrics alert delete --name alert-01
+
+az monitor metrics alert delete --name alert-02
+```
+
+## Azure CLI commands used in this article
+
+This article uses the following Azure CLI commands:
+
+- [az appservice plan create](/cli/azure/appservice/plan#az-appservice-plan-create)
+- [az appservice plan show](/cli/azure/appservice/plan#az-appservice-plan-show)
+- [az group create](/cli/azure/group#az-group-create)
+- [az group delete](/cli/azure/group#az-group-delete)
+- [az monitor action-group create](/cli/azure/monitor/action-group#az-monitor-action-group-create)
+- [az monitor metrics alert condition create](/cli/azure/monitor/metrics/alert#az-monitor-metrics-alert-condition-create)
+- [az monitor metrics alert create](/cli/azure/monitor/metrics/alert#az-monitor-metrics-alert-create)
+- [az monitor metrics alert delete](/cli/azure/monitor/metrics/alert#az-monitor-metrics-alert-delete)
+- [az monitor metrics alert dimension create](/cli/azure/monitor/metrics/alert#az-monitor-metrics-alert-dimension-create)
+- [az monitor metrics list-definitions](/cli/azure/monitor/metrics#az-monitor-metrics-list-definitions)
+- [az vm show](/cli/azure/vm#az-vm-show)
+
+## Next steps
+
+- [Azure Monitor CLI samples](../cli-samples.md)
+- [Understand how metric alerts work in Azure Monitor](alerts-metric-overview.md)
azure-monitor Resource Manager Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-log.md
Title: Resource Manager template samples for log query alerts
description: Sample Azure Resource Manager templates to deploy Azure Monitor log query alerts. -- Last updated 05/11/2022
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
telemetry.InstrumentationKey = "my key";
## <a name="dynamic-ikey"></a> Dynamic instrumentation key
-To avoid mixing up telemetry from development, test, and production environments, you can [create separate Application Insights resources](./create-new-resource.md) and change their keys, depending on the environment.
+To avoid mixing up telemetry from development, test, and production environments, you can [create separate Application Insights resources](./create-workspace-resource.md) and change their keys, depending on the environment.
Instead of getting the instrumentation key from the configuration file, you can set it in your code. Set the key in an initialization method, such as `global.aspx.cs` in an ASP.NET service:
azure-monitor Application Insights Asp Net Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/application-insights-asp-net-agent.md
Title: Deploy Application Insights Agent description: Learn how to use Application Insights Agent to monitor website performance. It works with ASP.NET web apps hosted on-premises, in VMs, or on Azure. Previously updated : 03/13/2023 Last updated : 08/11/2023
This tab describes how to onboard to the PowerShell Gallery and download the App
Included are the most common parameters that you'll need to get started. We've also provided manual download instructions in case you don't have internet access.
-### Get an instrumentation key
+### Get a connection string
-To get started, you need an instrumentation key. For more information, see [Create an Application Insights resource](create-new-resource.md#copy-the-instrumentation-key).
+To get started, you need a connection string. For more information, see [Connection strings](sdk-connection-string.md).
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
This tab describes the following cmdlets, which are members of the [Az.Applicati
- [Start-ApplicationInsightsMonitoringTrace](?tabs=api-reference#start-applicationinsightsmonitoringtrace) > [!NOTE]
-> - To get started, you need an instrumentation key. For more information, see [Create a resource](create-new-resource.md#copy-the-instrumentation-key).
+> - To get started, you need an instrumentation key. For more information, see [Create a resource](create-workspace-resource.md).
> - This cmdlet requires that you review and accept our license and privacy statement. [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
You need:
- A functioning ASP.NET Core application. If you need to create an ASP.NET Core application, follow this [ASP.NET Core tutorial](/aspnet/core/getting-started/). - A reference to a supported version of the [Application Insights](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) NuGet package.-- A valid Application Insights connection string. This string is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get a connection string, see [Create an Application Insights resource](./create-new-resource.md).
+- A valid Application Insights connection string. This string is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get a connection string, see [Create an Application Insights resource](./create-workspace-resource.md).
## Enable Application Insights server-side telemetry (Visual Studio)
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
Title: Dependency tracking in Application Insights | Microsoft Docs description: Monitor dependency calls from your on-premises or Azure web application with Application Insights. Previously updated : 03/22/2023 Last updated : 08/11/2023 ms.devlang: csharp
azure-monitor Availability Standard Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-standard-tests.md
Last updated 03/22/2023
A Standard test is a type of availability test that checks the availability of a website by sending a single request. In addition to validating whether an endpoint is responding and measuring the performance, Standard tests also include SSL certificate validity, proactive lifetime check, HTTP request verb (for example, `GET`,`HEAD`, and `POST`), custom headers, and custom data associated with your HTTP request.
-To create an availability test, you must use an existing Application Insights resource or [create an Application Insights resource](create-new-resource.md).
+To create an availability test, you must use an existing Application Insights resource or [create an Application Insights resource](create-workspace-resource.md).
> [!TIP] > If you're currently using other availability tests, like URL ping tests, you might add Standard tests alongside the others. If you want to use Standard tests instead of one of your other tests, add a Standard test and delete your old test.
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Title: Monitor Azure App Service performance in .NET Core | Microsoft Docs description: Application performance monitoring for Azure App Service using ASP.NET Core. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 03/22/2023 Last updated : 08/11/2023 ms.devlang: csharp
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps.md
Title: Monitor Azure App Service performance | Microsoft Docs description: Application performance monitoring for Azure App Service. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 03/01/2023 Last updated : 08/11/2023
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
Title: Autoinstrumentation for Azure Monitor Application Insights
description: Overview of autoinstrumentation for Azure Monitor Application Insights codeless application performance management. Previously updated : 07/10/2023 Last updated : 08/11/2023
azure-monitor Configuration With Applicationinsights Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md
Configure a [snapshot collection for ASP.NET applications](snapshot-debugger-vm.
[diagnostic]: ./diagnostic-search.md [exceptions]: ./asp-net-exceptions.md [netlogs]: ./asp-net-trace-logs.md
-[new]: ./create-new-resource.md
+[new]: ./create-workspace-resource.md
[redfield]: ./application-insights-asp-net-agent.md [start]: ./app-insights-overview.md
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Title: Migrate an Application Insights classic resource to a workspace-based resource - Azure Monitor | Microsoft Docs description: Learn how to upgrade your Application Insights classic resource to the new workspace-based model. Previously updated : 05/14/2023 Last updated : 08/11/2023
azure-monitor Custom Operations Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md
description: Learn how to track custom operations with the Application Insights
ms.devlang: csharp Previously updated : 11/26/2019 Last updated : 08/11/2023
azure-monitor Diagnostic Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/diagnostic-search.md
See the [Limits summary](../service-limits.md#application-insights).
We don't log the POST data automatically, but you can use [TrackTrace or log calls](./asp-net-trace-logs.md). Put the POST data in the message parameter. You can't filter on the message in the same way you can filter on properties, but the size limit is longer.
+### Why does my Azure Function search return no results?
+
+Azure Functions doesn't log URL query strings.
+ ## <a name="add"></a>Next steps * [Write complex queries in Analytics](../logs/log-analytics-tutorial.md)
azure-monitor Ilogger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ilogger.md
Title: Application Insights logging with .NET description: Learn how to use Application Insights with the ILogger interface in .NET. Previously updated : 04/24/2023 Last updated : 08/11/2023 ms.devlang: csharp
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-addresses.md
Title: IP addresses used by Azure Monitor | Microsoft Docs description: This article discusses server firewall exceptions that are required by Azure Monitor Previously updated : 03/22/2023 Last updated : 08/11/2023
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
Title: Configure Azure Monitor Application Insights for Spring Boot description: How to configure Azure Monitor Application Insights for Spring Boot applications Previously updated : 07/20/2023 Last updated : 08/11/2023 ms.devlang: java
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Title: Configuration options - Azure Monitor Application Insights for Java description: This article shows you how to configure Azure Monitor Application Insights for Java. Previously updated : 07/20/2023 Last updated : 08/11/2023 ms.devlang: java
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md
Title: Sampling overrides (preview) - Azure Monitor Application Insights for Java description: Learn to configure sampling overrides in Azure Monitor Application Insights for Java. Previously updated : 04/24/2023 Last updated : 08/11/2023 ms.devlang: java
Sampling overrides allow you to override the [default sampling percentage](./jav
for example: * Set the sampling percentage to 0 (or some small value) for noisy health checks. * Set the sampling percentage to 0 (or some small value) for noisy dependency calls.
- * Set the sampling percentage to 100 for an important request type (e.g. `/login`)
+ * Set the sampling percentage to 100 for an important request type (for example, `/login`)
even though you have the default sampling configured to something lower. ## Terminology
If no sampling overrides match:
## Example: Suppress collecting telemetry for health checks
-This will suppress collecting telemetry for all requests to `/health-checks`.
+This example suppresses collecting telemetry for all requests to `/health-checks`.
-This will also suppress collecting any downstream spans (dependencies) that would normally be collected under
+This example also suppresses collecting any downstream spans (dependencies) that would normally be collected under
`/health-checks`. ```json
This will also suppress collecting any downstream spans (dependencies) that woul
## Example: Suppress collecting telemetry for a noisy dependency call
-This will suppress collecting telemetry for all `GET my-noisy-key` redis calls.
+This example suppresses collecting telemetry for all `GET my-noisy-key` redis calls.
```json {
This will suppress collecting telemetry for all `GET my-noisy-key` redis calls.
## Example: Collect 100% of telemetry for an important request type
-This will collect 100% of telemetry for `/login`.
+This example collects 100% of telemetry for `/login`.
Since downstream spans (dependencies) respect the parent's sampling decision (absent any sampling override for that downstream span),
-those will also be collected for all '/login' requests.
+those are also collected for all '/login' requests.
```json {
so attributes such as `http.status_code` which are captured later on can't be us
## Troubleshooting
-If you use `regexp` and the sampling override doesn't work, please try with the `.*` regex. If the sampling now works, it means
-you have an issue with the first regex and please read [this regex documentation](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html).
+If you use `regexp` and the sampling override doesn't work, try the `.*` regex. If the sampling now works, it means
+there's an issue with your first regex. To fix it, read [this regex documentation](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html).
-If it doesn't work with `.*`, you may have a syntax issue in your `application-insights.json file`. Please look at the Application Insights logs and see if you notice
+If it doesn't work with `.*`, you may have a syntax issue in your `application-insights.json` file. Look at the Application Insights logs and see if you notice
warning messages.
azure-monitor Javascript Framework Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md
description: Learn how to install and use JavaScript framework extensions for th
ibiza Previously updated : 07/10/2023 Last updated : 08/11/2023 ms.devlang: javascript
azure-monitor Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md
Title: Microsoft Azure Monitor Application Insights JavaScript SDK description: Microsoft Azure Monitor Application Insights JavaScript SDK is a powerful tool for monitoring and analyzing web application performance. Previously updated : 07/10/2023 Last updated : 08/11/2023 ms.devlang: javascript
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Title: Diagnose with Live Metrics - Application Insights - Azure Monitor description: Monitor your web app in real time with custom metrics, and diagnose issues with a live feed of failures, traces, and events. Previously updated : 07/27/2023 Last updated : 08/11/2023 ms.devlang: csharp
azure-monitor Migrate From Instrumentation Keys To Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md
Title: Migrate from Application Insights instrumentation keys to connection strings description: Learn the steps required to upgrade from Azure Monitor Application Insights instrumentation keys to connection strings. Previously updated : 05/06/2023 Last updated : 08/11/2023
Connection strings provide a single configuration setting and eliminate the need
- **Reliability**: Connection strings make telemetry ingestion more reliable by removing dependencies on global ingestion endpoints. - **Security**: Connection strings allow authenticated telemetry ingestion by using [Azure Active Directory (Azure AD) authentication for Application Insights](azure-ad-authentication.md).-- **Customized endpoints (sovereign or hybrid cloud environments)**: Endpoint settings allow sending data to a specific [Azure Government region](create-new-resource.md#regions-that-require-endpoint-modification). ([See examples](sdk-connection-string.md#set-a-connection-string).)
+- **Customized endpoints (sovereign or hybrid cloud environments)**: Endpoint settings allow sending data to a specific Azure Government region. ([See examples](sdk-connection-string.md#set-a-connection-string).)
- **Privacy (regional endpoints)**: Connection strings ease privacy concerns by sending data to regional endpoints, ensuring data doesn't leave a geographic region. ## Supported SDK versions
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
Before you begin, make sure that you have an Azure subscription, or [get a new o
### <a name="resource"></a> Set up an Application Insights resource 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Create an [Application Insights resource](create-new-resource.md).
+1. Create an [Application Insights resource](create-workspace-resource.md).
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
Title: Monitor Python applications with Azure Monitor | Microsoft Docs description: This article provides instructions on how to wire up OpenCensus Python with Azure Monitor. Previously updated : 05/01/2023 Last updated : 08/11/2023 ms.devlang: python
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
Title: Add, modify, and filter Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to add, modify, and filter OpenTelemetry for applications using Azure Monitor. Previously updated : 06/22/2023 Last updated : 08/11/2023 ms.devlang: csharp, javascript, typescript, python
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
Title: Configure Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides configuration guidance for .NET, Java, Node.js, and Python applications. Previously updated : 07/10/2023 Last updated : 08/11/2023 ms.devlang: csharp, javascript, typescript, python
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Title: Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 07/20/2023 Last updated : 08/11/2023 ms.devlang: csharp, javascript, typescript, python
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md
Azure should set up the resources in strict order. To make sure one setup comple
See these other automation articles:
-* [Create an Application Insights resource](./create-new-resource.md#create-a-resource-automatically) via a quick method without using a template.
+* [Create an Application Insights resource](./create-workspace-resource.md)
* [Create web tests](../alerts/resource-manager-alerts-metric.md#availability-test-with-metric-alert). * [Send Azure Diagnostics to Application Insights](../agents/diagnostics-extension-to-application-insights.md). * [Create release annotations](annotations.md).
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
Title: Telemetry sampling in Azure Application Insights | Microsoft Docs description: How to keep the volume of telemetry under control. Previously updated : 07/10/2023 Last updated : 08/11/2023
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
Title: Connection strings in Application Insights | Microsoft Docs description: This article shows how to use connection strings. Previously updated : 07/10/2023 Last updated : 08/11/2023
azure-monitor Separate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/separate-resources.md
var appInsights = window.appInsights || function(config){ ...
## Create more Application Insights resources
-To create an Applications Insights resource, see [Create an Application Insights resource](./create-new-resource.md).
+To create an Application Insights resource, see [Create an Application Insights resource](./create-workspace-resource.md).
+
+> [!WARNING]
+> You may incur additional network costs if your Application Insights resource is monitoring an Azure resource (that is, a telemetry producer) in a different region. Costs vary depending on the region the telemetry originates from and the region it's sent to. Refer to [Azure bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/) for details.
### Get the instrumentation key The instrumentation key identifies the resource that you created.
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
The best experience is obtained by installing Application Insights both in your
1. **Server code:** Install the appropriate module for your [ASP.NET](./asp-net.md), [Azure](./app-insights-overview.md), [Java](./opentelemetry-enable.md?tabs=java), [Node.js](./nodejs.md), or [other](./app-insights-overview.md#supported-languages) app.
- * If you don't want to install server code, [create an Application Insights resource](./create-new-resource.md).
+ * If you don't want to install server code, [create an Application Insights resource](./create-workspace-resource.md).
1. **Webpage code:** Use the JavaScript SDK to collect data from webpages. See [Get started with the JavaScript SDK](./javascript-sdk.md).
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
The [Application Insights SDK for Worker Service](https://www.nuget.org/packages
## Prerequisites
-You must have a valid Application Insights connection string. This string is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get a connection string, see [Create an Application Insights resource](./create-new-resource.md).
+You must have a valid Application Insights connection string. This string is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get a connection string, see [Connection Strings](./sdk-connection-string.md).
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
azure-monitor Best Practices Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md
Virtual machines generate similar data as other Azure resources, but they requir
## Monitor containers
-Virtual machines generate similar data as other Azure resources, but they require a containerized version of the Log Analytics agent to collect required data. Container insights help you prepare your containerized environment for monitoring. It works in conjunction with third-party tools to provide comprehensive monitoring of Azure Kubernetes Service (AKS) and the workflows it supports. See [Monitoring Azure Kubernetes Service with Azure Monitor](../aks/monitor-aks.md?toc=/azure/azure-monitor/toc.json) for a dedicated scenario on monitoring AKS with Azure Monitor.
+Containers generate similar data to other Azure resources, but they require a containerized version of the Log Analytics agent to collect required data. Container insights helps you prepare your containerized environment for monitoring. It works in conjunction with third-party tools to provide comprehensive monitoring of Azure Kubernetes Service (AKS) and the workflows it supports. See [Monitoring Azure Kubernetes Service with Azure Monitor](../aks/monitor-aks.md?toc=/azure/azure-monitor/toc.json) for a dedicated scenario on monitoring AKS with Azure Monitor.
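As a minimal sketch (not part of the original article), Container insights can be enabled on an existing AKS cluster with the monitoring add-on through the Azure CLI; the cluster, resource group, and workspace values below are placeholders:

```azurecli
# Enable the Container insights (monitoring) add-on on an existing AKS cluster
az aks enable-addons \
    --addons monitoring \
    --name <cluster-name> \
    --resource-group <resource-group-name> \
    --workspace-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```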
## Monitor applications
azure-monitor Container Insights Gpu Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-gpu-monitoring.md
Title: Configure GPU monitoring with Container insights
description: This article describes how you can configure monitoring Kubernetes clusters with NVIDIA and AMD GPU enabled nodes with Container insights. Previously updated : 05/24/2022 Last updated : 08/09/2023
azure-monitor Container Insights Hybrid Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-hybrid-setup.md
- Title: Configure hybrid Kubernetes clusters with Container insights | Microsoft Docs
-description: This article describes how you can configure Container insights to monitor Kubernetes clusters hosted on Azure Stack or other environments.
- Previously updated : 06/30/2020---
-# Configure hybrid Kubernetes clusters with Container insights
-
-Container insights provides a rich monitoring experience for the Azure Kubernetes Service (AKS) and [AKS Engine on Azure](https://github.com/Azure/aks-engine), which is a self-managed Kubernetes cluster hosted on Azure. This article describes how to enable monitoring of Kubernetes clusters hosted outside of Azure and achieve a similar monitoring experience.
-
-## Supported configurations
-
-The following configurations are officially supported with Container insights. If you have different Kubernetes or operating system versions, open a support ticket.
-
-- Environments:
- - Kubernetes on-premises.
- - AKS Engine on Azure and Azure Stack. For more information, see [AKS Engine on Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview).
- - [OpenShift](https://docs.openshift.com/container-platform/4.3/welcome/https://docsupdatetracker.net/index.html) version 4 and higher, on-premises or in other cloud environments.
-- Versions of Kubernetes and support policy are the same as versions of [AKS supported](../../aks/supported-kubernetes-versions.md).
-- The following container runtimes are supported: Moby and CRI-compatible runtimes such as CRI-O and ContainerD.
-- The supported Linux OS releases for main and worker nodes are Ubuntu (18.04 LTS and 16.04 LTS) and Red Hat Enterprise Linux CoreOS 43.81.
-- Azure Access Control service supported: Kubernetes role-based access control (RBAC) and non-RBAC.
-
-## Prerequisites
-
-Before you start, make sure that you meet the following prerequisites:
--- You have a [Log Analytics workspace](../logs/design-logs-deployment.md). Container insights supports a Log Analytics workspace in the regions listed in Azure [Products by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace through [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or the [Azure portal](../logs/quick-create-workspace.md).-
- >[!NOTE]
- >Enabling the monitoring of multiple clusters with the same cluster name to the same Log Analytics workspace isn't supported. Cluster names must be unique.
- >
-- You're a member of the Log Analytics contributor role to enable container monitoring. For more information about how to control access to a Log Analytics workspace, see [Manage access to workspace and log data](../logs/manage-access.md).
-- To view the monitoring data, you must have the [Log Analytics reader](../logs/manage-access.md#azure-rbac) role in the Log Analytics workspace, configured with Container insights.
-- You have a [Helm client](https://helm.sh/docs/using_helm/) to onboard the Container insights chart for the specified Kubernetes cluster.
-- The following proxy and firewall configuration information is required for the containerized version of the Log Analytics agent for Linux to communicate with Azure Monitor:
-
- |Agent resource|Ports |
- |||
- |*.ods.opinsights.azure.com |Port 443 |
- |*.oms.opinsights.azure.com |Port 443 |
- |*.dc.services.visualstudio.com |Port 443 |
-- The containerized agent requires the Kubelet `cAdvisor secure port: 10250` or `unsecure port: 10255` to be opened on all nodes in the cluster to collect performance metrics. We recommend that you configure `secure port: 10250` on the Kubelet cAdvisor if it isn't configured already.
-- The containerized agent requires the following environmental variables to be specified on the container to communicate with the Kubernetes API service within the cluster to collect inventory data: `KUBERNETES_SERVICE_HOST` and `KUBERNETES_PORT_443_TCP_PORT`.
-
->[!IMPORTANT]
->The minimum agent version supported for monitoring hybrid Kubernetes clusters is *ciprod10182019* or later.
-
-## Enable monitoring
-
-To enable Container insights for the hybrid Kubernetes cluster:
-
-1. Configure your Log Analytics workspace with the Container insights solution.
-
-1. Enable the Container insights Helm chart with a Log Analytics workspace.
-
-For more information on monitoring solutions in Azure Monitor, see [Monitoring solutions in Azure Monitor](/previous-versions/azure/azure-monitor/insights/solutions).
-
-### Add the Azure Monitor Containers solution
-
-You can deploy the solution with the provided Azure Resource Manager template by using the Azure PowerShell cmdlet `New-AzResourceGroupDeployment` or with the Azure CLI.
-
-If you're unfamiliar with the concept of deploying resources by using a template, see:
-
-- [Deploy resources with Resource Manager templates and Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md)
-- [Deploy resources with Resource Manager templates and the Azure CLI](../../azure-resource-manager/templates/deploy-cli.md)
-
-If you choose to use the Azure CLI, you first need to install and use the CLI locally. You must be running the Azure CLI version 2.0.59 or later. To identify your version, run `az --version`. If you need to install or upgrade the Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-This method includes two JSON templates. One template specifies the configuration to enable monitoring. The other template contains parameter values that you configure to specify:
-
-- `workspaceResourceId`: The full resource ID of your Log Analytics workspace.
-- `workspaceRegion`: The region the workspace is created in, which is also referred to as **Location** in the workspace properties when you view them from the Azure portal.
-
-To first identify the full resource ID of your Log Analytics workspace that's required for the `workspaceResourceId` parameter value in the *containerSolutionParams.json* file, perform the following steps. Then run the PowerShell cmdlet or Azure CLI command to add the solution.
-
-1. List all the subscriptions to which you have access by using the following command:
-
- ```azurecli
- az account list --all -o table
- ```
-
- The output will resemble the following example:
-
- ```azurecli
- Name CloudName SubscriptionId State IsDefault
- -- - --
- Microsoft Azure AzureCloud 0fb60ef2-03cc-4290-b595-e71108e8f4ce Enabled True
- ```
-
- Copy the value for **SubscriptionId**.
-
-1. Switch to the subscription hosting the Log Analytics workspace by using the following command:
-
- ```azurecli
- az account set -s <subscriptionId of the workspace>
- ```
-
-1. The following example displays the list of workspaces in your subscriptions in the default JSON format:
-
- ```azurecli
- az resource list --resource-type Microsoft.OperationalInsights/workspaces -o json
- ```
-
- In the output, find the workspace name. Then copy the full resource ID of that Log Analytics workspace under the field **ID**.
-
-1. Copy and paste the following JSON syntax into your file:
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "workspaceResourceId": {
- "type": "string",
- "metadata": {
- "description": "Azure Monitor Log Analytics Workspace Resource ID"
- }
- },
- "workspaceRegion": {
- "type": "string",
- "metadata": {
- "description": "Azure Monitor Log Analytics Workspace region"
- }
- }
- },
- "resources": [
- {
- "type": "Microsoft.Resources/deployments",
- "name": "[Concat('ContainerInsights', '-', uniqueString(parameters('workspaceResourceId')))]",
- "apiVersion": "2017-05-10",
- "subscriptionId": "[split(parameters('workspaceResourceId'),'/')[2]]",
- "resourceGroup": "[split(parameters('workspaceResourceId'),'/')[4]]",
- "properties": {
- "mode": "Incremental",
- "template": {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {},
- "variables": {},
- "resources": [
- {
- "apiVersion": "2015-11-01-preview",
- "type": "Microsoft.OperationsManagement/solutions",
- "location": "[parameters('workspaceRegion')]",
- "name": "[Concat('ContainerInsights', '(', split(parameters('workspaceResourceId'),'/')[8], ')')]",
- "properties": {
- "workspaceResourceId": "[parameters('workspaceResourceId')]"
- },
- "plan": {
- "name": "[Concat('ContainerInsights', '(', split(parameters('workspaceResourceId'),'/')[8], ')')]",
- "product": "[Concat('OMSGallery/', 'ContainerInsights')]",
- "promotionCode": "",
- "publisher": "Microsoft"
- }
- }
- ]
- },
- "parameters": {}
- }
- }
- ]
- }
- ```
-
-1. Save this file as **containerSolution.json** to a local folder.
-
-1. Paste the following JSON syntax into your file:
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "workspaceResourceId": {
- "value": "<workspaceResourceId>"
- },
- "workspaceRegion": {
- "value": "<workspaceRegion>"
- }
- }
- }
- ```
-
-1. Edit the values for **workspaceResourceId** by using the value you copied in step 3. For **workspaceRegion**, copy the **Region** value after running the Azure CLI command [az monitor log-analytics workspace show](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-list&preserve-view=true).
-
-1. Save this file as **containerSolutionParams.json** to a local folder.
-
-1. You're ready to deploy this template.
-
- - To deploy with Azure PowerShell, use the following commands in the folder that contains the template:
-
- ```powershell
- # configure and login to the cloud of Log Analytics workspace.Specify the corresponding cloud environment of your workspace to below command.
- Connect-AzureRmAccount -Environment <AzureCloud | AzureChinaCloud | AzureUSGovernment>
- ```
-
- ```powershell
- # set the context of the subscription of Log Analytics workspace
- Set-AzureRmContext -SubscriptionId <subscription Id of Log Analytics workspace>
- ```
-
- ```powershell
- # execute deployment command to add Container Insights solution to the specified Log Analytics workspace
- New-AzureRmResourceGroupDeployment -Name OnboardCluster -ResourceGroupName <resource group of Log Analytics workspace> -TemplateFile .\containerSolution.json -TemplateParameterFile .\containerSolutionParams.json
- ```
-
- The configuration change can take a few minutes to finish. When it's finished, a message similar to the following example includes this result:
-
- ```powershell
- provisioningState : Succeeded
- ```
-
- - To deploy with the Azure CLI, run the following commands:
-
- ```azurecli
- az cloud set --name <AzureCloud | AzureChinaCloud | AzureUSGovernment>
- az login
- az account set --subscription "Subscription Name"
- # execute deployment command to add container insights solution to the specified Log Analytics workspace
- az deployment group create --resource-group <resource group of log analytics workspace> --name <deployment name> --template-file ./containerSolution.json --parameters @./containerSolutionParams.json
- ```
-
- The configuration change can take a few minutes to finish. When it's finished, a message similar to the following example includes this result:
-
- ```azurecli
- provisioningState : Succeeded
- ```
-
- After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster.
-
-## Install the Helm chart
-
-In this section, you install the containerized agent for Container insights. Before you proceed, identify the workspace ID required for the `amalogsagent.secret.wsid` parameter and the primary key required for the `amalogsagent.secret.key` parameter. To identify this information, follow these steps and then run the commands to install the agent by using the Helm chart.
-
-1. Run the following command to identify the workspace ID:
-
- `az monitor log-analytics workspace list --resource-group <resourceGroupName>`
-
- In the output, find the workspace name under the field **name**. Then copy the workspace ID of that Log Analytics workspace under the field **customerID**.
-
-1. Run the following command to identify the primary key for the workspace:
-
- `az monitor log-analytics workspace get-shared-keys --resource-group <resourceGroupName> --workspace-name <logAnalyticsWorkspaceName>`
-
- In the output, find the primary key under the field **primarySharedKey** and then copy the value.
-
- >[!NOTE]
- >The following commands are applicable only for Helm version 2. Use of the `--name` parameter isn't applicable with Helm version 3.
-
- If your Kubernetes cluster communicates through a proxy server, configure the parameter `amalogsagent.proxy` with the URL of the proxy server. If the cluster doesn't communicate through a proxy server, you don't need to specify this parameter. For more information, see the section [Configure the proxy endpoint](#configure-the-proxy-endpoint) later in this article.
-
-1. Add the Azure charts repository to your local list by running the following command:
-
- ```
- helm repo add microsoft https://microsoft.github.io/charts/repo
- ```
-
-1. Install the chart by running the following command:
-
- ```
- $ helm install --name myrelease-1 \
- --set amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<my_prod_cluster> microsoft/azuremonitor-containers
- ```
-
- If the Log Analytics workspace is in Microsoft Azure operated by 21Vianet, run the following command:
-
- ```
- $ helm install --name myrelease-1 \
- --set amalogsagent.domain=opinsights.azure.cn,amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<your_cluster_name> microsoft/azuremonitor-containers
- ```
-
- If the Log Analytics workspace is in Azure US Government, run the following command:
-
- ```
- $ helm install --name myrelease-1 \
- --set amalogsagent.domain=opinsights.azure.us,amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<your_cluster_name> microsoft/azuremonitor-containers
- ```
-
-### Enable the Helm chart by using the API model
-
-You can specify an add-on in the AKS Engine cluster specification JSON file, which is also referred to as the API model. In this add-on, provide the base64-encoded version of `WorkspaceGUID` and `WorkspaceKey` of the Log Analytics workspace where the collected monitoring data is stored. You can find `WorkspaceGUID` and `WorkspaceKey` by using steps 1 and 2 in the previous section.
-
-Supported API definitions for the Azure Stack Hub cluster can be found in the example [kubernetes-container-monitoring_existing_workspace_id_and_key.json](https://github.com/Azure/aks-engine/blob/master/examples/addons/container-monitoring/kubernetes-container-monitoring_existing_workspace_id_and_key.json). Specifically, find the **addons** property in **kubernetesConfig**:
-
-```json
-"orchestratorType": "Kubernetes",
- "kubernetesConfig": {
- "addons": [
- {
- "name": "container-monitoring",
- "enabled": true,
- "config": {
- "workspaceGuid": "<Azure Log Analytics Workspace Id in Base-64 encoded>",
- "workspaceKey": "<Azure Log Analytics Workspace Key in Base-64 encoded>"
- }
- }
- ]
- }
-```
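To illustrate the base64 encoding that the API model expects, here's a hedged bash sketch (not from the original article) that retrieves the workspace ID and primary key with the Azure CLI and encodes them; the resource group and workspace names are placeholders:

```bash
# Look up the Log Analytics workspace GUID (customerId) and primary key, then base64-encode them
WORKSPACE_GUID=$(az monitor log-analytics workspace show \
    --resource-group <resourceGroupName> --workspace-name <workspaceName> \
    --query customerId --output tsv)
WORKSPACE_KEY=$(az monitor log-analytics workspace get-shared-keys \
    --resource-group <resourceGroupName> --workspace-name <workspaceName> \
    --query primarySharedKey --output tsv)

echo -n "$WORKSPACE_GUID" | base64   # value for workspaceGuid in the API model
echo -n "$WORKSPACE_KEY" | base64    # value for workspaceKey in the API model
```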
-
-## Configure agent data collection
-
-Starting with chart version 1.0.0, the agent data collection settings are controlled from the ConfigMap. For more information on agent data collection settings, see [Configure agent data collection for Container insights](container-insights-agent-config.md).
-
-After you've successfully deployed the chart, you can review the data for your hybrid Kubernetes cluster in Container insights from the Azure portal.
-
->[!NOTE]
->Ingestion latency is around 5 to 10 minutes from the agent to commit in the Log Analytics workspace. Status of the cluster shows the value **No data** or **Unknown** until all the required monitoring data is available in Azure Monitor.
-
-## Configure the proxy endpoint
-
-Starting with chart version 2.7.1, the chart supports specifying the proxy endpoint with the `amalogsagent.proxy` chart parameter so that it can communicate through your proxy server. Communication between the Container insights agent and Azure Monitor can go through an HTTP or HTTPS proxy server. Both anonymous and basic authentication with a username and password are supported.
-
-The proxy configuration value has the syntax `[protocol://][user:password@]proxyhost[:port]`.
-
-> [!NOTE]
->If your proxy server doesn't require authentication, you still need to specify a pseudo username and password. It can be any username or password.
-
-|Property| Description |
-|--|-|
-|protocol | HTTP or HTTPS |
-|user | Optional username for proxy authentication |
-|password | Optional password for proxy authentication |
-|proxyhost | Address or FQDN of the proxy server |
-|port | Optional port number for the proxy server |
-
-An example is `amalogsagent.proxy=http://user01:password@proxy01.contoso.com:8080`.
-
-If you specify the protocol as **http**, the HTTP requests are created by using an SSL/TLS secure connection. Your proxy server must support SSL/TLS protocols.
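Putting the pieces together, here's a hedged sketch of passing the proxy setting at install time, using the same Helm 2-style syntax and placeholder values as the earlier examples in this article:

```bash
$ helm install --name myrelease-1 \
--set amalogsagent.proxy=http://user01:password@proxy01.contoso.com:8080,amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<your_cluster_name> microsoft/azuremonitor-containers
```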
-
-## Troubleshooting
-
-If you encounter an error while you attempt to enable monitoring for your hybrid Kubernetes cluster, copy the PowerShell script [TroubleshootError_nonAzureK8s.ps1](https://aka.ms/troubleshoot-non-azure-k8s) and save it to a folder on your computer. This script is provided to help you detect and fix the issues you encounter. It's designed to detect and attempt correction of the following issues:
-- The specified Log Analytics workspace is valid.
-- The Log Analytics workspace is configured with the Container insights solution. If not, configure the workspace.
-- The Azure Monitor Agent replicaset pods are running.
-- The Azure Monitor Agent daemonset pods are running.
-- The Azure Monitor Agent Health service is running.
-- The Log Analytics workspace ID and key configured on the containerized agent match the workspace that Container insights is configured with.
-- All the Linux worker nodes have the `kubernetes.io/role=agent` label required to schedule the agent pods. If the label doesn't exist, add it.
-- `cAdvisor secure port: 10250` or `unsecure port: 10255` is opened on all nodes in the cluster.
-
-To execute with Azure PowerShell, use the following commands in the folder that contains the script:
-
-```powershell
-.\TroubleshootError_nonAzureK8s.ps1 -azureLogAnalyticsWorkspaceResourceId /subscriptions/<subscriptionId>/resourceGroups/<resourcegroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName> -kubeConfig <kubeConfigFile> -clusterContextInKubeconfig <clusterContext>
-```
-
-## Next steps
-
-Now that monitoring is enabled to collect health and resource utilization of your hybrid Kubernetes clusters and the workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
azure-monitor Container Insights Optout Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-hybrid.md
- Title: How to stop monitoring your hybrid Kubernetes cluster | Microsoft Docs
-description: This article describes how you can stop monitoring of your hybrid Kubernetes cluster with Container insights.
- Previously updated : 05/24/2022----
-# How to stop monitoring your hybrid cluster
-
-After you enable monitoring of your Kubernetes cluster, you can stop monitoring the cluster with Container insights if you decide you no longer want to monitor it. This article shows how to accomplish this for the following environments:
-- AKS Engine on Azure and Azure Stack
-- OpenShift version 4 and higher
-- Azure Arc-enabled Kubernetes (preview)
-
-## How to stop monitoring using Helm
-
-The following steps apply to the following environments:
-- AKS Engine on Azure and Azure Stack
-- OpenShift version 4 and higher
-
-1. To first identify the Container insights helm chart release installed on your cluster, run the following helm command.
-
- ```
- helm list
- ```
-
- The output will resemble the following:
-
- ```
- NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
- azmon-containers-release-1 default 3 2020-04-21 15:27:24.1201959 -0700 PDT deployed azuremonitor-containers-2.7.0 7.0.0-1
- ```
-
- *azmon-containers-release-1* represents the helm chart release for Container insights.
-
-2. To delete the chart release, run the following helm command.
-
- `helm delete <releaseName>`
-
- Example:
-
- `helm delete azmon-containers-release-1`
-
- This will remove the release from the cluster. You can verify by running the `helm list` command:
-
- ```
- NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
- ```
-
-The configuration change can take a few minutes to complete. Because Helm tracks your releases even after you've deleted them, you can audit a cluster's history, and even undelete a release with `helm rollback`.
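For example, here's a hedged sketch of auditing and restoring a deleted release with standard Helm commands (the release name and revision number are placeholders):

```bash
# List the recorded revisions for the release, including the deletion
helm history myrelease-1

# Roll back (undelete) the release to an earlier revision, for example revision 3
helm rollback myrelease-1 3
```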
-
-## How to stop monitoring on Azure Arc-enabled Kubernetes
-
-### Using PowerShell
-
-1. Download the script that removes the monitoring add-on from your cluster, and save it to a local folder by using the following command:
-
- ```powershell
- wget https://aka.ms/disable-monitoring-powershell-script -OutFile disable-monitoring.ps1
- ```
-
-2. Configure the `$azureArcClusterResourceId` variable by setting the corresponding values for `subscriptionId`, `resourceGroupName` and `clusterName` representing the resource ID of your Azure Arc-enabled Kubernetes cluster resource.
-
- ```powershell
- $azureArcClusterResourceId = "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Kubernetes/connectedClusters/<clusterName>"
- ```
-
-3. Configure the `$kubeContext` variable with the **kube-context** of your cluster by running the command `kubectl config get-contexts`. If you want to use the current context, set the value to `""`.
-
- ```powershell
- $kubeContext = "<kubeContext name of your k8s cluster>"
- ```
-
-4. Run the following command to stop monitoring the cluster.
-
- ```powershell
- .\disable-monitoring.ps1 -clusterResourceId $azureArcClusterResourceId -kubeContext $kubeContext
- ```
-
-#### Using service principal
-The script *disable-monitoring.ps1* uses interactive device login. If you prefer non-interactive login, you can use an existing service principal or create a new one that has the required permissions as described in [Prerequisites](container-insights-enable-arc-enabled-clusters.md#prerequisites). To use a service principal, pass the $servicePrincipalClientId, $servicePrincipalClientSecret and $tenantId parameters with the values of the service principal you intend to use to the *disable-monitoring.ps1* script.
-
-```powershell
-$subscriptionId = "<subscription Id of the Azure Arc-connected cluster resource>"
-$servicePrincipal = New-AzADServicePrincipal -Role Contributor -Scope "/subscriptions/$subscriptionId"
-
-$servicePrincipalClientId = $servicePrincipal.ApplicationId.ToString()
-$servicePrincipalClientSecret = [System.Net.NetworkCredential]::new("", $servicePrincipal.Secret).Password
-$tenantId = (Get-AzSubscription -SubscriptionId $subscriptionId).TenantId
-```
-
-For example:
-
-```powershell
-.\disable-monitoring.ps1 -clusterResourceId $azureArcClusterResourceId -kubeContext $kubeContext -servicePrincipalClientId $servicePrincipalClientId -servicePrincipalClientSecret $servicePrincipalClientSecret -tenantId $tenantId
-```
--
-### Using bash
-
-1. Download the script that removes the monitoring add-on from your cluster, and save it to a local folder by using the following command:
-
- ```bash
- curl -o disable-monitoring.sh -L https://aka.ms/disable-monitoring-bash-script
- ```
-
-2. Configure the `azureArcClusterResourceId` variable by setting the corresponding values for `subscriptionId`, `resourceGroupName` and `clusterName` representing the resource ID of your Azure Arc-enabled Kubernetes cluster resource.
-
- ```bash
- export AZUREARCCLUSTERRESOURCEID="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Kubernetes/connectedClusters/<clusterName>"
- ```
-
-3. Configure the `kubeContext` variable with the **kube-context** of your cluster by running the command `kubectl config get-contexts`.
-
- ```bash
- export KUBECONTEXT="<kubeContext name of your k8s cluster>"
- ```
-
-4. To stop monitoring your cluster, there are different commands provided based on your deployment scenario.
-
- Run the following command to stop monitoring the cluster using the current context.
-
- ```bash
- bash disable-monitoring.sh --resource-id $AZUREARCCLUSTERRESOURCEID
- ```
-
- Run the following command to stop monitoring the cluster by specifying a context
-
- ```bash
- bash disable-monitoring.sh --resource-id $AZUREARCCLUSTERRESOURCEID --kube-context $KUBECONTEXT
- ```
-
-#### Using service principal
-The bash script *disable-monitoring.sh* uses interactive device login. If you prefer non-interactive login, you can use an existing service principal or create a new one that has the required permissions as described in [Prerequisites](container-insights-enable-arc-enabled-clusters.md#prerequisites). To use a service principal, pass the --client-id, --client-secret and --tenant-id values of the service principal you intend to use to the *disable-monitoring.sh* bash script.
-
-```bash
-SUBSCRIPTIONID="<subscription Id of the Azure Arc-connected cluster resource>"
-SERVICEPRINCIPAL=$(az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/${SUBSCRIPTIONID}")
-SERVICEPRINCIPALCLIENTID=$(echo $SERVICEPRINCIPAL | jq -r '.appId')
-
-SERVICEPRINCIPALCLIENTSECRET=$(echo $SERVICEPRINCIPAL | jq -r '.password')
-TENANTID=$(echo $SERVICEPRINCIPAL | jq -r '.tenant')
-```
-
-For example:
-
-```bash
-bash disable-monitoring.sh --resource-id $AZUREARCCLUSTERRESOURCEID --kube-context $KUBECONTEXT --client-id $SERVICEPRINCIPALCLIENTID --client-secret $SERVICEPRINCIPALCLIENTSECRET --tenant-id $TENANTID
-```
-
-## Next steps
-
-If the Log Analytics workspace was created only to support monitoring the cluster and it's no longer needed, you have to manually delete it. If you are not familiar with how to delete a workspace, see [Delete an Azure Log Analytics workspace](../logs/delete-workspace.md).
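As a hedged sketch (not part of the original article), a workspace that's no longer needed can also be deleted with the Azure CLI; the resource group and workspace names below are placeholders:

```azurecli
az monitor log-analytics workspace delete \
    --resource-group <resource-group-name> \
    --workspace-name <workspace-name>
```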
azure-monitor Container Insights Optout Openshift V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-openshift-v3.md
- Title: How to stop monitoring your Azure Red Hat OpenShift v3 cluster | Microsoft Docs
-description: This article describes how you can stop monitoring of your Azure Red Hat OpenShift cluster with Container insights.
-- Previously updated : 05/24/2022---
-# How to stop monitoring your Azure Red Hat OpenShift v3 cluster
-
->[!IMPORTANT]
-> Azure Red Hat OpenShift 3.11 will be retired June 2022.
->
-> As of October 2020 you will no longer be able to create new 3.11 clusters.
-> Existing 3.11 clusters will continue to operate until June 2022 but will no longer be supported after that date.
->
-> Follow this guide to [create an Azure Red Hat OpenShift 4 cluster](../../openshift/tutorial-create-cluster.md).
-> If you have specific questions, [please contact us](mailto:aro-feedback@microsoft.com).
-
-After you enable monitoring of your Azure Red Hat OpenShift version 3.x cluster, you can stop monitoring the cluster with Container insights if you decide you no longer want to monitor it. This article shows how to accomplish this using the Azure Resource Manager template provided.
-
-## Azure Resource Manager template
-
-Two Azure Resource Manager templates are provided to support removing the solution resources consistently and repeatedly from your resource group. One is a JSON template that specifies the configuration to stop monitoring, and the other contains parameter values that you configure to specify the OpenShift cluster resource ID and the Azure region where the cluster is deployed.
-
-If you're unfamiliar with the concept of deploying resources by using a template, see:
-* [Deploy resources with Resource Manager templates and Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md)
-* [Deploy resources with Resource Manager templates and the Azure CLI](../../azure-resource-manager/templates/deploy-cli.md)
-
-If you choose to use the Azure CLI, you first need to install and use the CLI locally. You must be running the Azure CLI version 2.0.65 or later. To identify your version, run `az --version`. If you need to install or upgrade the Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-### Create template
-
-1. Copy and paste the following JSON syntax into your file:
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "aroResourceId": {
- "type": "string",
- "metadata": {
- "description": "ARO Cluster Resource ID"
- }
- },
- "aroResourceLocation": {
- "type": "string",
- "metadata": {
- "description": "Location of the aro cluster resource e.g. westcentralus"
- }
- }
- },
- "resources": [
- {
- "name": "[split(parameters('aroResourceId'),'/')[8]]",
- "type": "Microsoft.ContainerService/openShiftManagedClusters",
- "location": "[parameters('aroResourceLocation')]",
- "apiVersion": "2019-09-30-preview",
- "properties": {
- "mode": "Incremental",
- "id": "[parameters('aroResourceId')]",
- "monitorProfile": {
- "workspaceResourceID": null,
- "enabled": false
- }
- }
- }
- ]
- }
- ```
-
-2. Save this file as **OptOutTemplate.json** to a local folder.
-
-3. Paste the following JSON syntax into your file:
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "aroResourceId": {
- "value": "/subscriptions/<subscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.ContainerService/openShiftManagedClusters/<clusterName>"
- },
- "aroResourceLocation": {
- "value": "<azure region of the cluster e.g. westcentralus>"
- }
- }
- }
- ```
-
-4. Edit the values for **aroResourceId** and **aroResourceLocation** by using the values of the OpenShift cluster, which you can find on the **Properties** page for the selected cluster.
-
- ![Container properties page](media/container-insights-optout-openshift/cluster-properties-page.png)
-
-5. Save this file as **OptOutParam.json** to a local folder.
-
-6. You are ready to deploy this template.
-
-### Remove the solution using Azure CLI
-
-Execute the following command with Azure CLI on Linux to remove the solution and clean up the configuration on your cluster.
-
-```azurecli
-az login
-az account set --subscription "Subscription Name"
-az deployment group create --resource-group <ResourceGroupName> --template-file ./OptOutTemplate.json --parameters @./OptOutParam.json
-```
-
-The configuration change can take a few minutes to complete. When it's completed, a message similar to the following that includes the result is returned:
-
-```output
-ProvisioningState : Succeeded
-```
-
-### Remove the solution using PowerShell
--
-Execute the following PowerShell commands in the folder containing the template to remove the solution and clean up the configuration from your cluster.
-
-```powershell
-Connect-AzAccount
-Select-AzSubscription -SubscriptionName <yourSubscriptionName>
-New-AzResourceGroupDeployment -Name opt-out -ResourceGroupName <ResourceGroupName> -TemplateFile .\OptOutTemplate.json -TemplateParameterFile .\OptOutParam.json
-```
-
-The configuration change can take a few minutes to complete. When it's completed, a message similar to the following that includes the result is returned:
-
-```output
-ProvisioningState : Succeeded
-```
-
-## Next steps
-
-If the workspace was created only to support monitoring the cluster and it's no longer needed, you have to manually delete it. If you are not familiar with how to delete a workspace, see [Delete an Azure Log Analytics workspace](../logs/delete-workspace.md).
azure-monitor Container Insights Optout Openshift V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-openshift-v4.md
- Title: How to stop monitoring your Azure and Red Hat OpenShift v4 cluster | Microsoft Docs
-description: This article describes how you can stop monitoring of your Azure Red Hat OpenShift and Red Hat OpenShift version 4 cluster with Container insights.
- Previously updated : 05/24/2022----
-# How to stop monitoring your Azure and Red Hat OpenShift v4 cluster
-
-After you enable monitoring of your Azure Red Hat OpenShift and Red Hat OpenShift version 4.x cluster, you can stop monitoring the cluster with Container insights if you decide you no longer want to monitor it. This article shows how to accomplish this.
-
-## How to stop monitoring using Helm
-
-1. To first identify the Container insights helm chart release installed on your cluster, run the following helm command.
-
- ```
- helm list --all-namespaces
- ```
-
- The output will resemble the following:
-
- ```
- NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
- azmon-containers-release-1 default 3 2020-04-21 15:27:24.1201959 -0700 PDT deployed azuremonitor-containers-2.7.0 7.0.0-1
- ```
-
- *azmon-containers-release-1* represents the helm chart release for Container insights.
-
-2. To delete the chart release, run the following helm command.
-
- `helm delete <releaseName>`
-
- Example:
-
- `helm delete azmon-containers-release-1`
-
- This will remove the release from the cluster. You can verify by running the `helm list` command:
-
- ```
- NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
- ```
-
-The configuration change can take a few minutes to complete. Because Helm tracks your releases even after you've deleted them, you can audit a cluster's history, and even undelete a release with `helm rollback`.
-
-## Next steps
-
-If the Log Analytics workspace was created only to support monitoring the cluster and it's no longer needed, you have to manually delete it. If you are not familiar with how to delete a workspace, see [Delete an Azure Log Analytics workspace](../logs/delete-workspace.md).
azure-monitor Prometheus Remote Write Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write-active-directory.md
This step is only required if you didn't enable Azure Key Vault Provider for Sec
# Required only for AAD based auth volumes: - name: secrets-store-inline
- csi:
- driver: secrets-store.csi.k8s.io
- readOnly: true
- volumeAttributes:
- secretProviderClass: azure-kvname-user-msi
+ csi:
+ driver: secrets-store.csi.k8s.io
+ readOnly: true
+ volumeAttributes:
+ secretProviderClass: azure-kvname-user-msi
containers: - name: prom-remotewrite image: <CONTAINER-IMAGE-VERSION>
azure-monitor Data Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-platform.md
na Previously updated : 07/28/2022 Last updated : 08/09/2023
Azure resources generate a significant amount of monitoring data. Azure Monitor
Metrics in Azure Monitor are stored in a time-series database that's optimized for analyzing time-stamped data. Time-stamping makes metrics well suited for alerting and fast detection of issues. Metrics can tell you how your system is performing but typically must be combined with logs to identify the root cause of issues.
-Metrics are available for interactive analysis in the Azure portal with [Azure Metrics Explorer](essentials/metrics-getting-started.md). They can be added to an [Azure dashboard](app/tutorial-app-dashboards.md) for visualization in combination with other data and used for near-real-time [alerting](alerts/alerts-metric.md).
-
-To read more about Azure Monitor metrics, including their sources of data, see [Metrics in Azure Monitor](essentials/data-platform-metrics.md).
+Azure Monitor Metrics includes two types of metrics: native metrics and Prometheus metrics. For a comparison of the two and more details about Azure Monitor metrics, including their data sources, see [Metrics in Azure Monitor](essentials/data-platform-metrics.md).
### Logs
Once [Change Analysis is enabled](./change/change-analysis-enable.md), the `Micr
Read more about Change Analysis, including data sources in [Use Change Analysis in Azure Monitor](./change/change-analysis.md).
-## Compare Azure Monitor metrics and logs
-
-The following table compares metrics and logs in Azure Monitor.
-
-| Attribute | Metrics | Logs |
-|:|:|:|
-| Benefits | Lightweight and capable of near-real time scenarios such as alerting. Ideal for fast detection of issues. | Analyzed with rich query language. Ideal for deep analysis and identifying root cause. |
-| Data | Numerical values only | Text or numeric data |
-| Structure | Standard set of properties including sample time, resource being monitored, a numeric value. Some metrics include multiple dimensions for further definition. | Unique set of properties depending on the log type. |
-| Collection | Collected at regular intervals. | May be collected sporadically as events trigger a record to be created. |
-| Analyze in Azure portal | Metrics Explorer | Log Analytics |
-| Data sources include | Platform metrics collected from Azure resources<br>Applications monitored by Application Insights<br>Azure Monitor agent<br>Custom defined by application or API | Application and resource logs<br>Azure Monitor agent<br>Application requests and exceptions<br>Logs ingestion API<br>Azure Sentinel<br>Microsoft Defender for Cloud |
- ## Collect monitoring data Different [sources of data for Azure Monitor](data-sources.md) will write to either a Log Analytics workspace (Logs) or the Azure Monitor metrics database (Metrics) or both. Some sources will write directly to these data stores, while others may write to another location such as Azure storage and require some configuration to populate logs or metrics.
azure-monitor Collect Custom Metrics Guestos Resource Manager Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-resource-manager-vm.md
To deploy the ARM template, we use Azure PowerShell.
1. In the resource dropdown menu, select the VM that you created. If you didn't change the name in the template, it should be **SimpleWinVM2**.
-1. In the namespaces dropdown list, select **azure.vm.windows.guest**.
+1. In the namespaces dropdown list, select **azure.vm.windows.guestmetrics**.
1. In the metrics dropdown list, select **Memory\%Committed Bytes in Use**.
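The same guest metrics can also be queried outside the portal. The following Azure CLI call is a hedged sketch, not part of the original article; the VM resource ID is a placeholder, and the namespace and metric names mirror the portal selections in the steps above:

```azurecli
az monitor metrics list \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Compute/virtualMachines/SimpleWinVM2" \
    --namespace azure.vm.windows.guestmetrics \
    --metric "Memory\%Committed Bytes in Use" \
    --output table
```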
azure-monitor Data Collection Rule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-overview.md
description: Overview of data collection rules (DCRs) in Azure Monitor including
Previously updated : 07/15/2022 Last updated : 08/08/2023
The following resources describe different scenarios for creating DCRs. In some
|:|:|:| | Azure Monitor Agent | [Configure data collection for Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) | Use the Azure portal to create a DCR that specifies events and performance counters to collect from a machine with Azure Monitor Agent. Then apply that rule to one or more virtual machines. Azure Monitor Agent will be installed on any machines that don't currently have it. | | | [Use Azure Policy to install Azure Monitor Agent and associate with a DCR](../agents/azure-monitor-agent-manage.md#use-azure-policy) | Use Azure Policy to install Azure Monitor Agent and associate one or more DCRs with any virtual machines or virtual machine scale sets as they're created in your subscription.
-| Custom logs | [Configure custom logs by using the Azure portal](../logs/tutorial-logs-ingestion-portal.md)<br>[Configure custom logs by using Azure Resource Manager templates and the REST API](../logs/tutorial-logs-ingestion-api.md)<br>[Configure custom logs by using Azure Monitorint Agent](../agents/data-collection-text-log.md) | Send custom data by using a REST API or Agent. The API call connects to a data collection endpoint and specifies a DCR to use. The agent uses the DCR to configure the collection of data on a machine. The DCR specifies the target table and potentially includes a transformation that filters and modifies the data before it's stored in a Log Analytics workspace. |
+| Text logs | [Configure custom logs by using the Azure portal](../logs/tutorial-logs-ingestion-portal.md)<br>[Configure custom logs by using Azure Resource Manager templates and the REST API](../logs/tutorial-logs-ingestion-api.md)<br>[Configure text logs by using Azure Monitor Agent](../agents/data-collection-text-log.md) | Send custom data by using a REST API or agent. The API call connects to a data collection endpoint and specifies a DCR to use. The agent uses the DCR to configure the collection of data on a machine. The DCR specifies the target table and potentially includes a transformation that filters and modifies the data before it's stored in a Log Analytics workspace. |
| Azure Event Hubs | [Ingest events from Azure Event Hubs to Azure Monitor Logs](../logs/ingest-logs-event-hub.md)| Collect data from multiple sources to an event hub and ingest the data you need directly into tables in one or more Log Analytics workspaces. This is a highly scalable method of collecting data from a wide range of sources with minimum configuration.| | Workspace transformation | [Configure ingestion-time transformations by using the Azure portal](../logs/tutorial-workspace-transformations-portal.md)<br>[Configure ingestion-time transformations by using Azure Resource Manager templates and the REST API](../logs/tutorial-workspace-transformations-api.md) | Create a transformation for any supported table in a Log Analytics workspace. The transformation is defined in a DCR that's then associated with the workspace. It's applied to any data sent to that table from a legacy workload that doesn't use a DCR. |
azure-monitor Data Collection Rule Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-structure.md
description: Details on the structure of different kinds of data collection rule
Previously updated : 07/10/2022 Last updated : 08/08/2023 ms.reviwer: nikeist
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
Previously updated : 02/13/2022 Last updated : 08/13/2023
You can configure diagnostic settings in the Azure portal either from the Azure
![Screenshot that shows the Settings section in the Azure Monitor menu with Diagnostic settings highlighted.](media/diagnostic-settings/menu-monitor.png)
- - For the activity log, select **Activity log** on the **Azure Monitor** menu and then select **Diagnostic settings**. Make sure you disable any legacy configuration for the activity log. For instructions, see [Disable existing settings](./activity-log.md#legacy-collection-methods).
+ - For the activity log, select **Activity log** on the **Azure Monitor** menu and then select **Export Activity Logs**. Make sure you disable any legacy configuration for the activity log. For instructions, see [Disable existing settings](./activity-log.md#legacy-collection-methods).
- ![Screenshot that shows the Azure Monitor menu with Activity log selected and Diagnostic settings highlighted in the Monitor-Activity log menu bar.](media/diagnostic-settings/menu-activity-log.png)
+ ![Screenshot that shows the Azure Monitor menu with Activity log selected and Export activity logs highlighted in the Monitor-Activity log menu bar.](media/diagnostic-settings/menu-activity-log.png)
1. If no settings exist on the resource you've selected, you're prompted to create a setting. Select **Add diagnostic setting**.
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-schema.md
The schema for resource logs varies depending on the resource and log category.
| Azure Kubernetes Service |[Azure Kubernetes Service logging](../../aks/monitor-aks-reference.md#resource-logs) | | Azure Load Balancer |[Log Analytics for Azure Load Balancer](../../load-balancer/monitor-load-balancer.md) | | Azure Load Testing |[Azure Load Testing logs](../../load-testing/monitor-load-testing-reference.md#resource-logs) |
-| Azure Logic Apps |[Logic Apps B2B custom tracking schema](../../logic-apps/logic-apps-track-integration-account-custom-tracking-schema.md) |
+| Azure Logic Apps |[Logic Apps B2B custom tracking schema](../../logic-apps/tracking-schemas-as2-x12-custom.md) |
| Azure Machine Learning | [Diagnostic logging in Azure Machine Learning](../../machine-learning/monitor-resource-reference.md) | | Azure Media Services | [Media Services monitoring schemas](/azure/media-services/latest/monitoring/monitor-media-services#schemas) | | Network security groups |[Log Analytics for network security groups (NSGs)](../../virtual-network/virtual-network-nsg-manage-log.md) |
azure-monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs.md
Title: Azure resource logs description: Learn how to stream Azure resource logs to a Log Analytics workspace in Azure Monitor.- - Previously updated : 07/26/2023- Last updated : 08/08/2023
Most Azure resources write data to the workspace in either **Azure diagnostics**
All Azure services will eventually use the resource-specific mode. As part of this transition, some resources allow you to select a mode in the diagnostic setting. Specify resource-specific mode for any new diagnostic settings because this mode makes the data easier to manage. It also might help you avoid complex migrations later.
- ![Screenshot that shows the Diagnostics settings mode selector.](media/resource-logs/diagnostic-settings-mode-selector.png)
+ > [!NOTE] > For an example that sets the collection mode by using an Azure Resource Manager template, see [Resource Manager template samples for diagnostic settings in Azure Monitor](./resource-manager-diagnostic-settings.md#diagnostic-setting-for-recovery-services-vault).
azure-monitor Resource Manager Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-manager-diagnostic-settings.md
Title: Resource Manager template samples for diagnostic settings
description: Sample Azure Resource Manager templates to apply Azure Monitor diagnostic settings to an Azure resource. -- Previously updated : 06/13/2022++ Last updated : 08/09/2023
azure-monitor Stream Monitoring Data Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/stream-monitoring-data-event-hubs.md
Title: Stream Azure monitoring data to an event hub and external partners description: Learn how to stream your Azure monitoring data to an event hub to get the data into a partner SIEM or analytics tool. --++ Previously updated : 07/15/2020 Last updated : 08/09/2023 # Stream Azure monitoring data to an event hub or external partner
-Azure Monitor provides full stack monitoring for applications and services in Azure, in other clouds, and on-premises. In most cases, the most effective method to stream monitoring data to external tools is by using [Azure Event Hubs](../../event-hubs/index.yml). This article provides a brief description on how to stream data and then lists some of the partners where you can send it. Some partners have special integration with Azure Monitor and might be hosted on Azure.
+In most cases, the most effective method to stream data from Azure Monitor to external tools is by using [Azure Event Hubs](../../event-hubs/index.yml). This article provides a brief description on how to stream data and then lists some of the partners where you can send it. Some partners have special integration with Azure Monitor and might be hosted on Azure.
## Create an Event Hubs namespace
Before you configure streaming for any data source, you need to [create an Event
* Outbound port 5671 and 5672 must typically be opened on the computer or virtual network consuming data from the event hub. ## Monitoring data available
-[Sources of monitoring data for Azure Monitor](../data-sources.md) describes the data tiers for Azure applications and the kinds of data available for each. The following table lists each of these tiers and a description of how that data can be streamed to an event hub. Follow the links provided for further detail.
+[Sources of monitoring data for Azure Monitor](../data-sources.md) describes the data tiers for Azure applications and the kinds of data available for each. The following table provides a description of how different types of data can be streamed to an event hub. Follow the links provided for further detail.
| Tier | Data | Method | |:|:|:| | [Azure tenant](../data-sources.md#azure-tenant) | Azure Active Directory audit logs | Configure a tenant diagnostic setting on your Azure Active Directory tenant. For more information, see [Tutorial: Stream Azure Active Directory logs to an Azure event hub](../../active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md). |
-| [Azure subscription](../data-sources.md#azure-subscription) | Azure activity log | Create a log profile to export activity log events to event hubs. For more information, see [Stream Azure platform logs to Azure event hubs](../essentials/resource-logs.md#send-to-azure-event-hubs). |
-| [Azure resources](../data-sources.md#azure-resources) | Platform metrics<br> Resource logs |Both types of data are sent to an event hub by using a resource diagnostic setting. For more information, see [Stream Azure resource logs to an event hub](../essentials/resource-logs.md#send-to-azure-event-hubs). |
+| [Azure subscription](../data-sources.md#azure-subscription) | Azure activity log | [Create a diagnostic setting](diagnostic-settings.md#create-diagnostic-settings) to export activity log events to event hubs. For more information, see [Stream Azure platform logs to Azure event hubs](../essentials/resource-logs.md#send-to-azure-event-hubs). |
+| [Azure resources](../data-sources.md#azure-resources) | Platform metrics<br> Resource logs | [Create a diagnostic setting](diagnostic-settings.md#create-diagnostic-settings) to export resource logs and metrics to event hubs. For more information, see [Stream Azure platform logs to Azure event hubs](../essentials/resource-logs.md#send-to-azure-event-hubs). |
| [Operating system (guest)](../data-sources.md#operating-system-guest) | Azure virtual machines | Install the [Azure Diagnostics extension](../agents/diagnostics-extension-overview.md) on Windows and Linux virtual machines in Azure. For more information, see [Streaming Azure Diagnostics data in the hot path by using event hubs](../agents/diagnostics-extension-stream-event-hubs.md) for details on Windows VMs. See [Use Linux Diagnostic extension to monitor metrics and logs](../../virtual-machines/extensions/diagnostics-linux.md#protected-settings) for details on Linux VMs. | | [Application code](../data-sources.md#application-code) | Application Insights | Use diagnostic settings to stream to event hubs. This tier is only available with workspace-based Application Insights resources. For help with setting up workspace-based Application Insights resources, see [Workspace-based Application Insights resources](../app/create-workspace-resource.md#workspace-based-application-insights-resources) and [Migrate to workspace-based Application Insights resources](../app/convert-classic-resource.md#migrate-to-workspace-based-application-insights-resources).|
The following JSON is an example of log data sent to an event hub:
## Manual streaming with a logic app
-For data that you can't directly stream to an event hub, you can write to Azure Storage Then you can use a time-triggered logic app that [pulls data from Azure Blob Storage](../../connectors/connectors-create-api-azureblobstorage.md#add-action) and [pushes it as a message to the event hub](../../connectors/connectors-create-api-azure-event-hubs.md#add-action).
+For data that you can't directly stream to an event hub, you can write it to Azure Storage and then use a time-triggered logic app that [pulls data from Azure Blob Storage](../../connectors/connectors-create-api-azureblobstorage.md#add-action) and [pushes it as a message to the event hub](../../connectors/connectors-create-api-azure-event-hubs.md#add-action).
-## Query events from your Event Hubs
-
-Use the process data query function to see the contents of monitoring events sent to your event hub.
-
-Follow the steps below to query your event data using the Azure portal:
-1. Select **Process data** from your event hub.
-1. Find the tile entitled **Enable real time insights from events** and select **Start**.
-1. Select **Refresh** in the **Input preview** section of the page to fetch events from your event hub.
- ## Partner tools with Azure Monitor integration
Routing your monitoring data to an event hub with Azure Monitor enables you to e
Other partners might also be available. For a more complete list of all Azure Monitor partners and their capabilities, see [Azure Monitor partner integrations](../partners.md). ## Next steps
-* [Archive the activity log to a storage account](./activity-log.md#legacy-collection-methods)
* [Read the overview of the Azure activity log](../essentials/platform-logs-overview.md) * [Set up an alert based on an activity log event](../alerts/alerts-log-webhook.md)
azure-monitor Tutorial Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/tutorial-metrics.md
description: Learn how to analyze metrics for an Azure resource by using metrics
Previously updated : 11/08/2021 Last updated : 08/08/2023
azure-monitor Tutorial Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/tutorial-resource-logs.md
description: Learn how to configure diagnostic settings to send resource logs fr
Previously updated : 11/08/2021 Last updated : 08/08/2023
azure-monitor Code Optimizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/code-optimizations.md
With Code Optimizations, you can:
## Demo video
-<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/eu1P_vLTZO0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
+> [!VIDEO https://www.youtube-nocookie.com/embed/eu1P_vLTZO0]
## Requirements for using Code Optimizations
azure-monitor Resource Manager Sql Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/resource-manager-sql-insights.md
Previously updated : 03/25/2021 Last updated : 08/09/2023 # Resource Manager template samples for SQL Insights (preview)
The following sample creates an alert rule that will cover the SQL resources wit
The parameter file has values from one of the alert templates we provide in SQL Insights. You can modify it to alert on other data we collect for SQL. The template does not specify an action group for the alert rule.
-#### Template file
+### Template file
View the [template file on GitHub](https://github.com/microsoft/Application-Insights-Workbooks/blob/master/Workbooks/Workloads/Alerts/log-metric-noag.armtemplate).
azure-monitor Aiops Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/aiops-machine-learning.md
This table describes each step and provides high-level guidance and some example
|Step|Description|Query data in Azure Monitor Logs|Export data| |-|-|-|-| |**Explore data**| Examine and understand the data you've collected. |The simplest way to explore your data is using [Log Analytics](../logs/log-analytics-tutorial.md), which provides a rich set of tools for exploring and visualizing data in the Azure portal. You can also [analyze data in Azure Monitor Logs using a notebook](../logs/notebooks-azure-monitor-logs.md).|To analyze logs outside of Azure Monitor, [export data out of your Log Analytics workspace](../logs/logs-data-export.md) and set up the environment in the service you choose.<br>For an example of how to explore logs outside of Azure Monitor, see [Analyze data exported from Log Analytics using Synapse](https://techcommunity.microsoft.com/t5/azure-observability-blog/how-to-analyze-data-exported-from-log-analytics-data-using/ba-p/2547888).|
-|**Build and training a machine learning model**|Model training is an iterative process. Researchers or data scientists develop a model by fetching and cleaning the training data, engineer features, trying various models and tuning parameters, and repeating this cycle until the model is accurate and robust.|For small to medium-sized datasets, you typically use single-node machine learning libraries, like [Scikit Learn](https://scikit-learn.org/stable/).<br> For an example of how to train a machine learning model on data in Azure Monitor Logs using the Scikit Learn library, see this [sample notebook: Detect anomalies in Azure Monitor Logs using machine learning techniques](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-query/samples/notebooks/sample_machine_learning_sklearn.ipynb).|For large datasets, you typically use big data machine learning libraries, like [SynapseML](/azure/synapse-analytics/machine-learning/synapse-machine-learning-library).<br>For examples of how to train a machine learning model on data you export out of Azure Monitor Logs, see [SynapseML examples](https://microsoft.github.io/SynapseML/docs/about/#examples).|
-|**Deploy and score a model**|Scoring is the process of applying a machine learning model on new data to get predictions. Scoring usually needs to be done at scale with minimal latency.|To query new data in Azure Monitor Logs, use [Azure Monitor Query client library](/python/api/overview/azure/monitor-query-readme).<br>For an example of how to score data using open source tools, see this [sample notebook: Detect anomalies in Azure Monitor Logs using machine learning techniques](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-query/samples/notebooks/sample_machine_learning_sklearn.ipynb).|For examples of how to score new data you export out of Azure Monitor Logs, see [SynapseML examples](https://microsoft.github.io/SynapseML/docs/about/#examples).|
+|**Build and train a machine learning model**|Model training is an iterative process. Researchers or data scientists develop a model by fetching and cleaning the training data, engineering features, trying various models, tuning parameters, and repeating this cycle until the model is accurate and robust.|For small to medium-sized datasets, you typically use single-node machine learning libraries, like [Scikit Learn](https://scikit-learn.org/stable/).<br> For an example of how to train a machine learning model on data in Azure Monitor Logs using the Scikit Learn library, see this [sample notebook: Detect anomalies in Azure Monitor Logs using machine learning techniques](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-query/samples/notebooks/sample_machine_learning_sklearn.ipynb).|For large datasets, you typically use big data machine learning libraries, like [SynapseML](/azure/synapse-analytics/machine-learning/synapse-machine-learning-library).|
+|**Deploy and score a model**|Scoring is the process of applying a machine learning model to new data to get predictions. Scoring usually needs to be done at scale with minimal latency.|To query new data in Azure Monitor Logs, use the [Azure Monitor Query client library](/python/api/overview/azure/monitor-query-readme).<br>For an example of how to score data using open source tools, see this [sample notebook: Detect anomalies in Azure Monitor Logs using machine learning techniques](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-query/samples/notebooks/sample_machine_learning_sklearn.ipynb).| |
|**Run your pipeline on schedule**| Automate your pipeline to retrain your model regularly on current data.| Schedule your machine learning pipeline with [Azure Synapse Analytics](/azure/synapse-analytics/synapse-notebook-activity) or [Azure Machine Learning](../../machine-learning/how-to-schedule-pipeline-job.md).|See the examples in the *Query data in Azure Monitor Logs* column. | Ingesting scored results to a Log Analytics workspace lets you use the data to get advanced insights, and to create alerts and dashboards. For an example of how to ingest scored results using [Azure Monitor Ingestion client library](/python/api/overview/azure/monitor-ingestion-readme), see [Ingest anomalies into a custom table in your Log Analytics workspace](../logs/notebooks-azure-monitor-logs.md#4-ingest-analyzed-data-into-a-custom-table-in-your-log-analytics-workspace-optional).
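Where the table links to the Python-based Azure Monitor Query client library, you can also pull data for exploration or scoring with the Az PowerShell module. The following is a minimal sketch, not the approach used by the linked sample notebooks; the workspace ID and KQL query are placeholders to replace with your own.

```powershell
# Minimal sketch: pull recent data from a Log Analytics workspace for exploration or scoring.
# Placeholders throughout; requires the Az.OperationalInsights module and an authenticated session (Connect-AzAccount).
$workspaceId = '<log-analytics-workspace-guid>'
$query = @'
Perf
| where TimeGenerated > ago(1h)
| summarize avg(CounterValue) by bin(TimeGenerated, 5m), Computer, CounterName
'@

$response = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query

# Each row comes back as a PSObject; hand the rows to your own model or export them.
$response.Results | Select-Object -First 10
```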
azure-monitor Azure Ad Authentication Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-ad-authentication-logs.md
These options might be cumbersome and pose a risk because it's difficult to mana
To enable Azure AD integration for Azure Monitor Logs and remove reliance on these shared secrets:
-1. [Migrate to Azure Monitor Agent](../agents/azure-monitor-agent-migration.md) from the Log Analytics agents. Azure Monitor Agent doesn't require any keys but instead [requires a system-managed identity](../agents/azure-monitor-agent-overview.md#security).
-1. [Disable local authentication for Log Analytics workspaces](#disable-local-authentication-for-log-analytics).
+1. [Disable local authentication for Log Analytics workspaces](#disable-local-authentication-for-log-analytics-workspaces).
1. Ensure that only authenticated telemetry is ingested in your Application Insights resources with [Azure AD authentication for Application Insights (preview)](../app/azure-ad-authentication.md).
-## Disable local authentication for Log Analytics
+## Prerequisites
+
+- [Migrate to Azure Monitor Agent](../agents/azure-monitor-agent-migration.md) from the Log Analytics agents. Azure Monitor Agent doesn't require any keys but instead [requires a system-managed identity](../agents/azure-monitor-agent-overview.md#security).
+- [Migrate to the Log Ingestion API](./custom-logs-migrate.md) from the HTTP Data Collector API to send data to Azure Monitor Logs.
+
+## Permissions required
+
+To disable local authentication for a Log Analytics workspace, you need `microsoft.operationalinsights/workspaces/write` permissions on the workspace, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example.
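As an illustration, the following PowerShell sketch assigns the Log Analytics Contributor role scoped to a single workspace. The object ID and resource path are placeholders; any role that includes the write permission above works equally well.

```powershell
# Sketch only: grant a principal Log Analytics Contributor on one workspace (placeholder IDs and names).
New-AzRoleAssignment `
    -ObjectId '<user-or-group-object-id>' `
    -RoleDefinitionName 'Log Analytics Contributor' `
    -Scope '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>'
```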
+
+## Disable local authentication for Log Analytics workspaces
-After you've removed your reliance on the Log Analytics agent, you can disable local authentication for Log Analytics workspaces. Then you can ingest and query telemetry authenticated exclusively by Azure AD.
Disabling local authentication might limit the availability of some functionality, specifically:
Disabling local authentication might limit the availability of some functionalit
You can disable local authentication by using Azure Policy. Or you can disable it programmatically through an Azure Resource Manager template, PowerShell, or the Azure CLI.
-### Azure Policy
+### [Azure Policy](#tab/azure-policy)
Azure Policy for `DisableLocalAuth` won't allow you to create a new Log Analytics workspace unless this property is set to `true`. The policy name is `Log Analytics Workspaces should block non-Azure Active Directory based ingestion`. To apply this policy definition to your subscription, [create a new policy assignment and assign the policy](../../governance/policy/assign-policy-portal.md).
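If you prefer scripting over the portal, here's a hedged PowerShell sketch that assigns the built-in policy definition at subscription scope. The assignment name and subscription ID are placeholders, and the property that exposes the display name can vary between Az.Resources versions.

```powershell
# Sketch: assign the built-in policy at subscription scope (placeholder values).
# Depending on your Az.Resources version, the display name may surface as $_.Properties.DisplayName or $_.DisplayName.
$definition = Get-AzPolicyDefinition -Builtin |
    Where-Object { $_.Properties.DisplayName -eq 'Log Analytics Workspaces should block non-Azure Active Directory based ingestion' }

New-AzPolicyAssignment `
    -Name 'block-non-aad-la-ingestion' `
    -Scope '/subscriptions/<subscription-id>' `
    -PolicyDefinition $definition
```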
The policy template definition:
} ```
-### Azure Resource Manager
+### [Azure Resource Manager](#tab/azure-resource-manager)
The `DisableLocalAuth` property is used to disable any local authentication on your Log Analytics workspace. When set to `true`, this property enforces that Azure AD authentication must be used for all access.
Use the following Azure Resource Manager template to disable local authenticatio
```
-### Azure CLI
+### [Azure CLI](#tab/azure-cli)
The `DisableLocalAuth` property is used to disable any local authentication on your Log Analytics workspace. When set to `true`, this property enforces that Azure AD authentication must be used for all access.
Use the following Azure CLI commands to disable local authentication:
az resource update --ids "/subscriptions/[Your subscription ID]/resourcegroups/[Your resource group]/providers/microsoft.operationalinsights/workspaces/[Your workspace name]" --api-version "2021-06-01" --set properties.features.disableLocalAuth=True ```
-### PowerShell
+### [PowerShell](#tab/powershell)
The `DisableLocalAuth` property is used to disable any local authentication on your Log Analytics workspace. When set to `true`, this property enforces that Azure AD authentication must be used for all access.
Use the following PowerShell commands to disable local authentication:
$workspace | Set-AzResource -Force ``` ++ ## Next steps See [Azure AD authentication for Application Insights (preview)](../app/azure-ad-authentication.md).
azure-monitor Azure Cli Log Analytics Workspace Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-cli-log-analytics-workspace-sample.md
Title: Managing Azure Monitor Logs in Azure CLI description: Learn how to use Azure CLI commands to manage a workspace in Azure Monitor Logs, including how workspaces interact with other Azure services. -- Previously updated : 08/16/2021 Last updated : 08/09/2023
Use the Azure CLI commands described here to manage your log analytics workspace in Azure Monitor. - [!INCLUDE [Prepare your Azure CLI environment](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Create a workspace for Monitor Logs
In the delete command, add the `--force` parameter to delete the workspace immed
## Next steps
-[Overview of Log Analytics in Azure Monitor](log-analytics-overview.md)
+[Overview of Log Analytics in Azure Monitor](log-analytics-overview.md)
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
description: Cost details for data stored in a Log Analytics workspace in Azure
Last updated 06/23/2023
-ms.reviwer: dalek git
+ms.reviwer: dalek git
-
+ # Azure Monitor Logs cost calculations and options The most significant charges for most Azure Monitor implementations will typically be ingestion and retention of data in your Log Analytics workspaces. Several features in Azure Monitor don't have a direct cost but add to the workspace data that's collected. This article describes how data charges are calculated for your Log Analytics workspaces and Application Insights resources and the different configuration options that affect your costs.
Billing for the commitment tiers is done per workspace on a daily basis. If the
Azure Commitment Discounts, such as discounts received from [Microsoft Enterprise Agreements](https://www.microsoft.com/licensing/licensing-programs/enterprise), are applied to Azure Monitor Logs commitment-tier pricing just as they are to pay-as-you-go pricing. Discounts are applied whether the usage is being billed per workspace or per dedicated cluster. > [!TIP]
-> The **Usage and estimated costs** menu item for each Log Analytics workspace shows an estimate of your monthly charges at each commitment level. Review this information periodically to determine if you can reduce your charges by moving to another tier. For information on this view, see [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs).
+> The **Usage and estimated costs** menu item for each Log Analytics workspace shows an estimate of what your data ingestion charges would be at each commitment level to help you choose the optimal commitment tier for your data ingestion patterns. Review this information periodically to determine if you can reduce your charges by moving to another tier. For information on this view, see [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs). To review your actual charges, use [Azure Cost Management + Billing](../usage-estimated-costs.md#azure-cost-management--billing).
## Dedicated clusters
This query isn't an exact replication of how usage is calculated, but it provide
- See [Analyze usage in Log Analytics workspace](analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce your amount of data collected. - See [Set daily cap on Log Analytics workspace](daily-cap.md) to control your costs by configuring a maximum volume that might be ingested in a workspace each day. - See [Azure Monitor best practices - Cost management](../best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges.+
azure-monitor Create Pipeline Datacollector Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/create-pipeline-datacollector-api.md
- Title: Use Data Collector API to create a data pipeline
-description: You can use the Azure Monitor HTTP Data Collector API to add POST JSON data to the Log Analytics workspace from any client that can call the REST API. This article describes how to upload data stored in files in an automated way.
---- Previously updated : 08/09/2018--
-# Create a data pipeline with the Data Collector API
-
-The [Azure Monitor Data Collector API](data-collector-api.md) allows you to import any custom log data into a Log Analytics workspace in Azure Monitor. The only requirements are that the data be JSON-formatted and split into 30 MB or less segments. This is a completely flexible mechanism that can be plugged into in many ways: from data being sent directly from your application, to one-off adhoc uploads. This article will outline some starting points for a common scenario: the need to upload data stored in files on a regular, automated basis. While the pipeline presented here will not be the most performant or otherwise optimized, it is intended to serve as a starting point towards building a production pipeline of your own.
--
-## Example problem
-For the remainder of this article, we will examine page view data in Application Insights. In our hypothetical scenario, we want to correlate geographical information collected by default by the Application Insights SDK to custom data containing the population of every country/region in the world, with the goal of identifying where we should be spending the most marketing dollars.
-
-We use a public data source such as the [UN World Population Prospects](https://esa.un.org/unpd/wpp/) for this purpose. The data will have the following simple schema:
-
-![Example simple schema](./media/create-pipeline-datacollector-api/example-simple-schema-01.png)
-
-In our example, we assume that we will upload a new file with the latest year's data as soon as it becomes available.
-
-## General design
-We are using a classic ETL-type logic to design our pipeline. The architecture will look as follows:
-
-![Data collection pipeline architecture](./media/create-pipeline-datacollector-api/data-pipeline-dataflow-architecture.png)
-
-This article will not cover how to create data or [upload it to an Azure Blob Storage account](../../storage/blobs/blob-upload-function-trigger.md). Rather, we pick the flow up as soon as a new file is uploaded to the blob. From here:
-
-1. A process will detect that new data has been uploaded. Our example uses an [logic app workflow](../../logic-apps/logic-apps-overview.md), which has available a trigger to detect new data being uploaded to a blob.
-
-2. A processor reads this new data and converts it to JSON, the format required by Azure Monitor In this example, we use an [Azure Function](../../azure-functions/functions-overview.md) as a lightweight, cost-efficient way of executing our processing code. The function is kicked off by the same logic app workflow that we used to detect the new data.
-
-3. Finally, once the JSON object is available, it is sent to Azure Monitor. The same logic app workflow sends the data to Azure Monitor using the built in Log Analytics Data Collector activity.
-
-While the detailed setup of the blob storage, logic app workflow, or Azure Function is not outlined in this article, detailed instructions are available on the specific products' pages.
-
-To monitor this pipeline, we use Application Insights to [monitor our Azure Function](../../azure-functions/functions-monitoring.md), and Azure Monitor to [monitor our logic app workflow](../../logic-apps/monitor-workflows-collect-diagnostic-data.md).
-
-## Setting up the pipeline
-To set the pipeline, first make sure you have your blob container created and configured. Likewise, make sure that the Log Analytics workspace where you'd like to send the data to is created.
-
-## Ingesting JSON data
-Ingesting JSON data is trivial with Azure Logic Apps, and since no transformation needs to take place, we can encase the entire pipeline in a single logic app workflow. Once both the blob container and the Log Analytics workspace have been configured, create a new logic app workflow and configure it as follows:
-
-![Logic apps workflow example](./media/create-pipeline-datacollector-api/logic-apps-workflow-example-01.png)
-
-Save your logic app workflow and proceed to test it.
-
-## Ingesting XML, CSV, or other formats of data
-Azure Logic Apps today does not have built-in capabilities to easily transform XML, CSV, or other types into JSON format. Therefore, we need to use another means to complete this transformation. For this article, we use the serverless compute capabilities of Azure Functions as a very lightweight and cost-friendly way of doing so.
-
-In this example, we parse a CSV file, but any other file type can be similarly processed. Simply modify the deserializing portion of the Azure Function to reflect the correct logic for your specific data type.
-
-1. Create a new Azure Function, using the Function runtime v1 and consumption-based when prompted. Select the **HTTP trigger** template targeted at C# as a starting point that configures your bindings as we require.
-2. From the **View Files** tab on the right pane, create a new file called **project.json** and paste the following code from NuGet packages that we are using:
-
- ![Azure Functions example project](./media/create-pipeline-datacollector-api/functions-example-project-01.png)
-
- ```json
- {
- "frameworks": {
- "net46":{
- "dependencies": {
- "CsvHelper": "7.1.1",
- "Newtonsoft.Json": "11.0.2"
- }
- }
- }
- }
- ```
-
-3. Switch to **run.csx** from the right pane, and replace the default code with the following.
-
- >[!NOTE]
- >For your project, you have to replace the record model (the "PopulationRecord" class) with your own data schema.
- >
-
- ```
- using System.Net;
- using Newtonsoft.Json;
- using CsvHelper;
-
- class PopulationRecord
- {
- public String Location { get; set; }
- public int Time { get; set; }
- public long Population { get; set; }
- }
-
- public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
- {
- string filePath = await req.Content.ReadAsStringAsync(); //get the CSV URI being passed from logic app workflow
- string response = "";
-
- //get a stream from blob
- WebClient wc = new WebClient();
- Stream s = wc.OpenRead(filePath);
-
- //read the stream
- using (var sr = new StreamReader(s))
- {
- var csvReader = new CsvReader(sr);
-
- var records = csvReader.GetRecords<PopulationRecord>(); //deserialize the CSV stream as an IEnumerable
-
- response = JsonConvert.SerializeObject(records); //serialize the IEnumerable back into JSON
- }
-
- return response == null
- ? req.CreateResponse(HttpStatusCode.BadRequest, "There was an issue getting data")
- : req.CreateResponse(HttpStatusCode.OK, response);
- }
- ```
-
-4. Save your function.
-5. Test the function to make sure the code is working correctly. Switch to the **Test** tab in the right pane, configuring the test as follows. Place a link to a blob with sample data into the **Request body** textbox. After clicking **Run**, you should see JSON output in the **Output** box:
-
- ![Function Apps test code](./media/create-pipeline-datacollector-api/functions-test-01.png)
-
-Now we need to go back and modify the logic app we started building earlier to include the data ingested and converted to JSON format. Using View Designer, configure as follows and then save your logic app:
-
-![Azure Logic Apps workflow complete example](./media/create-pipeline-datacollector-api/logic-apps-workflow-example-02.png)
-
-## Testing the pipeline
-Now you can upload a new file to the blob configured earlier and have it monitored by your logic app workflow. Soon, you should see a new instance of the logic app workflow kick off, call out to your Azure Function, and then successfully send the data to Azure Monitor.
-
->[!NOTE]
->It can take up to 30 minutes for the data to appear in Azure Monitor the first time you send a new data type.
--
-## Correlating with other data in Log Analytics and Application Insights
-To complete our goal of correlating Application Insights page view data with the population data we ingested from our custom data source, run the following query from either your Application Insights Analytics window or Log Analytics workspace:
-
-``` KQL
-app("fabrikamprod").pageViews
-| summarize numUsers = count() by client_CountryOrRegion
-| join kind=leftouter (
- workspace("customdatademo").Population_CL
-) on $left.client_CountryOrRegion == $right.Location_s
-| project client_CountryOrRegion, numUsers, Population_d
-```
-
-The output should show the two data sources now joined.
-
-![Correlating disjoined data in a search result example](./media/create-pipeline-datacollector-api/correlating-disjoined-data-example-01.png)
-
-## Suggested improvements for a production pipeline
-This article presented a working prototype, the logic behind which can be applied towards a true production-quality solution. For such a production-quality solution, the following improvements are recommended:
-
-* Add error handling and retry logic in your logic app workflow and Function.
-* Add logic to ensure that the 30MB/single Log Analytics Ingestion API call limit is not exceeded. Split the data into smaller segments if needed.
-* Set up a clean-up policy on your blob storage. Once successfully sent to the Log Analytics workspace, unless you'd like to keep the raw data available for archival purposes, there is no reason to continue storing it.
-* Verify monitoring is enabled across the full pipeline, adding trace points and alerts as appropriate.
-* Leverage source control to manage the code for your function and logic app workflow.
-* Ensure that a proper change management policy is followed, such that if the schema changes, the function and logic app are modified accordingly.
-* If you are uploading multiple different data types, segregate them into individual folders within your blob container, and create logic to fan the logic out based on the data type.
--
-## Next steps
-Learn more about the [Data Collector API](data-collector-api.md) to write data to Log Analytics workspace from any REST API client.
azure-monitor Data Collector Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collector-api.md
description: You can use the Azure Monitor HTTP Data Collector API to add POST J
Previously updated : 07/14/2022 Last updated : 08/08/2023
azure-monitor Log Analytics Workspace Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-health.md
Azure Service Health monitors:
- [Resource health](../../service-health/resource-health-overview.md): information about the health of your individual cloud resources, such as a specific Log Analytics workspace. - [Service health](../../service-health/service-health-overview.md): information about the health of the Azure services and regions you're using, which might affect your Log Analytics workspace, including communications about outages, planned maintenance activities, and other health advisories.
+## Permissions required
+
+- To view Log Analytics workspace health, you need `*/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](./manage-access.md#log-analytics-reader), for example.
+- To set up health status alerts, you need `Microsoft.Insights/ActivityLogAlerts/Write` permissions to the Log Analytics workspace, as provided by the [Monitoring Contributor built-in role](../roles-permissions-security.md#monitoring-contributor), for example.
+ ## View Log Analytics workspace health and set up health status alerts When Azure Service Health detects [average latency](../logs/data-ingestion-time.md#average-latency) in your Log Analytics workspace, the workspace resource health status is **Available**.
To view Log Analytics workspace health metrics:
To investigate Log Analytics workspace health issues: - Use [Log Analytics Workspace Insights](../logs/log-analytics-workspace-insights-overview.md), which provides a unified view of your workspace usage, performance, health, agent, queries, and change log.-- Query the data in your Log Analytics workspace to [understand which factors are contributing greater than expected latency in your workspace](../logs/data-ingestion-time.md).
+- [Query](./queries.md) the data in your Log Analytics workspace to [understand which factors are contributing to greater-than-expected latency in your workspace](../logs/data-ingestion-time.md).
- [Use the `_LogOperation` function to view and set up alerts about operational issues](../logs/monitor-workspace.md) logged in your Log Analytics workspace.
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md
# Manage access to Log Analytics workspaces
- The data in a Log Analytics workspace that you can access is determined by a combination of the following factors:
+The factors that determine which data you can access in a Log Analytics workspace are:
- The settings on the workspace itself.-- The access to resources sending data to the workspace.
+- Your access permissions to resources that send data to the workspace.
- The method used to access the workspace.
-This article describes how access is managed and how to perform any required configuration.
+This article describes how to manage access to data in a Log Analytics workspace.
## Overview
The Log Analytics Contributor role includes the following Azure actions:
### Resource permissions
-When users query logs from a workspace by using [resource-context access](#access-mode), they'll have the following permissions on the resource:
+To read data from or send data to a workspace in the [resource context](#access-mode), you need these permissions on the resource:
| Permission | Description | | - | -- |
In addition to using the built-in roles for a Log Analytics workspace, you can c
## Set table-level read access
-Table-level access allows you to let specific people read data only from a specific set of tables. It applies both for workspace-context and resource-context. There are two methods to define table-level permissions:
-* By assigning permissions to the table sub-resource under the workspace resource - this is the recommended method that is described in this section. This method is currently in **preview**.
-* By assigning special actions that contain table name to the workspace resource - this is the legacy method that is described in the next section. It has some limitations around custom log tables.
-
-Table-level RBAC is applied during query execution. It does not apply to metadata retrieval calls. For that reason, tables will appear in the list of tables even if they are not available to the user.
+Table-level access settings let you grant specific users or groups read-only permission to data from certain tables. Users with table-level read access can read data from the specified tables in both the workspace and the resource context.
> [!NOTE]
-> The recommended table-level access method described here does not apply during preview to Microsoft Sentinel Detection Rules. These rules might have access to more tables than intended.
-
-In order to apply table-level RBAC for a user, two assignments shall be made:
-
-1. Assign the user the ability to read the workspace details and to run a query without granting the ability to run a query on tables. This is done by assigning a special custom role on the workspace that has only the following actions:
- - `Microsoft.OperationalInsights/workspaces/read`
- - `Microsoft.OperationalInsights/workspaces/query/read`
- - `Microsoft.OperationalInsights/workspaces/analytics/query/action`
- - `Microsoft.OperationalInsights/workspaces/search/action`
-
-2. Assign the user a read permissions on the specific table sub-resource. Any role that has */read will be sufficient such as **Reader** role or **Log Analytics Reader** role. As table is a sub-resource of workspace, the workspace admins can also perform action on a specific table.
-
-> [!WARNING]
-> If the user has other assignments on the workspace, directly or via inheritence (e.g. user has Reader on the subscription that contains the workspace), the user will be able to access all tables in the workspace.
+> We recommend using the method described here, which is currently in **preview**, to define table-level access. Alternatively, you can use the [legacy method of setting table-level read access](#legacy-method-of-setting-table-level-read-access), which has some limitations related to custom log tables. During preview, the recommended method described here does not apply to Microsoft Sentinel Detection Rules, which might have access to more tables than intended. Before using either method, see [Table-level access considerations and limitations](#table-level-access-considerations-and-limitations).
+Granting table-level read access involves assigning a user two roles:
+- At the workspace level - a custom role that provides limited permissions to read workspace details and run a query in the workspace, but not to read data from any tables.
+- At the table level - a **Reader** role, scoped to the specific table.
-To create a [custom role](../../role-based-access-control/custom-roles.md) that lets specific users or groups read data from specific tables in a workspace:
+**To grant a user or group limited permissions to the Log Analytics workspace:**
-1. Create a custom role that grants users permission to execute queries in the Log Analytics workspace, based on the built-in Azure Monitor Logs **Reader** role:
+1. Create a [custom role](../../role-based-access-control/custom-roles.md) at the workspace level to let users read workspace details and run a query in the workspace, without providing read access to data in any tables:
1. Navigate to your workspace and select **Access control (IAM)** > **Roles**.
To create a [custom role](../../role-based-access-control/custom-roles.md) that
This opens the **Create a custom role** screen.
- 1. On the **Basics** tab of the screen, enter a **Custom role name** value and, optionally, provide a description.
+ 1. On the **Basics** tab of the screen:
+ 1. Enter a **Custom role name** value and, optionally, provide a description.
+ 1. Set **Baseline permissions** to **Start from scratch**.
:::image type="content" source="media/manage-access/manage-access-create-custom-role.png" alt-text="Screenshot that shows the Basics tab of the Create a custom role screen with the Custom role name and Description fields highlighted." lightbox="media/manage-access/manage-access-create-custom-role.png":::
- 1. Select the **JSON** tab > **Edit**::
+ 1. Select the **JSON** tab > **Edit**:
- 1. In the `"actions"` section, add:
+ 1. In the `"actions"` section, add these actions:
- - `Microsoft.OperationalInsights/workspaces/read`
- - `Microsoft.OperationalInsights/workspaces/query/read`
- - `Microsoft.OperationalInsights/workspaces/analytics/query/action`
- - `Microsoft.OperationalInsights/workspaces/search/action`
+ ```json
+ "Microsoft.OperationalInsights/workspaces/read",
+ "Microsoft.OperationalInsights/workspaces/query/read",
+ "Microsoft.OperationalInsights/workspaces/analytics/query/action",
+ "Microsoft.OperationalInsights/workspaces/search/action"
+ ```
- 1. In the `"not actions"` section, add `Microsoft.OperationalInsights/workspaces/sharedKeys/read`.
+ 1. In the `"not actions"` section, add:
+
+ ```json
+ "Microsoft.OperationalInsights/workspaces/sharedKeys/read"
+ ```
:::image type="content" source="media/manage-access/manage-access-create-custom-role-json.png" alt-text="Screenshot that shows the JSON tab of the Create a custom role screen with the actions section of the JSON file highlighted." lightbox="media/manage-access/manage-access-create-custom-role-json.png"::: 1. Select **Save** > **Review + Create** at the bottom of the screen, and then **Create** on the next page.
-1. Assign your custom role to the relevant users or groups:
+1. Assign your custom role to the relevant user:
1. Select **Access control (IAM)** > **Add** > **Add role assignment**. :::image type="content" source="media/manage-access/manage-access-add-role-assignment-button.png" alt-text="Screenshot that shows the Access control screen with the Add role assignment button highlighted." lightbox="media/manage-access/manage-access-add-role-assignment-button.png":::
To create a [custom role](../../role-based-access-control/custom-roles.md) that
:::image type="content" source="media/manage-access/manage-access-add-role-assignment-select-members.png" alt-text="Screenshot that shows the Select members screen." lightbox="media/manage-access/manage-access-add-role-assignment-select-members.png":::
- 1. Search for and select the relevant user or group and click **Select**.
+ 1. Search for and select a user and click **Select**.
1. Select **Review and assign**.
+
+The user can now read workspace details and run a query, but can't read data from any tables.
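If you'd rather script the custom role than click through the portal, here's a minimal PowerShell sketch that creates a role with the same actions and not-actions listed above. The role name and the workspace resource ID used as the assignable scope are placeholders.

```powershell
# Sketch: create the limited workspace-level role from the actions listed above (placeholder names and IDs).
$role = Get-AzRoleDefinition -Name 'Reader'   # reuse an existing definition as a template object
$role.Id = $null
$role.Name = 'Log Analytics Limited Reader (example)'
$role.Description = 'Read workspace details and run queries, without read access to table data.'
$role.Actions.Clear()
$role.Actions.Add('Microsoft.OperationalInsights/workspaces/read')
$role.Actions.Add('Microsoft.OperationalInsights/workspaces/query/read')
$role.Actions.Add('Microsoft.OperationalInsights/workspaces/analytics/query/action')
$role.Actions.Add('Microsoft.OperationalInsights/workspaces/search/action')
$role.NotActions.Clear()
$role.NotActions.Add('Microsoft.OperationalInsights/workspaces/sharedKeys/read')
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add('/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>')
New-AzRoleDefinition -Role $role
```

Assign this custom role to the user at the workspace scope, then continue with the table-level Reader assignment described next.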
+
+**To grant the user read access to a specific table:**
+
+1. From the **Log Analytics workspaces** menu, select **Tables**.
+1. Select the ellipsis ( **...** ) to the right of your table and select **Access control (IAM)**.
+
+   :::image type="content" source="media/manage-access/table-level-access-control.png" alt-text="Screenshot that shows the Log Analytics workspace table management screen with the table-level access control button highlighted." lightbox="media/manage-access/table-level-access-control.png":::
-1. Grant the users or groups read access to specific tables in a workspace by calling the `https://management.azure.com/batch?api-version=2020-06-01` POST API and sending the following details in the request body:
-
- ```json
- {
- "requests": [
- {
- "content": {
- "Id": "<GUID_1>",
- "Properties": {
- "PrincipalId": "<user_object_ID>",
- "PrincipalType": "User",
- "RoleDefinitionId": "/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7",
- "Scope": "/subscriptions/<subscription_ID>/resourceGroups/<resource_group_name>/providers/Microsoft.OperationalInsights/workspaces/<workspace_name>/Tables/<table_name>",
- "Condition": null,
- "ConditionVersion": null
- }
- },
- "httpMethod": "PUT",
- "name": "<GUID_2>",
- "requestHeaderDetails": {
- "commandName": "Microsoft_Azure_AD."
- },
- "url": "/subscriptions/<subscription_ID>/resourceGroups/<resource_group_name>/providers/Microsoft.OperationalInsights/workspaces/<workspace_name>/Tables/<table_name>/providers/Microsoft.Authorization/roleAssignments/<GUID_1>?api-version=2020-04-01-preview"
- }
- ]
- }
- ```
-
- Where:
- - You can generate a GUID for `<GUID 1>` and `<GUID 2>` using any GUID generator.
- - `<user_object_ID>` is the object ID of the user to which you want to grant table read access.
- - `<subscription_ID>` is the ID of the subscription related to the workspace.
- - `<resource_group_name>` is the resource group of the workspace.
- - `<workspace_name>` is the name of the workspace.
- - `<table_name>` is the name of the table to which you want to assign the user or group permission to read data from.
+1. On the **Access control (IAM)** screen, select **Add** > **Add role assignment**.
+1. Select the **Reader** role and select **Next**.
+1. Click **+ Select members** to open the **Select members** screen.
+1. Search for and select the user and click **Select**.
+1. Select **Review and assign**.
+The user can now read data from this specific table. Grant the user read access to other tables in the workspace, as needed.
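The same table-scoped assignment can also be scripted. The following PowerShell sketch uses placeholder identifiers; the scope is the resource path of the table under the workspace.

```powershell
# Sketch: assign the built-in Reader role scoped to a single table (placeholder IDs and names).
$tableScope = '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>/Tables/<table-name>'

New-AzRoleAssignment `
    -ObjectId '<user-object-id>' `
    -RoleDefinitionName 'Reader' `
    -Scope $tableScope
```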
+
### Legacy method of setting table-level read access The legacy method of table-level also uses [Azure custom roles](../../role-based-access-control/custom-roles.md) to let you grant specific users or groups access to specific tables in the workspace. Azure custom roles apply to workspaces with either workspace-context or resource-context [access control modes](#access-control-mode) regardless of the user's [access mode](#access-mode).
To define access to a particular table, create a [custom role](../../role-based-
* Use `Microsoft.OperationalInsights/workspaces/query/*` to grant access to all tables. * To exclude access to specific tables when you use a wildcard in **Actions**, list the excluded tables in the **NotActions** section of the role definition.
-#### Examples
- Here are examples of custom role actions to grant and deny access to specific tables. Grant access to the _Heartbeat_ and _AzureActivity_ tables:
Grant access to all tables except the _SecurityAlert_ table:
], ```
-#### Custom tables
+#### Limitations of the legacy method related to custom tables
Custom tables store data you collect from data sources such as [text logs](../agents/data-sources-custom-logs.md) and the [HTTP Data Collector API](data-collector-api.md). To identify the table type, [view table information in Log Analytics](./log-analytics-tutorial.md#view-table-information).
-> [!NOTE]
-> Tables created by the [Logs ingestion API](../essentials/../logs/logs-ingestion-api-overview.md) don't yet support table-level RBAC.
- Using the legacy method of table-level access, you can't grant access to individual custom log tables at the table level, but you can grant access to all custom log tables. To create a role with access to all custom log tables, create a custom role by using the following actions: ```
Using the legacy method of table-level access, you can't grant access to individ
], ```
-### Considerations regarding table-level access
+### Table-level access considerations and limitations
-- If a user is granted global read permission with the standard Reader or Contributor roles that include the _\*/read_ action, it will override the per-table access control and give them access to all log data.-- If a user is granted per-table access but no other permissions, they can access log data from the API but not from the Azure portal. To provide access from the Azure portal, use Log Analytics Reader as its base role.-- Administrators and owners of the subscription will have access to all data types regardless of any other permission settings.
+- In the Log Analytics UI, users with table-level access can see the list of all tables in the workspace, but can only retrieve data from tables to which they have access.
+- The standard Reader or Contributor roles, which include the _\*/read_ action, override table-level access control and give users access to all log data.
+- A user with table-level access but no workspace-level permissions can access log data from the API but not from the Azure portal.
+- Administrators and owners of the subscription have access to all data types regardless of any other permission settings.
- Workspace owners are treated like any other user for per-table access control. - Assign roles to security groups instead of individual users to reduce the number of assignments. This practice will also help you use existing group management tools to configure and verify access.
azure-monitor Resource Manager Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/resource-manager-workspace.md
Previously updated : 07/30/2023 Last updated : 08/08/2023 # Resource Manager template samples for Log Analytics workspaces in Azure Monitor
resource table 'Microsoft.OperationalInsights/workspaces/tables@2021-12-01-previ
} ```
-## Collect Windows events
+## Configure data collection for Log Analytics workspace
+The following samples show how to configure a Log Analytics workspace to collect data from the [Log Analytics agent](../agents/log-analytics-agent.md), which is deprecated and being replaced by the [Azure Monitor agent](../agents/agents-overview.md). The Azure Monitor agent uses [data collection rules](../essentials/data-collection-rule-overview.md) to define its data collection and ignores any configuration performed by these samples. For sample templates for data collection rules, see [Resource Manager template samples for data collection rules in Azure Monitor](../agents/resource-manager-data-collection-rules.md).
+
+### Collect Windows events
The following sample adds collection of [Windows events](../agents/data-sources-windows-events.md) to an existing workspace.
-### Notes
+#### Notes
- Add a **datasources** element for each event log to collect. You can specify a different set of event types for each log.
-### Template file
+#### Template file
# [Bicep](#tab/bicep)
resource WindowsEventApplicationDataSource 'Microsoft.OperationalInsights/worksp
-### Parameter file
+#### Parameter file
```json {
resource WindowsEventApplicationDataSource 'Microsoft.OperationalInsights/worksp
} ```
-## Collect syslog
+### Collect syslog
The following sample adds collection of [syslog events](../agents/data-sources-syslog.md) to an existing workspace.
-### Notes
+#### Notes
- Add a **datasources** element for each facility to collect. You can specify a different set of severities for each facility.
-### Template file
+#### Template file
# [Bicep](#tab/bicep)
resource syslogCollectionDataSource 'Microsoft.OperationalInsights/workspaces/da
-### Parameter file
+#### Parameter file
```json {
resource syslogCollectionDataSource 'Microsoft.OperationalInsights/workspaces/da
} ```
-## Collect Windows performance counters
+### Collect Windows performance counters
The following sample adds collection of [Windows performance counters](../agents/data-sources-performance-counters.md) to an existing workspace.
-### Notes
+#### Notes
- Add a **datasources** element for each counter and instance to collect. You can specify a different collection rate for each counter and instance combination.
-### Template file
+#### Template file
# [Bicep](#tab/bicep)
resource windowsPerfProcessorPercentageDataSource 'Microsoft.OperationalInsights
-### Parameter file
+#### Parameter file
```json {
resource windowsPerfProcessorPercentageDataSource 'Microsoft.OperationalInsights
} ```
-## Collect Linux performance counters
+### Collect Linux performance counters
The following sample adds collection of [Linux performance counters](../agents/data-sources-performance-counters.md) to an existing workspace.
-### Notes
+#### Notes
- Add a **datasources** element for each object and instance to collect. You can specify a different set of counters for each object and instance combination, but you can only specify a single rate for all counters.
-### Template file
+#### Template file
# [Bicep](#tab/bicep)
resource linuxPerformanceProcessorDataSource 'Microsoft.OperationalInsights/work
-### Parameter file
+#### Parameter file
```json {
resource linuxPerformanceProcessorDataSource 'Microsoft.OperationalInsights/work
} ```
-## Collect custom logs
+### Collect text logs
-The following sample adds collection of [custom logs](../agents/data-sources-custom-logs.md) to an existing workspace.
+The following sample adds collection of [text logs](../agents/data-sources-custom-logs.md) to an existing workspace.
-### Notes
+#### Notes
-- The configuration of delimiters and extractions can be complex. For help, you can define a custom log using the Azure portal and the retrieve its configuration using [Get-AzOperationalInsightsDataSource](/powershell/module/az.operationalinsights/get-azoperationalinsightsdatasource) with **-Kind** set to **CustomLog**.
+- The configuration of delimiters and extractions can be complex. For help, you can define a text log using the Azure portal and then retrieve its configuration using [Get-AzOperationalInsightsDataSource](/powershell/module/az.operationalinsights/get-azoperationalinsightsdatasource) with **-Kind** set to **CustomLog**, as shown in the sketch after this note.
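A minimal sketch, assuming placeholder resource group and workspace names:

```powershell
# Retrieve the existing custom (text) log definitions from a workspace; names are placeholders.
Get-AzOperationalInsightsDataSource `
    -ResourceGroupName '<resource-group>' `
    -WorkspaceName '<workspace-name>' `
    -Kind CustomLog
```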
-### Template file
+#### Template file
# [Bicep](#tab/bicep)
resource armlogNewlineDatasource 'Microsoft.OperationalInsights/workspaces/dataS
-### Parameter file
+#### Parameter file
```json {
resource armlogNewlineDatasource 'Microsoft.OperationalInsights/workspaces/dataS
} ```
-## Collect IIS log
+### Collect IIS log
The following sample adds collection of [IIS logs](../agents/data-sources-iis-logs.md) to an existing workspace.
-### Template file
+#### Template file
# [Bicep](#tab/bicep)
resource IISLogDataSource 'Microsoft.OperationalInsights/workspaces/datasources@
-### Parameter file
+#### Parameter file
```json {
azure-monitor Set Up Logs Ingestion Api Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/set-up-logs-ingestion-api-prerequisites.md
The script also grants the app `Contributor` permissions to:
## PowerShell script + ```powershell # # Prerequisite functions
$VerbosePreference = "SilentlyContinue" # "Continue"
"Directory.AccessAsUser.All", "RoleManagement.ReadWrite.Directory" )
- Connect-MgGraph -TenantId $TenantId -ForceRefresh -Scopes $MgScope
+ Connect-MgGraph -TenantId $TenantId -Scopes $MgScope
#- # (3) Prerequisites - deployment of environment (if missing)
$VerbosePreference = "SilentlyContinue" # "Continue"
Write-Output "" Write-Output "-" ```
-
+ ## Next steps - [Learn more about data collection rules](../essentials/data-collection-rule-overview.md) - [Learn more about writing transformation queries](../essentials//data-collection-transformations.md)+
azure-monitor Unify App Resource Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/unify-app-resource-data.md
- Title: Unify multiple Azure Monitor Application Insights resources | Microsoft Docs
-description: This article provides details on how to use a function in Azure Monitor Logs to query multiple Application Insights resources and visualize that data.
----- Previously updated : 03/23/2022---
-# Unify multiple Azure Monitor Application Insights resources
-This article describes how to query and view all your Application Insights log data in one place, even when they are in different Azure subscriptions, as a replacement for the deprecation of the Application Insights Connector. The number of Application Insights resources that you can include in a single query is limited to 100.
-
-## Recommended approach to query multiple Application Insights resources
-Listing multiple Application Insights resources in a query can be cumbersome and difficult to maintain. Instead, you can leverage function to separate the query logic from the applications scoping.
-
-This example demonstrates how you can monitor multiple Application Insights resources and visualize the count of failed requests by application name.
-
-Create a function using union operator with the list of applications, then save the query in your workspace as function with the alias *applicationsScoping*.
-
-You can modify the listed applications at any time in the portal by navigating to Query explorer in your workspace and selecting the function for editing and then saving, or using the `SavedSearch` PowerShell cmdlet.
-
->[!NOTE]
->This method can't be used with log alerts because the access validation of the alert rule resources, including workspaces and applications, is performed at alert creation time. Adding new resources to the function after the alert creation isn't supported. If you prefer to use function for resource scoping in log alerts, you need to edit the alert rule in the portal or with a Resource Manager template to update the scoped resources. Alternatively, you can include the list of resources in the log alert query.
-
-The `withsource= SourceApp` command adds a column to the results that designates the application that sent the log. The parse operator is optional in this example and used to extract the application name from SourceApp property.
-
-```
-union withsource=SourceApp
-app('Contoso-app1').requests,
-app('Contoso-app2').requests,
-app('Contoso-app3').requests,
-app('Contoso-app4').requests,
-app('Contoso-app5').requests
-| parse SourceApp with * "('" applicationName "')" *
-```
-
-You are now ready to use applicationsScoping function in the cross-resource query:
-
-```
-applicationsScoping
-| where timestamp > ago(12h)
-| where success == 'False'
-| parse SourceApp with * '(' applicationName ')' *
-| summarize count() by applicationName, bin(timestamp, 1h)
-| render timechart
-```
-
-The query uses Application Insights schema, although the query is executed in the workspace since the applicationsScoping function returns the Application Insights data structure. The function alias returns the union of the requests from all the defined applications. The query then filters for failed requests and visualizes the trends by application.
-
-![Cross-query results example](media/unify-app-resource-data/app-insights-query-results.png)
-
->[!NOTE]
->[Cross-resource queries](../logs/cross-workspace-query.md) in log alerts are only supported in the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). If you're using the legacy Log Analytics Alerts API, you'll need to [switch to the current API](../alerts/alerts-log-api-switch.md). [See example templates](../alerts/alerts-log-create-templates.md).
-
-## Application Insights and Log Analytics workspace schema differences
-The following table shows the schema differences between Log Analytics and Application Insights.
-
-| Log Analytics workspace properties| Application Insights resource properties|
-|||
-| AnonUserId | user_id|
-| ApplicationId | appId|
-| ApplicationName | appName|
-| ApplicationTypeVersion | application_Version |
-| AvailabilityCount | itemCount |
-| AvailabilityDuration | duration |
-| AvailabilityMessage | message |
-| AvailabilityRunLocation | location |
-| AvailabilityTestId | id |
-| AvailabilityTestName | name |
-| AvailabilityTimestamp | timestamp |
-| Browser | client_browser |
-| City | client_city |
-| ClientIP | client_IP |
-| Computer | cloud_RoleInstance |
-| Country | client_CountryOrRegion |
-| CustomEventCount | itemCount |
-| CustomEventDimensions | customDimensions |
-| CustomEventName | name |
-| DeviceModel | client_Model |
-| DeviceType | client_Type |
-| ExceptionCount | itemCount |
-| ExceptionHandledAt | handledAt |
-| ExceptionMessage | message |
-| ExceptionType | type |
-| OperationID | operation_id |
-| OperationName | operation_Name |
-| OS | client_OS |
-| PageViewCount | itemCount |
-| PageViewDuration | duration |
-| PageViewName | name |
-| ParentOperationID | operation_Id |
-| RequestCount | itemCount |
-| RequestDuration | duration |
-| RequestID | id |
-| RequestName | name |
-| RequestSuccess | success |
-| ResponseCode | resultCode |
-| Role | cloud_RoleName |
-| RoleInstance | cloud_RoleInstance |
-| SessionId | session_Id |
-| SourceSystem | operation_SyntheticSource |
-| TelemetryTYpe | type |
-| URL | url |
-| UserAccountId | user_AccountId |
-
-## Next steps
-
-Use [Log Search](../logs/log-query-overview.md) to view detailed information for your Application Insights apps.
-
azure-monitor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Monitor description: Sample Azure Resource Graph queries for Azure Monitor showing the use of resource types and tables to access Azure Monitor-related resources and properties. Previously updated : 07/07/2022 Last updated : 08/09/2023
azure-monitor Resource Manager Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/resource-manager-samples.md
Previously updated : 04/05/2022- Last updated : 08/09/2023 # Resource Manager template samples for Azure Monitor
azure-monitor Roles Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/roles-permissions-security.md
Title: Roles, permissions, and security in Azure Monitor
description: Learn how to use roles and permissions in Azure Monitor to restrict access to monitoring resources. Previously updated : 11/27/2017 - Last updated : 08/09/2023 # Roles, permissions, and security in Azure Monitor -
-Many teams need to strictly regulate access to monitoring data and settings. For example, if you have team members who work exclusively on monitoring (support engineers, DevOps engineers) or if you use a managed service provider, you might want to grant them access to only monitoring data. You might want to restrict their ability to create, modify, or delete resources.
- This article shows how to quickly apply a built-in monitoring role to a user in Azure or build your own custom role for a user who needs limited monitoring permissions. The article then discusses security considerations for your Azure Monitor-related resources and how you can limit access to the data in those resources. ## Built-in monitoring roles
People assigned the Monitoring Reader role can view all monitoring data in a sub
* View monitoring dashboards in the Azure portal. * View alert rules defined in [Azure alerts](alerts/alerts-overview.md).
-* Query for metrics by using the [Azure Monitor REST API](/rest/api/monitor/metrics), [PowerShell cmdlets](powershell-samples.md), or [cross-platform CLI](cli-samples.md).
+* Query Azure Monitor Metrics by using the [Azure Monitor REST API](/rest/api/monitor/metrics), [PowerShell cmdlets](powershell-samples.md), or [cross-platform CLI](cli-samples.md).
* Query the Activity log by using the portal, Azure Monitor REST API, PowerShell cmdlets, or cross-platform CLI. * View the [diagnostic settings](essentials/diagnostic-settings.md) for a resource. * View the [log profile](essentials/activity-log.md#legacy-collection-methods) for a subscription. * View autoscale settings. * View alert activity and settings.
-* Access Application Insights data and view data in Application Insights Analytics.
* Search Log Analytics workspace data, including usage data for the workspace.
-* View management groups in Log Analytics.
-* Retrieve the search schema in a Log Analytics workspace.
-* List monitoring packs in a Log Analytics workspace.
-* Retrieve and execute saved searches in a Log Analytics workspace.
-* Retrieve the workspace storage configuration for Log Analytics.
+* Retrieve the table schemas in a Log Analytics workspace.
+* Retrieve and execute log queries in a Log Analytics workspace.
+* Access Application Insights data.
+ > [!NOTE] > This role doesn't give read access to log data that has been streamed to an event hub or stored in a storage account. For information on how to configure access to these resources, see the [Security considerations for monitoring data](#security-considerations-for-monitoring-data) section later in this article.
People assigned the Monitoring Contributor role can view all monitoring data in
This role is a superset of the Monitoring Reader role. It's appropriate for members of an organization's monitoring team or managed service providers who, in addition to the permissions mentioned earlier, need to: * View monitoring dashboards in the portal and create their own private monitoring dashboards.
-* Set [diagnostic settings](essentials/diagnostic-settings.md) for a resource.\*
-* Set the [log profile](essentials/activity-log.md#legacy-collection-methods) for a subscription.\*
-* Set alert rule activity and settings via [Azure alerts](alerts/alerts-overview.md).
-* Create web tests and components for Application Insights.
+* Create and edit [diagnostic settings](essentials/diagnostic-settings.md) for a resource. <sup>1</sup>
+* Set alert rule activity and settings using [Azure alerts](alerts/alerts-overview.md).
* List shared keys for a Log Analytics workspace.
-* Enable or disable monitoring packs in a Log Analytics workspace.
* Create, delete, and execute saved searches in a Log Analytics workspace. * Create and delete the workspace storage configuration for Log Analytics.
+* Create web tests and components for Application Insights. See [Resources, roles, and access control in Application Insights](app/resources-roles-access-control.md).
-\*To set a log profile or a diagnostic setting, users must also separately be granted ListKeys permission on the target resource (storage account or event hub namespace).
+<sup>1</sup> To create or edit a diagnostic setting, users must also separately be granted ListKeys permission on the target resource (storage account or event hub namespace).
> [!NOTE] > This role doesn't give read access to log data that has been streamed to an event hub or stored in a storage account. For information on how to configure access to these resources, see the [Security considerations for monitoring data](#security-considerations-for-monitoring-data) section later in this article.
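As a quick illustration of applying one of these built-in roles, the following PowerShell sketch assigns Monitoring Contributor at subscription scope. The sign-in name and subscription ID are placeholders and it assumes the Az.Resources module; any ListKeys permission needed for diagnostic settings still has to be granted separately on the target storage account or event hub namespace.

```powershell
# Assign the built-in Monitoring Contributor role to a user at subscription scope.
# The sign-in name and subscription ID are placeholders.
New-AzRoleAssignment -SignInName "monitoring.engineer@contoso.com" `
    -RoleDefinitionName "Monitoring Contributor" `
    -Scope "/subscriptions/<subscription-id>"
```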
If the preceding built-in roles don't meet the exact needs of your team, you can
| Microsoft.Insights/ExtendedDiagnosticSettings/[Read, Write, Delete] | Read, write, or delete diagnostic settings for network flow logs. | | Microsoft.Insights/LogDefinitions/Read |This permission is necessary for users who need access to the Activity log via the portal. | | Microsoft.Insights/LogProfiles/[Read, Write, Delete] |Read, write, or delete log profiles (streaming the Activity log to an event hub or storage account). |
-| Microsoft.Insights/MetricAlerts/[Read, Write, Delete] |Read, write, or delete near-real-time metric alerts. |
+| Microsoft.Insights/MetricAlerts/[Read, Write, Delete] |Read, write, or delete metric alert rules. |
| Microsoft.Insights/MetricDefinitions/Read |Read metric definitions (list of available metric types for a resource). | | Microsoft.Insights/Metrics/Read |Read metrics for a resource. | | Microsoft.Insights/Register/Action |Register the Azure Monitor resource provider. | | Microsoft.Insights/ScheduledQueryRules/[Read, Write, Delete] |Read, write, or delete log alerts in Azure Monitor. | > [!NOTE]
-> Access to alerts, diagnostic settings, and metrics for a resource requires that the user has read access to the resource type and scope of that resource. Creating (writing) a diagnostic setting or a log profile that archives to a storage account or streams to event hubs requires the user to also have ListKeys permission on the target resource.
+> Access to alerts, diagnostic settings, and metrics for a resource requires that the user has read access to the resource type and scope of that resource. Creating a diagnostic setting that sends data to a storage account or streams to event hubs requires the user to also have ListKeys permission on the target resource.
-For example, you can use the preceding table to create an Azure custom role for an Activity Log Reader like this:
+For example, you can use the preceding table to create an Azure custom role for an Activity Log Reader with the following:
```powershell $role = Get-AzRoleDefinition "Reader"
New-AzRoleDefinition -Role $role
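# A fuller sketch of the same pattern, shown for illustration only. The role name,
# description, and subscription ID are placeholders; clearing the Id lets
# New-AzRoleDefinition register the cloned definition as a new custom role.
$role = Get-AzRoleDefinition "Reader"
$role.Id = $null
$role.Name = "Activity Log Reader (example)"
$role.Description = "Can read the Activity log for the subscription."
$role.Actions.Clear()
$role.Actions.Add("Microsoft.Insights/eventtypes/*")
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/<subscription-id>")
New-AzRoleDefinition -Role $role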
## Security considerations for monitoring data
-Monitoring data, particularly log files, can contain sensitive information, such as IP addresses or user names. Monitoring data from Azure comes in three basic forms:
--- The Activity log describes all control-plane actions on your Azure subscription.-- Resource logs are logs emitted by a resource.-- Metrics are emitted by resources.-
-All these data types can be stored in a storage account or streamed to an event hub, both of which are general-purpose Azure resources. Because these are general-purpose resources, creating, deleting, and accessing them is a privileged operation reserved for an administrator. Use the following practices for monitoring-related resources to prevent misuse:
+[Data in Azure Monitor](data-platform.md) can be sent to a storage account or streamed to an event hub, both of which are general-purpose Azure resources. Because these are general-purpose resources, creating, deleting, and accessing them is a privileged operation reserved for an administrator. Since this data can contain sensitive information such as IP addresses or user names, use the following practices for monitoring-related resources to prevent misuse:
* Use a single, dedicated storage account for monitoring data. If you need to separate monitoring data into multiple storage accounts, never share usage of a storage account between monitoring and non-monitoring data. Sharing usage in that way might inadvertently give access to non-monitoring data to organizations that need access to only monitoring data. For example, a third-party organization for security information and event management should need only access to monitoring data. * Use a single, dedicated service bus or event hub namespace across all diagnostic settings for the same reason described in the previous point.
$token = New-AzStorageAccountSASToken -ResourceType Service -Service Blob -Permi
You can then give the token to the entity that needs to read from that storage account. The entity can list and read from all blobs in that storage account.
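For reference, here's a minimal sketch of producing such a read/list SAS token with the Az.Storage module. The connection string and expiry are placeholders, and the exact resource types you include depend on what the consumer needs to read.

```powershell
# Build a storage context for the dedicated monitoring storage account (placeholder connection string).
$context = New-AzStorageContext -ConnectionString "<monitoring-storage-account-connection-string>"

# Create an account-level SAS token that allows only reading and listing blob data.
$token = New-AzStorageAccountSASToken -ResourceType Service,Container,Object `
    -Service Blob `
    -Permission "rl" `
    -ExpiryTime (Get-Date).AddDays(7) `
    -Context $context
```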
-Alternatively, if you need to control this permission with Azure RBAC, you can grant that entity the `Microsoft.Storage/storageAccounts/listkeys/action` permission on that particular storage account. This permission is necessary for users who need to set a diagnostic setting or a log profile to archive to a storage account. For example, you can create the following Azure custom role for a user or application that needs to read from only one storage account:
+Alternatively, if you need to control this permission with Azure RBAC, you can grant that entity the `Microsoft.Storage/storageAccounts/listkeys/action` permission on that particular storage account. This permission is necessary for users who need to set a diagnostic setting to send data to a storage account. For example, you can create the following Azure custom role for a user or application that needs to read from only one storage account:
```powershell $role = Get-AzRoleDefinition "Reader"
You can follow a similar pattern with event hubs, but first you need to create a
New-AzRoleDefinition -Role $role ```
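As a rough sketch of the event hub pattern mentioned above, you might create a dedicated authorization rule with only Listen rights and hand out just its key. The resource group, namespace, event hub, and rule names are placeholders, and the parameter names assume a recent Az.EventHub module.

```powershell
# Create a dedicated authorization rule with only Listen rights on the monitoring event hub.
New-AzEventHubAuthorizationRule -ResourceGroupName "monitoring-rg" `
    -NamespaceName "monitoring-ehns" `
    -EventHubName "insights-logs" `
    -Name "MonitoringListenOnly" `
    -Rights @("Listen")

# Retrieve the keys for that rule to share with the consuming entity.
Get-AzEventHubKey -ResourceGroupName "monitoring-rg" `
    -NamespaceName "monitoring-ehns" `
    -EventHubName "insights-logs" `
    -Name "MonitoringListenOnly"
```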
-## Monitoring within a secured virtual network
-
-Azure Monitor needs access to your Azure resources to provide the services that you enable. If you want to monitor your Azure resources while still securing them from access to the public internet, you can use secured storage accounts.
-
-Monitoring data is often written to a storage account. You might want to make sure that unauthorized users can't access the data that's copied to a storage account. For extra security, you can lock down network access to give only your authorized resources and trusted Microsoft services access to a storage account by restricting a storage account to use selected networks.
-
-![Screenshot that shows the settings for firewalls and virtual networks.](./media/roles-permissions-security/secured-storage-example.png)
-
-Azure Monitor is considered a trusted Microsoft service. If you select the **Allow trusted Microsoft services to access this storage account** checkbox, Azure monitor will have access to your secured storage account. You then enable writing Azure Monitor resource logs, Activity log, and metrics to your storage account under these protected conditions. This setting will also enable Log Analytics to read logs from secured storage.
-
-For more information, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
## Next steps
azure-monitor Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/terminology.md
- Title: Azure Monitor terminology updates | Microsoft Docs
-description: This article describes terminology changes made to Azure monitoring services.
--- Previously updated : 06/07/2022----
-# Azure Monitor naming and terminology changes
-In Azure Monitor, different services were consolidated to simplify monitoring for Azure customers. This article describes name and terminology changes in Azure Monitor documentation.
-
-## October 2019: Diagnostic log to resource log
-"Diagnostic logs" changed to "resource logs" to better match what's actually being collected. The term "diagnostic settings" remains the same.
-
-## February 2019: Log Analytics terminology
-After the consolidation of different services under Azure Monitor, we modified the terminology. Now the documentation better describes the Azure Monitor service and its different components.
-
-### Log Analytics
-Azure Monitor log data is still stored in a Log Analytics workspace and is still collected and analyzed by the same Log Analytics service. We changed the term _Log Analytics_ in many places to _Azure Monitor logs_. This term better reflects its role in Azure Monitor. It also provides better consistency with [metrics in Azure Monitor](essentials/data-platform-metrics.md).
-
-The term _log analytics_ now primarily applies to the page in the Azure portal that's used to write and run queries and analyze log data. It's the functional equivalent of [metrics explorer](essentials/metrics-charts.md), which is the page in the Azure portal used to analyze metric data.
-
-### Log Analytics workspaces
-[Workspaces](logs/manage-access.md) that hold log data in Azure Monitor are still referred to as Log Analytics workspaces. The **Log Analytics** menu in the Azure portal was renamed to **Log Analytics workspaces**. It's where you [create new workspaces](logs/quick-create-workspace.md) and configure data sources. You analyze your logs and other monitoring data in **Azure Monitor** and configure your workspace in **Log Analytics workspaces**.
-
-### Management solutions
-[Management solutions](/previous-versions/azure/azure-monitor/insights/solutions) were renamed to _monitoring solutions_, which better describes their functionality.
-
-## August 2018: Consolidation of monitoring services into Azure Monitor
-Log Analytics and Application Insights were consolidated into Azure Monitor to provide a single integrated experience for monitoring Azure resources and hybrid environments. No functionality was removed from these services. You can perform the same scenarios with no loss or compromise of any features.
-
-Documentation for each of these services was consolidated into a single set of content for Azure Monitor. Now you can find all the content for a particular monitoring scenario in a single location instead of having to use multiple sets of content. As the consolidated service evolves, the content will become more integrated.
-
-Other features that were considered part of Log Analytics, such as agents and views, were also repositioned as features of Azure Monitor. Their functionality hasn't changed other than potential improvements to their experience in the Azure portal.
-
-## April 2018: Retirement of Operations Management Suite brand
-Operations Management Suite (OMS) bundled the following Azure management services for licensing purposes:
--- Application Insights-- Azure Automation-- Azure Backup-- Log Analytics-- Azure Site Recovery-
-[New pricing was introduced for these services](https://azure.microsoft.com/blog/introducing-a-new-way-to-purchase-azure-monitoring-services/). The OMS bundling is no longer available for new customers. None of the services that were part of OMS have changed, except for the consolidation into Azure Monitor as described. The OMS portal was retired and is no longer available.
-
-## Next steps
-
-Read an [overview of Azure Monitor](overview.md) that describes its different components and features.
azure-monitor Tutorial Logs Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/tutorial-logs-dashboards.md
Title: Create and share dashboards of Azure Log Analytics data | Microsoft Docs description: This tutorial helps you understand how Log Analytics dashboards can visualize all of your saved log queries, giving you a single lens to view your environment. --++ Last updated 05/28/2020
azure-monitor Resource Manager Vminsights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/resource-manager-vminsights.md
Title: Resource Manager template samples for VM insights
description: Sample Azure Resource Manager templates to deploy and configure VM insights. --++ Last updated 06/13/2022
azure-monitor Vminsights Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-change-analysis.md
Title: Change analysis in VM insights description: VM insights integration with Application Change Analysis allows you to view any changes made to a virtual machine that might have affected its performance. ++ Last updated 06/08/2022
azure-monitor Vminsights Enable Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-overview.md
VM insights is available for Azure Arc-enabled servers in regions where the Arc
## Supported operating systems
-VM insights supports any operating system that supports the Dependency agent and either the Azure Monitor agent (preview) or Log Analytics agent. For a complete list, see [Azure Monitor agent overview](../agents/agents-overview.md#supported-operating-systems).
+VM insights supports all operating systems supported by the Dependency agent and either Azure Monitor Agent or Log Analytics agent. For a complete list of operating systems supported by Azure Monitor Agent and Log Analytics agent, see [Azure Monitor agent overview](../agents/agents-overview.md#supported-operating-systems).
+
+Dependency Agent supports the same [Windows versions that Azure Monitor Agent supports](../agents/agents-overview.md#supported-operating-systems), except Windows Server 2008 SP2 and Azure Stack HCI.
+For Dependency Agent Linux support, see [Dependency Agent Linux support](../vm/vminsights-dependency-agent-maintenance.md#dependency-agent-linux-support).
> [!IMPORTANT] > If the Ethernet device for your virtual machine has more than nine characters, it won't be recognized by VM insights and data won't be sent to the InsightsMetrics table. The agent will collect data from [other sources](../agents/agent-data-sources.md).
The DCR is defined by the options in the following table.
| Option | Description | |:|:|
-| Guest performance | Specifies whether to collect performance data from the guest operating system. This option is required for all machines. The collection interval for performance data is every 60 seconds.|
+| Guest performance | Specifies whether to collect [performance data](https://learn.microsoft.com/azure/azure-monitor/vm/vminsights-performance) from the guest operating system. This option is required for all machines. The collection interval for performance data is every 60 seconds.|
| Processes and dependencies | Collects information about processes running on the virtual machine and dependencies between machines. This information enables the [Map feature in VM insights](vminsights-maps.md). This is optional and enables the [VM insights Map feature](vminsights-maps.md) for the machine. | | Log Analytics workspace | Workspace to store the data. Only workspaces with VM insights are listed. |
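As an illustrative sketch only, associating an existing VM insights DCR with a machine from PowerShell might look like the following. Both resource IDs are placeholders, and the cmdlet's parameter names vary across Az.Monitor versions.

```powershell
# Associate an existing VM insights data collection rule (DCR) with a virtual machine.
# Resource IDs are placeholders; parameter names assume an older Az.Monitor release.
$vmId  = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
$dcrId = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>"

New-AzDataCollectionRuleAssociation -TargetResourceId $vmId `
    -AssociationName "vminsights-dcr-association" `
    -RuleId $dcrId
```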
azure-monitor Vminsights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-log-query.md
VM insights collects performance and connection metrics, computer and process in
> [!IMPORTANT] > If your virtual machine is using VM insights with Azure Monitor agent, then you must have [processes and dependencies enabled](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent) for these tables to be created.
-One record is generated per hour for each unique computer and process, in addition to the records that are generated when a process or computer starts or is added to VM insights. The fields and values in the ServiceMapComputer_CL events map to fields of the Machine resource in the ServiceMap Azure Resource Manager API. The fields and values in the ServiceMapProcess_CL events map to the fields of the Process resource in the ServiceMap Azure Resource Manager API. The ResourceName_s field matches the name field in the corresponding Resource Manager resource.
+One record is generated per hour for each unique computer and process, in addition to the records that are generated when a process or computer starts or is added to VM insights. The fields and values in the VMComputer table map to fields of the Machine resource in the ServiceMap Azure Resource Manager API. The fields and values in the VMProcess table map to the fields of the Process resource in the ServiceMap Azure Resource Manager API. The _ResourceId field matches the name field in the corresponding Resource Manager resource.
There are internally generated properties you can use to identify unique processes and computers: -- Computer: Use *ResourceId* or *ResourceName_s* to uniquely identify a computer within a Log Analytics workspace.-- Process: Use *ResourceId* to uniquely identify a process within a Log Analytics workspace. *ResourceName_s* is unique within the context of the machine on which the process is running (MachineResourceName_s)
+- Computer: Use *_ResourceId* to uniquely identify a computer within a Log Analytics workspace.
+- Process: Use *_ResourceId* to uniquely identify a process within a Log Analytics workspace.
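For example, a minimal PowerShell sketch that lists each monitored computer with its _ResourceId might look like this; the workspace GUID is a placeholder, and the query assumes the Az.OperationalInsights module.

```powershell
# Run a log query against the Log Analytics workspace that backs VM insights.
# The workspace GUID is a placeholder.
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" `
    -Query "VMComputer | distinct Computer, _ResourceId"

# Display the returned rows.
$result.Results | Format-Table
```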
Because multiple records can exist for a specified process and computer in a specified time range, queries can return more than one record for the same computer or process. To include only the most recent record, add `| summarize arg_max(TimeGenerated, *) by ResourceId` to the query. ### Connections and ports
-The Connection Metrics feature introduces two new tables in Azure Monitor logs - VMConnection and VMBoundPort. These tables provide information about the connections for a machine (inbound and outbound), as well as the server ports that are open/active on them. ConnectionMetrics are also exposed via APIs that provide the means to obtain a specific metric during a time window. TCP connections resulting from *accepting* on a listening socket are inbound, while those created by *connecting* to a given IP and port are outbound. The direction of a connection is represented by the Direction property, which can be set to either **inbound** or **outbound**.
+The Connection Metrics feature introduces two new tables in Azure Monitor logs - VMConnection and VMBoundPort. These tables provide information about the connections for a machine (inbound and outbound) and the server ports that are open/active on them. ConnectionMetrics are also exposed via APIs that provide the means to obtain a specific metric during a time window. TCP connections resulting from *accepting* on a listening socket are inbound, while those created by *connecting* to a given IP and port are outbound. The direction of a connection is represented by the Direction property, which can be set to either **inbound** or **outbound**.
Records in these tables are generated from data reported by the Dependency Agent. Every record represents an observation over a 1-minute time interval. The TimeGenerated property indicates the start of the time interval. Each record contains information to identify the respective entity, that is, connection or port, as well as metrics associated with that entity. Currently, only network activity that occurs using TCP over IPv4 is reported.
The following fields and conventions apply to both VMConnection and VMBoundPort:
- Computer: Fully-qualified domain name of reporting machine - AgentId: The unique identifier for a machine with the Log Analytics agent -- Machine: Name of the Azure Resource Manager resource for the machine exposed by ServiceMap. It is of the form *m-{GUID}*, where *GUID* is the same GUID as AgentId -- Process: Name of the Azure Resource Manager resource for the process exposed by ServiceMap. It is of the form *p-{hex string}*. Process is unique within a machine scope and to generate a unique process ID across machines, combine Machine and Process fields.
+- Machine: Name of the Azure Resource Manager resource for the machine exposed by ServiceMap. It's of the form *m-{GUID}*, where *GUID* is the same GUID as AgentId
+- Process: Name of the Azure Resource Manager resource for the process exposed by ServiceMap. It's of the form *p-{hex string}*. Process is unique within a machine scope and to generate a unique process ID across machines, combine Machine and Process fields.
- ProcessName: Executable name of the reporting process. - All IP addresses are strings in IPv4 canonical format, for example *13.107.3.160*
-To manage cost and complexity, connection records do not represent individual physical network connections. Multiple physical network connections are grouped into a logical connection, which is then reflected in the respective table. Meaning, records in *VMConnection* table represent a logical grouping and not the individual physical connections that are being observed. Physical network connection sharing the same value for the following attributes during a given one-minute interval, are aggregated into a single logical record in *VMConnection*.
+To manage cost and complexity, connection records don't represent individual physical network connections. Multiple physical network connections are grouped into a logical connection, which is then reflected in the respective table. Meaning, records in *VMConnection* table represent a logical grouping and not the individual physical connections that are being observed. Physical network connection sharing the same value for the following attributes during a given one-minute interval, are aggregated into a single logical record in *VMConnection*.
| Property | Description | |:--|:--|
In addition to connection count metrics, information about the volume of data se
|ResponseTimeMin |The smallest response time (milliseconds) observed during the reporting time window. If no value, the property is blank.| |ResponseTimeSum |The sum of all response times (milliseconds) observed during the reporting time window. If no value, the property is blank.|
-The third type of data being reported is response time - how long does a caller spend waiting for a request sent over a connection to be processed and responded to by the remote endpoint. The response time reported is an estimation of the true response time of the underlying application protocol. It is computed using heuristics based on the observation of the flow of data between the source and destination end of a physical network connection. Conceptually, it is the difference between the time the last byte of a request leaves the sender, and the time when the last byte of the response arrives back to it. These two timestamps are used to delineate request and response events on a given physical connection. The difference between them represents the response time of a single request.
+The third type of data being reported is response time - how long does a caller spend waiting for a request sent over a connection to be processed and responded to by the remote endpoint. The response time reported is an estimation of the true response time of the underlying application protocol. It's computed using heuristics based on the observation of the flow of data between the source and destination end of a physical network connection. Conceptually, it's the difference between the time the last byte of a request leaves the sender, and the time when the last byte of the response arrives back to it. These two timestamps are used to delineate request and response events on a given physical connection. The difference between them represents the response time of a single request.
-In this first release of this feature, our algorithm is an approximation that may work with varying degree of success depending on the actual application protocol used for a given network connection. For example, the current approach works well for request-response based protocols such as HTTP(S), but does not work with one-way or message queue-based protocols.
+In this first release of this feature, our algorithm is an approximation that may work with varying degree of success depending on the actual application protocol used for a given network connection. For example, the current approach works well for request-response based protocols such as HTTP(S), but doesn't work with one-way or message queue-based protocols.
Here are some important points to consider: 1. If a process accepts connections on the same IP address but over multiple network interfaces, a separate record for each interface will be reported.
-2. Records with wildcard IP will contain no activity. They are included to represent the fact that a port on the machine is open to inbound traffic.
-3. To reduce verbosity and data volume, records with wildcard IP will be omitted when there is a matching record (for the same process, port, and protocol) with a specific IP address. When a wildcard IP record is omitted, the IsWildcardBind record property with the specific IP address, will be set to "True" to indicate that the port is exposed over every interface of the reporting machine.
+2. Records with wildcard IP will contain no activity. They're included to represent the fact that a port on the machine is open to inbound traffic.
+3. To reduce verbosity and data volume, records with wildcard IP will be omitted when there's a matching record (for the same process, port, and protocol) with a specific IP address. When a wildcard IP record is omitted, the IsWildcardBind record property with the specific IP address, will be set to "True" to indicate that the port is exposed over every interface of the reporting machine.
4. Ports that are bound only on a specific interface have IsWildcardBind set to *False*. #### Naming and Classification
-For convenience, the IP address of the remote end of a connection is included in the RemoteIp property. For inbound connections, RemoteIp is the same as SourceIp, while for outbound connections, it is the same as DestinationIp. The RemoteDnsCanonicalNames property represents the DNS canonical names reported by the machine for RemoteIp. The RemoteDnsQuestions property represents the DNS questions reported by the machine for RemoteIp. The RemoveClassification property is reserved for future use.
+For convenience, the IP address of the remote end of a connection is included in the RemoteIp property. For inbound connections, RemoteIp is the same as SourceIp, while for outbound connections, it's the same as DestinationIp. The RemoteDnsCanonicalNames property represents the DNS canonical names reported by the machine for RemoteIp. The RemoteDnsQuestions property represents the DNS questions reported by the machine for RemoteIp. The RemoveClassification property is reserved for future use.
#### Geolocation
For convenience, the IP address of the remote end of a connection is included in
#### Malicious IP
-Every RemoteIp property in *VMConnection* table is checked against a set of IPs with known malicious activity. If the RemoteIp is identified as malicious the following properties will be populated (they are empty, when the IP is not considered malicious) in the following properties of the record:
+Every RemoteIp property in *VMConnection* table is checked against a set of IPs with known malicious activity. If the RemoteIp is identified as malicious, the following properties of the record are populated (they're empty when the IP isn't considered malicious):
| Property | Description | |:--|:--|
Every RemoteIp property in *VMConnection* table is checked against a set of IPs
|Description |Description of the observed threat. | |TLPLevel |Traffic Light Protocol (TLP) Level is one of the defined values, *White*, *Green*, *Amber*, *Red*. | |Confidence |Values are *0 - 100*. |
-|Severity |Values are *0 - 5*, where *5* is the most severe and *0* is not severe at all. Default value is *3*. |
+|Severity |Values are *0 - 5*, where *5* is the most severe and *0* isn't severe at all. Default value is *3*. |
|FirstReportedDateTime |The first time the provider reported the indicator. | |LastReportedDateTime |The last time the indicator was seen by Interflow. | |IsActive |Indicates indicators are deactivated with *True* or *False* value. |
Port records include metrics representing the connections associated with them.
Here are some important points to consider: - If a process accepts connections on the same IP address but over multiple network interfaces, a separate record for each interface will be reported. -- Records with wildcard IP will contain no activity. They are included to represent the fact that a port on the machine is open to inbound traffic. -- To reduce verbosity and data volume, records with wildcard IP will be omitted when there is a matching record (for the same process, port, and protocol) with a specific IP address. When a wildcard IP record is omitted, the *IsWildcardBind* property for the record with the specific IP address, will be set to *True*. This indicates the port is exposed over every interface of the reporting machine.
+- Records with wildcard IP will contain no activity. They're included to represent the fact that a port on the machine is open to inbound traffic.
+- To reduce verbosity and data volume, records with wildcard IP will be omitted when there's a matching record (for the same process, port, and protocol) with a specific IP address. When a wildcard IP record is omitted, the *IsWildcardBind* property for the record with the specific IP address, will be set to *True*. This indicates the port is exposed over every interface of the reporting machine.
- Ports that are bound only on a specific interface have IsWildcardBind set to *False*. ### VMComputer records
Records with a type of *VMComputer* have inventory data for servers with the Dep
|TimeGenerated | Timestamp of the record (UTC) | |Computer | The computer FQDN | |AgentId | The unique ID of the Log Analytics agent |
-|Machine | Name of the Azure Resource Manager resource for the machine exposed by ServiceMap. It is of the form *m-{GUID}*, where *GUID* is the same GUID as AgentId. |
+|Machine | Name of the Azure Resource Manager resource for the machine exposed by ServiceMap. It's of the form *m-{GUID}*, where *GUID* is the same GUID as AgentId. |
|DisplayName | Display name | |FullDisplayName | Full display name | |HostName | The name of machine without domain name |
Records with a type of *VMProcess* have inventory data for TCP-connected process
|TimeGenerated | Timestamp of the record (UTC) | |Computer | The computer FQDN | |AgentId | The unique ID of the Log Analytics agent |
-|Machine | Name of the Azure Resource Manager resource for the machine exposed by ServiceMap. It is of the form *m-{GUID}*, where *GUID* is the same GUID as AgentId. |
-|Process | The unique identifier of the Service Map process. It is in the form of *p-{GUID}*.
+|Machine | Name of the Azure Resource Manager resource for the machine exposed by ServiceMap. It's of the form *m-{GUID}*, where *GUID* is the same GUID as AgentId. |
+|Process | The unique identifier of the Service Map process. It's in the form of *p-{GUID}*.
|ExecutableName | The name of the process executable | |DisplayName | Process display name | |Role | Process role: *webserver*, *appServer*, *databaseServer*, *ldapServer*, *smbServer* |
The performance counters currently collected into the *InsightsMetrics* table ar
## Next steps
-* If you are new to writing log queries in Azure Monitor, review [how to use Log Analytics](../logs/log-analytics-tutorial.md) in the Azure portal to write log queries.
+* If you're new to writing log queries in Azure Monitor, review [how to use Log Analytics](../logs/log-analytics-tutorial.md) in the Azure portal to write log queries.
* Learn about [writing search queries](../logs/get-started-queries.md).
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 07/24/2023 Last updated : 08/10/2023
Azure NetApp Files volumes are designed to be contained in a special purpose sub
* UAE Central * UAE North * UK South
+* UK West
* West Europe * West US * West US 2
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
This section provides references to SAP on Azure solutions.
* [Disaster Recovery with Azure NetApp Files, JetStream DR and AVS (Azure VMware Solution)](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/disaster-recovery-with-azure-netapp-files-jetstream-dr-and-avs-azure-vmware-solution/) - Jetstream * [Enable App Volume Replication for Horizon VDI on Azure VMware Solution using Azure NetApp Files](https://techcommunity.microsoft.com/t5/azure-migration-and/enable-app-volume-replication-for-horizon-vdi-on-azure-vmware/ba-p/3798178) * [Disaster Recovery using cross-region replication with Azure NetApp Files datastores for AVS](https://techcommunity.microsoft.com/t5/azure-architecture-blog/disaster-recovery-using-cross-region-replication-with-azure/ba-p/3870682)
+* [Protecting Azure VMware Solution VMs and datastores on Azure NetApp Files with Cloud Backup for VMs](https://techcommunity.microsoft.com/t5/azure-architecture-blog/protecting-azure-vmware-solution-vms-and-datastores-on-azure/ba-p/3894887)
## Virtual Desktop Infrastructure solutions
azure-netapp-files Dual Protocol Permission Behaviors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/dual-protocol-permission-behaviors.md
The following table breaks down the different name mapping permutations and how
| NFSv4.x | UNIX | Numeric ID to UNIX user name | UNIX <br> (mode-bits or NFSv4.x ACLs) | | NFS3/4.x | NTFS | UNIX to Windows | NTFS ACLs <br> (based on mapped Windows user SID) |
-> [!NOTE]
-> NFSv4.x ACLs can be applied using an NFSv4.x administrative client and honored by NFSv3 clients by [switching between protocols](convert-nfsv3-nfsv41.md).
-
-Name-mapping rules in Azure NetApp Files can currently be controlled only by using LDAP. There is no option to create explicit name mapping rules within the service.
-
+> [!NOTE]
+> Name-mapping rules in Azure NetApp Files can currently be controlled only by using LDAP. There is no option to create explicit name mapping rules within the service.
## Name services with dual-protocol volumes Regardless of what NAS protocol is used, dual-protocol volumes use name-mapping concepts to handle permissions properly. As such, name services play a critical role in maintaining functionality in environments that use both SMB and NFS for access to volumes.
azure-relay Authenticate Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/authenticate-application.md
Title: Authenticate from an application - Azure Relay description: This article provides information about authenticating an application with Azure Active Directory to access Azure Relay resources. Previously updated : 07/22/2022 Last updated : 08/10/2023 # Authenticate and authorize an application with Azure Active Directory to access Azure Relay entities
For step-by-step instructions to register your application with Azure AD, see [Q
> Make note of the **Directory (tenant) ID** and the **Application (client) ID**. You will need these values to run the sample application. ### Create a client secret
-The application needs a client secret to prove its identity when requesting a token. In the same article linked above, see the [Add a client secret](../active-directory/develop/quickstart-register-app.md#add-a-client-secret) section to create a client secret.
+The application needs a client secret to prove its identity when requesting a token. In the same article linked earlier, see the [Add a client secret](../active-directory/develop/quickstart-register-app.md#add-a-client-secret) section to create a client secret.
> [!IMPORTANT] > Make note of the **Client Secret**. You will need it to run the sample application.
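If you prefer scripting to the portal, a rough PowerShell sketch for creating the registration and a client secret might look like the following. The display name is a placeholder and it assumes a recent Az module; the secret value is shown only once, so store it securely.

```powershell
# Create an app registration and a service principal for it (display name is a placeholder).
$app = New-AzADApplication -DisplayName "relay-sample-app"
New-AzADServicePrincipal -ApplicationId $app.AppId

# Add a client secret to the registration and capture its value.
$secret = New-AzADAppCredential -ObjectId $app.Id
$secret.SecretText
```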
Assign one of the Azure Relay roles to the application's service principal at th
1. Run the application locally on your computer per the instructions from the [README article](https://github.com/Azure/azure-relay/tree/master/samples/hybrid-connections/dotnet/rolebasedaccesscontrol#rolebasedaccesscontrol-hybrid-connection-sample). > [!NOTE]
- > Follow the same steps above to run the [sample console application for WCF Relay](https://github.com/Azure/azure-relay/tree/master/samples/wcf-relay/RoleBasedAccessControl).
+ > Follow the same steps to run the [sample console application for WCF Relay](https://github.com/Azure/azure-relay/tree/master/samples/wcf-relay/RoleBasedAccessControl).
#### Highlighted code from the sample Here's the code from the sample that shows how to use Azure AD authentication to connect to the Azure Relay service. 1. Create a [TokenProvider](/dotnet/api/microsoft.azure.relay.tokenprovider) object by using the `TokenProvider.CreateAzureActiveDirectoryTokenProvider` method.
- If you haven't already created an app registration, see the [Register your application with Azure AD](#register-your-application-with-an-azure-ad-tenant) section to create it and then create a client secret as mentioned in the [Create a client secret](#create-a-client-secret) section.
+ If you haven't already created an app registration, see the [Register your application with Azure AD](#register-your-application-with-an-azure-ad-tenant) section to create it, and then create a client secret as mentioned in the [Create a client secret](#create-a-client-secret) section.
If you want to use an existing app registration, follow these instructions to get **Application (client) ID** and **Directory (tenant) ID**.
Here's the code from the sample that shows how to use Azure AD authentication to
1. Search for and select **Azure Active Directory** using the search bar at the top. 1. On the **Azure Active Directory** page, select **App registrations** in the **Manage** section on the left menu. 1. Select your app registration.
- 1. On the page for your app registration, you will see the values for **Application (client) ID** and **Directory (tenant) ID**.
+ 1. On the page for your app registration, you see the values for **Application (client) ID** and **Directory (tenant) ID**.
To get the **client secret**, follow these steps: 1. On the page for your app registration, select **Certificates & secrets** on the left menu.
azure-relay Authenticate Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/authenticate-managed-identity.md
The following section uses a simple application that runs under a managed identi
1. Run RoleBasedAccessControl.exe on the Azure VM as per instructions from the [README document](https://github.com/Azure/azure-relay/tree/master/samples/hybrid-connections/dotnet/rolebasedaccesscontrol#rolebasedaccesscontrol-hybrid-connection-sample). > [!NOTE]
- > Follow the same steps above to run the [console application for WCF Relays](https://github.com/Azure/azure-relay/tree/master/samples/wcf-relay/RoleBasedAccessControl).
+ > Follow the same steps to run the [console application for WCF Relays](https://github.com/Azure/azure-relay/tree/master/samples/wcf-relay/RoleBasedAccessControl).
#### Highlighted code from the sample Here's the code from the sample that shows how to use Azure AD authentication to connect to the Azure Relay service. 1. Create a [TokenProvider](/dotnet/api/microsoft.azure.relay.tokenprovider) object by using the `TokenProvider.CreateManagedIdentityTokenProvider` method.
- - If you are using a **system-assigned managed identity:**
+ - If you're using a **system-assigned managed identity:**
```csharp TokenProvider.CreateManagedIdentityTokenProvider(); ```
- - If you are using a **user-assigned managed identity**, get the **Client ID** for the user-assigned identity from the **Managed Identity** page in the Azure portal. For instructions, see [List user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp#list-user-assigned-managed-identities).
+ - If you're using a **user-assigned managed identity**, get the **Client ID** for the user-assigned identity from the **Managed Identity** page in the Azure portal. For instructions, see [List user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp#list-user-assigned-managed-identities).
```csharp var managedCredential = new ManagedIdentityCredential(clientId); tokenProvider = TokenProvider.CreateManagedIdentityTokenProvider(managedCredential);
Here's the code from the sample that shows how to use Azure AD authentication to
- WCF Relay: [.NET](https://github.com/Azure/azure-relay/tree/master/samples/wcf-relay/RoleBasedAccessControl) ## Next steps
-To learn more about Azure Relay, see the following topics.
+To learn more about Azure Relay, see the following articles.
- [What is Relay?](relay-what-is-it.md) - [Get started with Azure Relay Hybrid connections WebSockets](relay-hybrid-connections-dotnet-get-started.md) - [Get started with Azure Relay Hybrid connections HTTP requests](relay-hybrid-connections-http-requests-dotnet-get-started.md)
azure-relay Move Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/move-across-regions.md
Title: Move an Azure Relay namespace to another region description: This article shows you how to move an Azure Relay namespace from the current region to another region. Previously updated : 06/21/2022 Last updated : 08/10/2023
This article shows you how to move an Azure Relay namespace from one region to a
WCF relays have two modes. In the first mode, the WCF relay is explicitly created using the Azure portal or Azure Resource Manager template. On the **WCF Relays** page of the Azure portal, you see the **isDynamic** property set to **false** for a relay in this mode.
- In the second mode, the WCF relay is auto-generated when a listener (server) connects for a given endpoint address. As long as the listener is connected to the relay, you see the relay in the list of WCF relays in the Azure portal. For a relay in this mode, the **isDynamic** property is set to **true** because it's dynamically generated. The dynamic WCF relay goes away when the listener disconnects.
+ In the second mode, the WCF relay is autogenerated when a listener (server) connects for a given endpoint address. As long as the listener is connected to the relay, you see the relay in the list of WCF relays in the Azure portal. For a relay in this mode, the **isDynamic** property is set to **true** because it's dynamically generated. The dynamic WCF relay goes away when the listener disconnects.
1. **Deploy** resources using the template to the target region. ## Prerequisites
To get started, export a Resource Manager template. This template contains setti
1. Sign in to the [Azure portal](https://portal.azure.com). 2. Select **All resources** and then select your Azure Relay namespace.
-3. Select **Export template** under **Settings** in the left menu.
+3. Select **Export template** under **Automation** in the left menu.
4. Choose **Download** on the **Export template** page. :::image type="content" source="./media/move-across-regions/download-template.png" alt-text="Download Resource Manager template"::: 5. Locate the .zip file that you downloaded from the portal, and unzip that file to a folder of your choice. This zip file contains the template and parameters JSON files. 1. Open the **template.json** file from the extracted folder in an editor of your choice. 1. Search for `location`, and replace the value for the property with the new name for the region. To obtain location codes, see [Azure locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces, for example, `West US` is equal to `westus`.
-1. Remove definitions of **dynamic WCF relay** resources (type: `Microsoft.Relay/namespaces/WcfRelays`). Dynamic WCF relays are the ones that have **isDynamic** property set to **true** on the **Relays** page. In the following example, **echoservice** is a dynamic WCF relay and its definition should be removed from the template.
+1. Remove definitions of **dynamic WCF relay** resources (type: `Microsoft.Relay/namespaces/WcfRelays`). Dynamic WCF relays are the ones that have **isDynamic** property set to **true** on the **Relays** page. In the following example, `echoservice` is a dynamic WCF relay and its definition should be removed from the template.
:::image type="content" source="./media/move-across-regions/dynamic-relays.png" alt-text="Dynamic relays":::
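If you'd rather script the export step than use the portal, here's a minimal sketch that exports the template for a single Relay namespace. The resource group, subscription, and namespace names are placeholders, and it assumes the Az.Resources module.

```powershell
# Export the Resource Manager template for a single Relay namespace to a local file.
# Resource group, subscription, and namespace names are placeholders.
$namespaceId = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Relay/namespaces/<namespace-name>"

Export-AzResourceGroup -ResourceGroupName "<rg>" `
    -Resource @($namespaceId) `
    -Path ".\relay-namespace-template.json"
```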
azure-relay Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/network-security.md
Title: Network security for Azure Relay description: This article describes how to use IP firewall rules and private endpoints with Azure Relay. Previously updated : 06/21/2022 Last updated : 08/10/2023 # Network security for Azure Relay
By default, Relay namespaces are accessible from internet as long as the request
This feature is helpful in scenarios in which Azure Relay should be only accessible from certain well-known sites. Firewall rules enable you to configure rules to accept traffic originating from specific IPv4 addresses. For example, if you use Relay with [Azure Express Route](../expressroute/expressroute-faqs.md#supported-services), you can create a **firewall rule** to allow traffic from only your on-premises infrastructure IP addresses.
-The IP firewall rules are applied at the Relay namespace level. Therefore, the rules apply to all connections from clients using any supported protocol. Any connection attempt from an IP address that does not match an allowed IP rule on the Relay namespace is rejected as unauthorized. The response does not mention the IP rule. IP filter rules are applied in order, and the first rule that matches the IP address determines the accept or reject action.
+The IP firewall rules are applied at the Relay namespace level. Therefore, the rules apply to all connections from clients using any supported protocol. Any connection attempt from an IP address that doesn't match an allowed IP rule on the Relay namespace is rejected as unauthorized. The response doesn't mention the IP rule. IP filter rules are applied in order, and the first rule that matches the IP address determines the accept or reject action.
For more information, see [How to configure IP firewall for a Relay namespace](ip-firewall-virtual-networks.md)
azure-relay Relay Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-api-overview.md
Title: Azure Relay API overview | Microsoft Docs
description: This article provides an overview of available Azure Relay APIs (.NET Standard, .NET Framework, Node.js, etc.) Previously updated : 06/21/2022 Last updated : 08/10/2023 # Available Relay APIs
Hybrid Connections with ASP.NET Core for web services.
#### Node.js
-The Hybrid Connections modules listed in the table above replace or amend
+The Hybrid Connections modules replace or amend
existing Node.js modules with alternative implementations that listen on the Azure Relay service instead of the local networking stack.
azure-relay Relay Authentication And Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-authentication-and-authorization.md
Title: Azure Relay authentication and authorization | Microsoft Docs description: This article provides an overview of Shared Access Signature (SAS) authentication with the Azure Relay service. Previously updated : 07/22/2022 Last updated : 08/10/2023 # Azure Relay authentication and authorization
azure-relay Relay Create Namespace Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-create-namespace-portal.md
Title: Create a Relay namespace using the Azure portal | Microsoft Docs description: This article provides a walkthrough that shows you how to create a Relay namespace using the Azure portal.- Previously updated : 06/21/2022+ Last updated : 08/10/2023 # Create a Relay namespace using the Azure portal
azure-relay Relay Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-exceptions.md
Title: Azure Relay exceptions and how to resolve them | Microsoft Docs description: List of Azure Relay exceptions and suggested actions you can take to help resolve them. Previously updated : 06/21/2022 Last updated : 08/10/2023 # Azure Relay exceptions
The following table lists messaging exception types and their causes. It also no
| **Exception type** | **Description** | **Suggested action** | **Note on automatic or immediate retry** | | | | | |
-| [Timeout](/dotnet/api/system.timeoutexception) |The server did not respond to the requested operation within the specified time, which is controlled by [OperationTimeout](/dotnet/api/microsoft.servicebus.messaging.messagingfactorysettings.operationtimeout). The server might have completed the requested operation. This can happen due to network or other infrastructure delays. |Check the system state for consistency, and then retry, if necessary. See [TimeoutException](#timeoutexception). |Retry might help in some cases; add retry logic to code. |
-| [Invalid Operation](/dotnet/api/system.invalidoperationexception) |The requested user operation is not allowed within the server or service. See the exception message for details. |Check the code and the documentation. Make sure that the requested operation is valid. |Retry will not help. |
-| [Operation Canceled](/dotnet/api/system.operationcanceledexception) |An attempt is made to invoke an operation on an object that has already been closed, aborted, or disposed. In rare cases, the ambient transaction is already disposed. |Check the code and make sure it does not invoke operations on a disposed object. |Retry will not help. |
-| [Unauthorized Access](/dotnet/api/system.unauthorizedaccessexception) |The [TokenProvider](/dotnet/api/microsoft.servicebus.tokenprovider) object could not acquire a token, the token is invalid, or the token does not contain the claims required to perform the operation. |Make sure that the token provider is created with the correct values. Check the configuration of the Access Control service. |Retry might help in some cases; add retry logic to code. |
-| [Argument Exception](/dotnet/api/system.argumentexception),<br /> [Argument Null](/dotnet/api/system.argumentnullexception),<br />[Argument Out Of Range](/dotnet/api/system.argumentoutofrangeexception) |One or more of the following has occurred:<br />One or more arguments supplied to the method are invalid.<br /> The URI supplied to [NamespaceManager](/dotnet/api/microsoft.servicebus.namespacemanager) or [Create](/dotnet/api/microsoft.servicebus.messaging.messagingfactory.create) contains one or more path segments.<br />The URI scheme supplied to [NamespaceManager](/dotnet/api/microsoft.servicebus.namespacemanager) or [Create](/dotnet/api/microsoft.servicebus.messaging.messagingfactory.create) is invalid. <br />The property value is larger than 32 KB. |Check the calling code and make sure the arguments are correct. |Retry will not help. |
-| [Server Busy](/dotnet/api/microsoft.servicebus.messaging.serverbusyexception) |Service is not able to process the request at this time. |The client can wait for a period of time, then retry the operation. |The client might retry after a specific interval. If a retry results in a different exception, check the retry behavior of that exception. |
+| [Timeout](/dotnet/api/system.timeoutexception) |The server didn't respond to the requested operation within the specified time, which is controlled by [OperationTimeout](/dotnet/api/microsoft.servicebus.messaging.messagingfactorysettings.operationtimeout). The server might have completed the requested operation. It can happen due to network or other infrastructure delays. |Check the system state for consistency, and then retry, if necessary. See [TimeoutException](#timeoutexception). |Retry might help in some cases; add retry logic to code. |
+| [Invalid Operation](/dotnet/api/system.invalidoperationexception) |The requested user operation isn't allowed within the server or service. See the exception message for details. |Check the code and the documentation. Make sure that the requested operation is valid. |Retry doesn't help. |
+| [Operation Canceled](/dotnet/api/system.operationcanceledexception) |An attempt is made to invoke an operation on an object that has already been closed, aborted, or disposed. In rare cases, the ambient transaction is already disposed. |Check the code and make sure it doesn't invoke operations on a disposed object. |Retry doesn't help. |
+| [Unauthorized Access](/dotnet/api/system.unauthorizedaccessexception) |The [TokenProvider](/dotnet/api/microsoft.servicebus.tokenprovider) object couldn't acquire a token, the token is invalid, or the token doesn't contain the claims required to perform the operation. |Make sure that the token provider is created with the correct values. Check the configuration of the Access Control service. |Retry might help in some cases; add retry logic to code. |
+| [Argument Exception](/dotnet/api/system.argumentexception),<br /> [Argument Null](/dotnet/api/system.argumentnullexception),<br />[Argument Out Of Range](/dotnet/api/system.argumentoutofrangeexception) |One or more of the following has occurred:<br />One or more arguments supplied to the method are invalid.<br /> The URI supplied to [NamespaceManager](/dotnet/api/microsoft.servicebus.namespacemanager) or [Create](/dotnet/api/microsoft.servicebus.messaging.messagingfactory.create) contains one or more path segments.<br />The URI scheme supplied to [NamespaceManager](/dotnet/api/microsoft.servicebus.namespacemanager) or [Create](/dotnet/api/microsoft.servicebus.messaging.messagingfactory.create) is invalid. <br />The property value is larger than 32 KB. |Check the calling code and make sure the arguments are correct. |Retry doesn't help. |
+| [Server Busy](/dotnet/api/microsoft.servicebus.messaging.serverbusyexception) |Service isn't able to process the request at this time. |The client can wait for a period of time, then retry the operation. |The client might retry after a specific interval. If a retry results in a different exception, check the retry behavior of that exception. |
| [Quota Exceeded](/dotnet/api/microsoft.servicebus.messaging.quotaexceededexception) |The messaging entity has reached its maximum allowable size. |Create space in the entity by receiving messages from the entity or its subqueues. See [QuotaExceededException](#quotaexceededexception). |Retry might help if messages have been removed in the meantime. |
-| [Message Size Exceeded](/dotnet/api/microsoft.servicebus.messaging.messagesizeexceededexception) |A message payload exceeds the 256-KB limit. Note that the 256-KB limit is the total message size. The total message size can include system properties and any Microsoft .NET overhead. |Reduce the size of the message payload, then retry the operation. |Retry will not help. |
+| [Message Size Exceeded](/dotnet/api/microsoft.servicebus.messaging.messagesizeexceededexception) |A message payload exceeds the 256-KB limit. Note that the 256-KB limit is the total message size. The total message size can include system properties and any Microsoft .NET overhead. |Reduce the size of the message payload, then retry the operation. |Retry doesn't help. |
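For the transient cases in this table (for example, **Server Busy** and **Timeout**), a small backoff-and-retry wrapper is usually enough. The following sketch is illustrative only: it assumes the legacy `Microsoft.ServiceBus` client library, and the helper name and retry parameters are made up for this example.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

static class TransientRetry
{
    // Retries a caller-supplied operation on transient Relay/Service Bus exceptions.
    public static async Task RunAsync(Func<Task> operation, int maxAttempts = 3)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                await operation();
                return;
            }
            catch (Exception ex) when (
                (ex is ServerBusyException || ex is TimeoutException) && attempt < maxAttempts)
            {
                // Back off before the next attempt; non-transient exceptions propagate.
                await Task.Delay(TimeSpan.FromSeconds(5 * attempt));
            }
        }
    }
}
```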
## QuotaExceededException [QuotaExceededException](/dotnet/api/microsoft.servicebus.messaging.quotaexceededexception) indicates that a quota for a specific entity has been exceeded.
-For Relay, this exception wraps the [System.ServiceModel.QuotaExceededException](/dotnet/api/system.servicemodel.quotaexceededexception), which indicates that the maximum number of listeners has been exceeded for this endpoint. This is indicated in the **MaximumListenersPerEndpoint** value of the exception message.
+For Relay, this exception wraps the [System.ServiceModel.QuotaExceededException](/dotnet/api/system.servicemodel.quotaexceededexception), which indicates that the maximum number of listeners has been exceeded for this endpoint. It's indicated in the **MaximumListenersPerEndpoint** value of the exception message.
## TimeoutException A [TimeoutException](/dotnet/api/system.timeoutexception) indicates that a user-initiated operation is taking longer than the operation timeout.
There are two common causes for this error:
The operation timeout might be too small for the operational condition. The default value for the operation timeout in the client SDK is 60 seconds. Check to see whether the value in your code is set to something too small. Note that CPU usage and the condition of the network can affect the time it takes for an operation to complete. It's a good idea not to set the operation timeout to a very small value.
* **Transient service error**
- Occasionally, the Relay service might experience delays in processing requests. This might happen, for example, during periods of high traffic. If this occurs, retry your operation after a delay, until the operation is successful. If the same operation continues to fail after multiple attempts, check the [Azure service status site](https://azure.microsoft.com/status/) to see if there are known service outages.
+ Occasionally, the Relay service might experience delays in processing requests. It might happen, for example, during periods of high traffic. If it occurs, retry your operation after a delay, until the operation is successful. If the same operation continues to fail after multiple attempts, check the [Azure service status site](https://azure.microsoft.com/status/) to see if there are known service outages.
## Next steps * [Azure Relay FAQs](relay-faq.yml)
azure-relay Relay Hybrid Connections Dotnet Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-dotnet-api-overview.md
Title: Overview of Azure Relay .NET Standard APIs | Microsoft Docs
description: This article provides an overview of the Azure Relay Hybrid Connections .NET Standard API. Previously updated : 06/21/2022 Last updated : 08/10/2023 # Azure Relay Hybrid Connections .NET Standard API overview
catch (ArgumentException ae)
## Hybrid connection stream
-The [HybridConnectionStream][HCStream] class is the primary object used to send and receive data from an Azure Relay endpoint, whether you are working with a [HybridConnectionClient][HCClient], or a [HybridConnectionListener][HCListener].
+The [HybridConnectionStream][HCStream] class is the primary object used to send and receive data from an Azure Relay endpoint, whether you're working with a [HybridConnectionClient][HCClient], or a [HybridConnectionListener][HCListener].
### Getting a Hybrid connection stream
var hybridConnectionStream = await client.CreateConnectionAsync();
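The line above assumes an existing `HybridConnectionClient`. A slightly fuller, illustrative sketch using the `Microsoft.Azure.Relay` package, with placeholder namespace, hybrid connection, and SAS key values:

```csharp
using System;
using Microsoft.Azure.Relay;

// Placeholders: replace with your namespace, hybrid connection name, SAS rule, and key.
var tokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
    "RootManageSharedAccessKey", "<your-sas-key>");

var client = new HybridConnectionClient(
    new Uri("sb://<your-namespace>.servicebus.windows.net/<hybrid-connection-name>"),
    tokenProvider);

// Opens a HybridConnectionStream to a connected listener.
var hybridConnectionStream = await client.CreateConnectionAsync();
```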
### Receiving data
-The [HybridConnectionStream][HCStream] class enables two-way communication. In most cases, you continuously receive from the stream. If you are reading text from the stream, you might also want to use a [StreamReader](/dotnet/api/system.io.streamreader) object, which enables easier parsing of the data. For example, you can read data as text, rather than as `byte[]`.
+The [HybridConnectionStream][HCStream] class enables two-way communication. In most cases, you continuously receive from the stream. If you're reading text from the stream, you might also want to use a [StreamReader](/dotnet/api/system.io.streamreader) object, which enables easier parsing of the data. For example, you can read data as text, rather than as `byte[]`.
The following code reads individual lines of text from the stream until a cancellation is requested:
while (!cancellationToken.IsCancellationRequested)
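A minimal, illustrative version of that loop wraps the relay stream in a `StreamReader`; the stream and cancellation token are assumed to come from the surrounding code.

```csharp
using System;
using System.IO;

using (var reader = new StreamReader(hybridConnectionStream))
{
    while (!cancellationToken.IsCancellationRequested)
    {
        // ReadLineAsync returns null when the remote side closes the stream.
        var line = await reader.ReadLineAsync();
        if (line == null)
        {
            break;
        }
        Console.WriteLine(line);
    }
}
```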
### Sending data
-Once you have a connection established, you can send a message to the Relay endpoint. Because the connection object inherits [Stream](/dotnet/api/system.io.stream), send your data as a `byte[]`. The following example shows how to do this:
+Once you have a connection established, you can send a message to the Relay endpoint. Because the connection object inherits [Stream](/dotnet/api/system.io.stream), send your data as a `byte[]`. The following example shows how to do it:
```csharp
var data = Encoding.UTF8.GetBytes("hello");
await hybridConnectionStream.WriteAsync(data, 0, data.Length);
```
azure-relay Relay Hybrid Connections Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-dotnet-get-started.md
Title: Azure Relay Hybrid Connections - WebSockets in .NET description: Write a C# console application for Azure Relay Hybrid Connections WebSockets.-+ Previously updated : 06/21/2022 Last updated : 08/10/2023 # Get started with Relay Hybrid Connections WebSockets in .NET [!INCLUDE [relay-selector-hybrid-connections](./includes/relay-selector-hybrid-connections.md)]
-In this quickstart, you create .NET sender and receiver applications that send and receive messages by using Hybrid Connections WebSockets in Azure Relay.
-To learn about Azure Relay in general, see [Azure Relay](relay-what-is-it.md).
+In this quickstart, you create .NET sender and receiver applications that send and receive messages by using Hybrid Connections WebSockets in Azure Relay. To learn about Azure Relay in general, see [Azure Relay](relay-what-is-it.md).
In this quickstart, you take the following steps:
azure-relay Relay Hybrid Connections Http Requests Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-http-requests-dotnet-get-started.md
Title: Azure Relay Hybrid Connections - HTTP requests in .NET description: Write a C# console application for Azure Relay Hybrid Connections HTTP requests in .NET.-+ Previously updated : 09/26/2022 Last updated : 08/10/2023 # Get started with Relay Hybrid Connections HTTP requests in .NET
azure-relay Relay Hybrid Connections Http Requests Node Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-http-requests-node-get-started.md
Title: Azure Relay Hybrid Connections - HTTP requests in Node.js description: Write a Node.js console application for Azure Relay Hybrid Connections HTTP requests.-+ Last updated 06/21/2022
azure-relay Relay Hybrid Connections Node Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-node-get-started.md
Title: Azure Relay Hybrid Connections - WebSockets in Node description: Write a Node.js console application for Azure Relay Hybrid Connections WebSockets-+ Last updated 06/21/2022
azure-relay Relay Hybrid Connections Node Ws Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-node-ws-api-overview.md
Title: Overview of the Azure Relay Node APIs | Microsoft Docs description: This article provides an overview of the Node.js API for the Azure Relay service. It also shows how to use the hyco-ws Node package. Previously updated : 06/21/2022 Last updated : 08/10/2023
listenUri = WebSocket.appendRelayToken(listenUri, 'ruleName', '...key...')
```
-The helper methods are for use with this package, but can also be used by a Node server for enabling web or device clients to create listeners or senders. The server uses these methods by passing them URIs that embed short-lived tokens. These URIs can also be used with common WebSocket stacks that do not support setting HTTP headers for the WebSocket handshake. Embedding authorization tokens into the URI is supported primarily for those library-external usage scenarios.
+The helper methods are for use with this package, but can also be used by a Node server for enabling web or device clients to create listeners or senders. The server uses these methods by passing them URIs that embed short-lived tokens. These URIs can also be used with common WebSocket stacks that don't support setting HTTP headers for the WebSocket handshake. Embedding authorization tokens into the URI is supported primarily for those library-external usage scenarios.
#### createRelayListenUri
URI can then be used with the relay version of the WebSocketServer class.
- `token` (optional) - a previously issued Relay access token that is embedded in the listener URI (see the following example). - `id` (optional) - a tracking identifier that enables end-to-end diagnostics tracking of requests.
-The `token` value is optional and should only be used when it is not possible to send HTTP headers along with the WebSocket handshake, as is the case with the W3C WebSocket stack.
+The `token` value is optional and should only be used when it isn't possible to send HTTP headers along with the WebSocket handshake, as is the case with the W3C WebSocket stack.
#### createRelaySendUri
URI can be used with any WebSocket client.
- `token` (optional) - a previously issued Relay access token that is embedded in the send URI (see the following example). - `id` (optional) - a tracking identifier that enables end-to-end diagnostics tracking of requests.
-The `token` value is optional and should only be used when it is not possible to send HTTP headers along with the WebSocket handshake, as is the case with the W3C WebSocket stack.
+The `token` value is optional and should only be used when it isn't possible to send HTTP headers along with the WebSocket handshake, as is the case with the W3C WebSocket stack.
#### createRelayToken
returns the token correctly appended to the input URI.
### Class ws.RelayedServer
-The `hycows.RelayedServer` class is an alternative to the `ws.Server` class that does not listen on the local network, but delegates listening to the Azure Relay service.
+The `hycows.RelayedServer` class is an alternative to the `ws.Server` class that doesn't listen on the local network, but delegates listening to the Azure Relay service.
The two classes are mostly contract compatible, meaning that an existing application using the `ws.Server` class can easily be changed to use the relayed version. The main differences are in the constructor and in the available options.
var wss = new server(
}); ```
-The `RelayedServer` constructor supports a different set of arguments than the `Server`, because it is not a standalone listener, or able to be embedded into an existing HTTP listener framework. There are also fewer options available since the WebSocket management is largely delegated to the Relay service.
+The `RelayedServer` constructor supports a different set of arguments than the `Server`, because it isn't a standalone listener, or able to be embedded into an existing HTTP listener framework. There are also fewer options available since the WebSocket management is largely delegated to the Relay service.
Constructor arguments:
Emitted when a new WebSocket connection is accepted. The object is of type `ws.W
function(error) ```
-If the underlying server emits an error, it is forwarded here.
+If the underlying server emits an error, it's forwarded here.
#### Helpers
azure-relay Relay Hybrid Connections Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-protocol.md
Title: Azure Relay Hybrid Connections protocol guide | Microsoft Docs
description: This article describes the client-side interactions with the Hybrid Connections relay for connecting clients in listener and sender roles. Previously updated : 06/21/2022 Last updated : 08/10/2023 # Azure Relay Hybrid Connections protocol
in progress.
#### Renew operation The security token that must be used to register the listener and maintain the
-control channel may expire while the listener is active. The token expiry does
-not affect ongoing connections, but it does cause the control channel to be
+control channel may expire while the listener is active. The token expiry doesn't affect ongoing connections, but it does cause the control channel to be
dropped by the service at or soon after the moment of expiry. The "renew" operation is a JSON message that the listener can send to replace the token associated with the control channel, so that the control channel can be
reconnect.
### Sender interaction The sender has two interactions with the service: it connects a Web Socket or
-it sends requests via HTTPS. Requests cannot be sent over a Web Socket from the
+it sends requests via HTTPS. Requests can't be sent over a Web Socket from the
sender role. #### Connect operation
information as follows:
is present, the header will be evaluated and stripped. Otherwise, the `Authorization` is always passed on as-is.
-If there is no active listener, the service will return a 502 "Bad Gateway"
-error code. If the service does not appear to handle the request, the service
+If there's no active listener, the service will return a 502 "Bad Gateway"
+error code. If the service doesn't appear to handle the request, the service
will return a 504 "Gateway Timeout" after 60 seconds. ### Interaction summary
previously.
All WebSocket connections are made on port 443 as an upgrade from HTTPS 1.1, which is commonly abstracted by some WebSocket framework or API. The
-description here is kept implementation neutral, without suggesting a specific
+description here is kept implementation neutral, without suggesting a specific
framework. ### Listener protocol
Azure support personnel:
If the WebSocket connection is intentionally shut down by the service after it was initially set up, the reason for doing so is communicated using an appropriate WebSocket protocol error code along with a descriptive error
-message that also includes a tracking ID. The service will not shut down the
+message that also includes a tracking ID. The service won't shut down the
control channel without encountering an error condition. Any clean shutdown is client controlled.
client controlled.
The "accept" notification is sent by the service to the listener over the previously established control channel as a JSON message in a WebSocket text
-frame. There is no reply to this message.
+frame. There's no reply to this message.
The message contains a JSON object named "accept", which defines the following properties at this time:
properties at this time:
* **address** – the URL string to be used for establishing the WebSocket to the service to accept an incoming connection.
* **id** – the unique identifier for this connection. If the ID was supplied by
- the sender client, it is the sender supplied value, otherwise it is a system-generated value.
+ the sender client, it's the sender supplied value, otherwise it's a system-generated value.
* **connectHeaders** – all HTTP headers that have been supplied to the Relay endpoint by the sender, which also includes the Sec-WebSocket-Protocol and the Sec-WebSocket-Extensions headers.
establish the WebSocket for accepting or rejecting the sender socket.
To accept, the listener establishes a WebSocket connection to the provided address.
-If the "accept" message carries a `Sec-WebSocket-Protocol` header, it is
+If the "accept" message carries a `Sec-WebSocket-Protocol` header, it's
expected that the listener only accepts the WebSocket if it supports that protocol. Additionally, it sets the header as the WebSocket is established.
deciding whether to accept the connection.
For more information, see the following "Sender Protocol" section.
-If there is an error, the service can reply as follows:
+If there's an error, the service can reply as follows:
| Code | Error | Description | - | -- | --
If there is an error, the service can reply as follows:
handshake so that the status code and status description communicating the reason for the rejection can flow back to the sender.
- The protocol design choice here is to use a WebSocket handshake (that is
+ The protocol design choice here is to use a WebSocket handshake (that is
designed to end in a defined error state) so that listener client
- implementations can continue to rely on a WebSocket client and do not need to
+ implementations can continue to rely on a WebSocket client and don't need to
employ an extra, bare HTTP client. To reject the socket, the client takes the address URI from the `accept`
the control channel. The same message is also sent over the rendezvous
WebSocket once established. The `request` consists of two parts: a header and binary body frame(s).
-If there is no body, the body frames are omitted. The boolean `body` property indicates whether a body is present in the request
+If there's no body, the body frames are omitted. The boolean `body` property indicates whether a body is present in the request
message. For a request with a request body, the structure may look like this:
maintaining the connection might result in the listener getting blocked.
Responses may be sent in any order, but each request must be responded to within 60 seconds or the delivery will be reported as having failed. The 60-second deadline is counted until the `response` frame has been received
-by the service. An ongoing response with multiple binary frames cannot
-become idle for more than 60 seconds or it is terminated.
+by the service. An ongoing response with multiple binary frames can't
+become idle for more than 60 seconds or it's terminated.
If the request is received over the control channel, the response MUST either be sent on the control channel from where the request was received
the rendezvous socket, but contains the following parameters:
| -- | -- | - | `sb-hc-action` | Yes | For accepting a socket, the parameter must be `sb-hc-action=request`
-If there is an error, the service can reply as follows:
+If there's an error, the service can reply as follows:
| Code | Error | Description | - | | --
property at this time:
``` If the token validation fails, access is denied, and the cloud service closes
-the control channel WebSocket with an error. Otherwise there is no reply.
+the control channel WebSocket with an error. Otherwise there's no reply.
| WS Status | Description | | | - |
azure-relay Relay Metrics Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-metrics-azure-monitor.md
Title: Azure Relay metrics in Azure Monitor | Microsoft Docs
description: This article provides information on how you can use Azure Monitor to monitor the state of Azure Relay. Previously updated : 06/21/2022 Last updated : 08/10/2023 # Azure Relay metrics in Azure Monitor
Azure Monitor provides unified user interfaces for monitoring across various Azu
Azure Monitor provides multiple ways to access metrics. You can either access metrics through the [Azure portal](https://portal.azure.com), or use the Azure Monitor APIs (REST and .NET) and analysis solutions such as Operation Management Suite and Event Hubs. For more information, see [Monitoring data collected by Azure Monitor](../azure-monitor/data-platform.md).
-Metrics are enabled by default, and you can access the most recent 30 days of data. If you need to retain data for a longer period of time, you can archive metrics data to an Azure Storage account. This is configured in [diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md) in Azure Monitor.
+Metrics are enabled by default, and you can access the most recent 30 days of data. If you need to retain data for a longer period of time, you can archive metrics data to an Azure Storage account. You configure archiving in [diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md) in Azure Monitor.
## Access metrics in the portal
You can monitor metrics over time in the [Azure portal](https://portal.azure.com
![A page titled "Monitor - Metrics (preview)" shows a line graph of memory usage for the last 30 days.][1]
-You can also access metrics directly via the namespace. To do so, select your namespace and then click **Metrics**.
+You can also access metrics directly via the namespace. To do so, select your namespace and then select **Metrics**.
For metrics supporting dimensions, you must filter with the desired dimension value. ## Billing
-Using metrics in Azure Monitor is currently free while in preview. However, if you use additional solutions that ingest metrics data, you may be billed by these solutions. For example, you are billed by Azure Storage if you archive metrics data to an Azure Storage account. You are also billed by Azure Monitor logs if you stream metrics data to Azure Monitor logs for advanced analysis.
+Using metrics in Azure Monitor is currently free while in preview. However, if you use additional solutions that ingest metrics data, you may be billed by these solutions. For example, you're billed by Azure Storage if you archive metrics data to an Azure Storage account. You're also billed by Azure Monitor logs if you stream metrics data to Azure Monitor logs for advanced analysis.
The following metrics give you an overview of the health of your service.
All metrics values are sent to Azure Monitor every minute. The time granularity
## Metrics dimensions
-Azure Relay supports the following dimensions for metrics in Azure Monitor. Adding dimensions to your metrics is optional. If you do not add dimensions, metrics are specified at the namespace level.
+Azure Relay supports the following dimensions for metrics in Azure Monitor. Adding dimensions to your metrics is optional. If you don't add dimensions, metrics are specified at the namespace level.
|Dimension name|Description| | - | -- |
azure-relay Relay Migrate Acs Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-migrate-acs-sas.md
Title: Azure Relay - Migrate to Shared Access Signature authorization description: Describes how to migrate Azure Relay applications from using Azure Active Directory Access Control Service to Shared Access Signature authorization. Previously updated : 06/21/2022 Last updated : 08/10/2023 # Azure Relay - Migrate from Azure Active Directory Access Control Service to Shared Access Signature authorization Azure Relay applications historically had a choice of using two different authorization models: the [Shared Access Signature (SAS)](../service-bus-messaging/service-bus-sas.md) token model provided directly by the Relay service, and a federated model where the management of authorization rules is managed inside by the [Azure Active Directory](../active-directory/index.yml) Access Control Service (ACS), and tokens obtained from ACS are passed to Relay for authorizing access to the desired features.
-The ACS authorization model has long been superseded by [SAS authorization](../service-bus-messaging/service-bus-authentication-and-authorization.md) as the preferred model, and all documentation, guidance, and samples exclusively use SAS today. Moreover, it is no longer possible to create new Relay namespaces that are paired with ACS.
+The ACS authorization model has long been superseded by [SAS authorization](../service-bus-messaging/service-bus-authentication-and-authorization.md) as the preferred model, and all documentation, guidance, and samples exclusively use SAS today. Moreover, it's no longer possible to create new Relay namespaces that are paired with ACS.
-SAS has the advantage in that it is not immediately dependent on another service, but can be used directly from a client without any intermediaries by giving the client access to the SAS rule name and rule key. SAS can also be easily integrated with an approach in which a client has to first pass an authorization check with another service and then is issued a token. The latter approach is similar to the ACS usage pattern, but enables issuing access tokens based on application-specific conditions that are difficult to express in ACS.
+SAS has the advantage that it isn't immediately dependent on another service, but can be used directly from a client without any intermediaries by giving the client access to the SAS rule name and rule key. SAS can also be easily integrated with an approach in which a client has to first pass an authorization check with another service and then is issued a token. The latter approach is similar to the ACS usage pattern, but enables issuing access tokens based on application-specific conditions that are difficult to express in ACS.
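As an illustration of that intermediary approach, a backend service can mint a short-lived SAS token itself and hand it to the client. The sketch below uses the standard Service Bus/Relay SAS token format; the method name and parameters are placeholders for this example.

```csharp
using System;
using System.Globalization;
using System.Net;
using System.Security.Cryptography;
using System.Text;

// Illustrative only: builds a Service Bus / Relay SAS token for a resource URI.
static string CreateSasToken(string resourceUri, string keyName, string key, TimeSpan ttl)
{
    var expiry = DateTimeOffset.UtcNow.Add(ttl).ToUnixTimeSeconds()
        .ToString(CultureInfo.InvariantCulture);
    var stringToSign = WebUtility.UrlEncode(resourceUri) + "\n" + expiry;

    using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)))
    {
        var signature = Convert.ToBase64String(
            hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
        return $"SharedAccessSignature sr={WebUtility.UrlEncode(resourceUri)}" +
               $"&sig={WebUtility.UrlEncode(signature)}&se={expiry}&skn={keyName}";
    }
}
```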
For all existing applications that are dependent on ACS, we urge customers to migrate their applications to rely on SAS instead.
For all existing applications that are dependent on ACS, we urge customers to mi
ACS and Relay are integrated through the shared knowledge of a *signing key*. The signing key is used by an ACS namespace to sign authorization tokens, and it's used by Azure Relay to verify that the token has been issued by the paired ACS namespace. The ACS namespace holds service identities and authorization rules. The authorization rules define which service identity or which token issued by an external identity provider gets which type of access to a part of the Relay namespace graph, in the form of a longest-prefix match.
-For example, an ACS rule might grant the **Send** claim on the path prefix `/` to a service identity, which means that a token issued by ACS based on that rule grants the client rights to send to all entities in the namespace. If the path prefix is `/abc`, the identity is restricted to sending to entities named `abc` or organized beneath that prefix. It is assumed that readers of this migration guidance are already familiar with these concepts.
+For example, an ACS rule might grant the **Send** claim on the path prefix `/` to a service identity, which means that a token issued by ACS based on that rule grants the client rights to send to all entities in the namespace. If the path prefix is `/abc`, the identity is restricted to sending to entities named `abc` or organized beneath that prefix. It's assumed that readers of this migration guidance are already familiar with these concepts.
The migration scenarios fall into three broad categories:
-1. **Unchanged defaults**. Some customers use a [SharedSecretTokenProvider](/dotnet/api/microsoft.servicebus.sharedsecrettokenprovider) object, passing the automatically generated **owner** service identity and its secret key for the ACS namespace, paired with the Relay namespace, and do not add new rules.
+1. **Unchanged defaults**. Some customers use a [SharedSecretTokenProvider](/dotnet/api/microsoft.servicebus.sharedsecrettokenprovider) object, passing the automatically generated **owner** service identity and its secret key for the ACS namespace, paired with the Relay namespace, and don't add new rules.
2. **Custom service identities with simple rules**. Some customers add new service identities and grant each new service identity **Send**, **Listen**, and **Manage** permissions for one specific entity.
For assistance with the migration of complex rule sets, you can contact [Azure s
### Unchanged defaults
-If your application has not changed ACS defaults, you can replace all [SharedSecretTokenProvider](/dotnet/api/microsoft.servicebus.sharedsecrettokenprovider) usage with a [SharedAccessSignatureTokenProvider](/dotnet/api/microsoft.servicebus.sharedaccesssignaturetokenprovider) object, and use the namespace preconfigured **RootManageSharedAccessKey** instead of the ACS **owner** account. Note that even with the ACS **owner** account, this configuration was (and still is) not generally recommended, because this account/rule provides full management authority over the namespace, including permission to delete any entities.
+If your application hasn't changed ACS defaults, you can replace all [SharedSecretTokenProvider](/dotnet/api/microsoft.servicebus.sharedsecrettokenprovider) usage with a [SharedAccessSignatureTokenProvider](/dotnet/api/microsoft.servicebus.sharedaccesssignaturetokenprovider) object, and use the namespace preconfigured **RootManageSharedAccessKey** instead of the ACS **owner** account. Note that even with the ACS **owner** account, this configuration was (and still is) not generally recommended, because this account/rule provides full management authority over the namespace, including permission to delete any entities.
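For example, the swap can be as small as changing how the token provider is created. The snippet below is a sketch, not the article's exact code; the key name and value are placeholders.

```csharp
using Microsoft.ServiceBus;

// Before (ACS):
// var tokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "<acs-owner-key>");

// After (SAS), using the namespace's preconfigured rule:
var tokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
    "RootManageSharedAccessKey", "<your-sas-key>");
```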
### Simple rules If the application uses custom service identities with simple rules, the migration is straightforward in the case where an ACS service identity was created to provide access control on a specific relay. This scenario is often the case in SaaS-style solutions where each relay is used as a bridge to a tenant site or branch office, and the service identity is created for that particular site. In this case, the respective service identity can be migrated to a Shared Access Signature rule, directly on the relay. The service identity name can become the SAS rule name and the service identity key can become the SAS rule key. The rights of the SAS rule are then configured equivalent to the respectively applicable ACS rule for the entity.
-You can make this new and additional configuration of SAS in-place on any existing namespace that is federated with ACS, and the migration away from ACS is subsequently performed by using [SharedAccessSignatureTokenProvider](/dotnet/api/microsoft.servicebus.sharedaccesssignaturetokenprovider) instead of [SharedSecretTokenProvider](/dotnet/api/microsoft.servicebus.sharedsecrettokenprovider). The namespace does not need to be unlinked from ACS.
+You can make this new and additional configuration of SAS in-place on any existing namespace that is federated with ACS, and the migration away from ACS is subsequently performed by using [SharedAccessSignatureTokenProvider](/dotnet/api/microsoft.servicebus.sharedaccesssignaturetokenprovider) instead of [SharedSecretTokenProvider](/dotnet/api/microsoft.servicebus.sharedsecrettokenprovider). The namespace doesn't need to be unlinked from ACS.
### Complex rules
-SAS rules are not meant to be accounts, but are named signing keys associated with rights. As such, scenarios in which the application creates many service identities and grants them access rights for several entities or the whole namespace still require a token-issuing intermediary. You can obtain guidance for such an intermediary by [contacting support](https://azure.microsoft.com/support/options/).
+SAS rules aren't meant to be accounts, but are named signing keys associated with rights. As such, scenarios in which the application creates many service identities and grants them access rights for several entities or the whole namespace still require a token-issuing intermediary. You can obtain guidance for such an intermediary by [contacting support](https://azure.microsoft.com/support/options/).
## Next steps
-To learn more about Azure Relay authentication, see the following topics:
+To learn more about Azure Relay authentication, see the following articles:
* [Azure Relay authentication and authorization](relay-authentication-and-authorization.md) * [Service Bus authentication with Shared Access Signatures](../service-bus-messaging/service-bus-sas.md)
azure-relay Relay Port Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-port-settings.md
Title: Azure Relay port settings | Microsoft Docs description: This article includes a table that describes the required configuration for port values for Azure Relay. Previously updated : 06/21/2022 Last updated : 08/10/2023 # Azure Relay port settings
azure-relay Service Bus Dotnet Hybrid App Using Service Bus Relay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/service-bus-dotnet-hybrid-app-using-service-bus-relay.md
Title: Azure Windows Communication Foundation (WCF) Relay hybrid on-premises/clo
description: Learn how to expose an on-premises WCF service to a web application in the cloud by using Azure Relay Previously updated : 06/21/2022 Last updated : 08/10/2023 # Expose an on-premises WCF service to a web application in the cloud by using Azure Relay
Make the following code changes to your solution:
} ```
-1. In **Solution Explorer**, double-click **App.config** to open the file in the Visual Studio editor. At the bottom of the `<system.ServiceModel>` element, but still within `<system.ServiceModel>`, add the following XML code. Be sure to replace `yourServiceNamespace` with the name of your namespace, and `yourKey` with the SAS key you retrieved earlier from the portal:
+1. In **Solution Explorer**, double-click **App.config** to open the file in the Visual Studio editor. At the bottom of the `<system.ServiceModel>` element, but still within `<system.ServiceModel>`, add the following XML code.
+
+ > [!IMPORTANT]
+ > Replace `yourServiceNamespace` with the name of your namespace, and `yourKey` with the SAS key you retrieved earlier from the portal:
```xml
- <system.serviceModel>
- ...
<services> <service name="ProductsServer.ProductsService"> <endpoint address="sb://yourServiceNamespace.servicebus.windows.net/products" binding="netTcpRelayBinding" contract="ProductsServer.IProducts" behaviorConfiguration="products"/>
Make the following code changes to your solution:
</behavior> </endpointBehaviors> </behaviors>
- </system.serviceModel>
``` > [!NOTE]
Make the following code changes to your solution:
1. Still in *App.config*, in the `<appSettings>` element, replace the connection string value with the connection string you previously obtained from the portal.
+
+ ```xml <appSettings> <!-- Service Bus specific app settings for messaging connections -->
The next step is to hook up the on-premises products server with the ASP.NET app
![Add as a link][24]
-1. Now open the *HomeController.cs* file in the Visual Studio editor and replace the namespace definition with the following code. Be sure to replace `yourServiceNamespace` with the name of your service namespace, and `yourKey` with your SAS key. This code lets the client call the on-premises service, returning the result of the call.
+1. Now open the *HomeController.cs* file in the Visual Studio editor and replace the namespace definition with the following code. Be sure to replace `yourServiceNamespace` with the name of your Relay namespace, and `yourKey` with your SAS key. This code lets the client call the on-premises service, returning the result of the call.
```csharp namespace ProductsWeb.Controllers
azure-relay Service Bus Relay Rest Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/service-bus-relay-rest-tutorial.md
Title: 'Tutorial: REST tutorial using Azure Relay'
description: 'Tutorial: Build an Azure Relay host application that exposes a REST-based interface.' Previously updated : 06/21/2022 Last updated : 08/11/2023 # Tutorial: Azure WCF Relay REST tutorial
The primary difference between a WCF contract and a REST-style contract is the a
This step adds references to Service Bus and *System.ServiceModel.dll*. This package automatically adds references to the Service Bus libraries and the WCF `System.ServiceModel`. 1. Explicitly add a reference to `System.ServiceModel.Web.dll` to the project. In **Solution Explorer**, right-click **References** under the project folder, and select **Add Reference**.
-1. In **Add Reference**, select **Framework** and enter *System.ServiceModel.Web* in **Search**. Select the **System.ServiceModel.Web** check box, then click **OK**.
+1. In **Add Reference**, select **Framework** and enter *System.ServiceModel.Web* in **Search**. Select the **System.ServiceModel.Web** check box, then select **OK**.
Next, make the following code changes to the project:
As with the previous steps, there's little difference between implementing a RES
1. In **Solution Explorer**, double-click **App.config** to open the file in the Visual Studio editor.
- The *App.config* file includes the service name, endpoint, and binding. The endpoint is the location Azure Relay exposes for clients and hosts to communicate with each other. The binding is the type of protocol that is used to communicate. The main difference here is that the configured service endpoint refers to a [WebHttpRelayBinding](/dotnet/api/microsoft.servicebus.webhttprelaybinding) binding.
The *App.config* file includes the service name, endpoint, and binding. The endpoint is the location Azure Relay exposes for clients and hosts to communicate with each other. The binding is the type of protocol that is used to communicate. The main difference here is that the configured service endpoint refers to a [WebHttpRelayBinding](/dotnet/api/microsoft.servicebus.webhttprelaybinding) binding.
1. The `<system.serviceModel>` XML element is a WCF element that defines one or more services. Here, it's used to define the service name and endpoint. At the bottom of the `<system.serviceModel>` element, but still within `<system.serviceModel>`, add a `<bindings>` element that has the following content:
azure-relay Service Bus Relay Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/service-bus-relay-tutorial.md
Title: Expose an on-prem WCF REST service to clients using Azure Relay
+ Title: Expose an on-premises WCF REST service to clients using Azure Relay
description: This tutorial describes how to expose an on-premises WCF REST service to an external client by using Azure WCF Relay. Previously updated : 06/21/2022 Last updated : 08/11/2023 # Tutorial: Expose an on-premises WCF REST service to external client by using Azure WCF Relay
-This tutorial describes how to build a WCF Relay client application and service using Azure Relay. For a similar tutorial that uses [Service Bus messaging](../service-bus-messaging/service-bus-messaging-overview.md), see [Get started with Service Bus queues](../service-bus-messaging/service-bus-dotnet-get-started-with-queues.md).
-
-Working through this tutorial gives you an understanding of the steps to create a WCF Relay client and service application. Like their original WCF counterparts, a service is a construct that exposes one or more endpoints. Each endpoint exposes one or more service operations. The endpoint of a service specifies an address where the service can be found, a binding that contains the information that a client must communicate with the service, and a contract that defines the functionality provided by the service to its clients. The main difference between WCF and WCF Relay is that the endpoint is exposed in the cloud instead of locally on your computer.
+This tutorial describes how to build a WCF Relay client application and a service using Azure Relay. Like their original WCF counterparts, a service is a construct that exposes one or more endpoints. Each **endpoint** exposes one or more service operations. The endpoint of a service specifies an **address** where the service can be found, a **binding** that contains the information that a client must communicate with the service, and a **contract** that defines the functionality provided by the service to its clients. The main difference between WCF and WCF Relay is that the endpoint is exposed in the cloud instead of locally on your computer.
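To make the contract concept concrete, here's a minimal, illustrative WCF contract of the kind this tutorial defines; the interface and operation names are placeholders rather than the tutorial's exact code.

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IEchoContract
{
    // A single service operation exposed through the relayed endpoint.
    [OperationContract]
    string Echo(string text);
}
```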
After you work through the sequence of sections in this tutorial, you'll have a running service. You'll also have a client that can invoke the operations of the service.
Creating an Azure relay requires that you first create the contract by using an
The configuration file is similar to a WCF configuration file. It includes the service name, endpoint, and binding. The endpoint is the location Azure Relay exposes for clients and hosts to communicate with each other. The binding is the type of protocol that is used to communicate. The main difference is that this configured service endpoint refers to a [NetTcpRelayBinding](/dotnet/api/microsoft.servicebus.nettcprelaybinding) binding, which isn't part of the .NET Framework. [NetTcpRelayBinding](/dotnet/api/microsoft.servicebus.nettcprelaybinding) is one of the bindings defined by the service. 1. In **Solution Explorer**, double-click **App.config** to open the file in the Visual Studio editor.
-1. In the `<appSettings>` element, replace the placeholders with the name of your service namespace, and the SAS key that you copied in an earlier step.
+1. In the `<appSettings>` element, replace the placeholders with the name of your Azure Relay namespace, and the SAS key that you copied in an earlier step.
1. Within the `<system.serviceModel>` tags, add a `<services>` element. You can define multiple relay applications in a single configuration file. However, this tutorial defines only one. ```xml
namespace Microsoft.ServiceBus.Samples
1. Both console windows open and prompt you for the namespace name. The service must run first, so in the **EchoService** console window, enter the namespace and then select Enter. 1. Next, the console prompts you for your SAS key. Enter the SAS key and select Enter.
- Here is example output from the console window. The values here are just examples.
+ Here's example output from the console window. The values here are just examples.
`Your Service Namespace: myNamespace`
azure-resource-manager Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/private-module-registry.md
A Bicep registry is hosted on [Azure Container Registry (ACR)](../../container-r
> [!IMPORTANT] > The private container registry is only available to users with the required access. However, it's accessed through the public internet. For more security, you can require access through a private endpoint. See [Connect privately to an Azure container registry using Azure Private Link](../../container-registry/container-registry-private-link.md).
+>
+> The private container registry must have the policy `azureADAuthenticationAsArmPolicy` set to `enabled`. If `azureADAuthenticationAsArmPolicy` is set to `disabled`, you'll get a 401 (Unauthorized) error message when publishing modules. See [Azure Container Registry introduces the Conditional Access policy](../../container-registry/container-registry-enable-conditional-access-policy.md).
## Publish files to registry
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The maximum number of private endpoints per Azure SQL Database logical server is
For more information, see [Virtual machine sizes](../../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json). +
+For more information, see [VM Applications](../../virtual-machines/vm-applications.md).
+ #### Disk encryption sets There's a limitation of 1000 disk encryption sets per region, per subscription. For more
azure-video-indexer Video Indexer Embed Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-embed-widgets.md
If you embed Azure AI Video Indexer player, you can choose the size of the playe
For example:
-`<iframe width="640" height="360" src="https://www.videoindexer.ai/embed/player/<accountId>/<videoId>/" frameborder="0" allowfullscreen />`
+`> [!VIDEO https://www.videoindexer.ai/embed/player/<accountId>/<videoId>/]`
By default, Azure AI Video Indexer player has autogenerated closed captions that are based on the transcript of the video. The transcript is extracted from the video with the source language that was selected when the video was uploaded.
azure-web-pubsub Reference Client Sdk Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-client-sdk-javascript.md
As shown in the diagram, your clients establish WebSocket connections with your
## Getting started ### Prerequisites-- [LTS versions of Node.js](https://nodejs.org/about/releases/)
+- [LTS versions of Node.js](https://github.com/nodejs/release#release-schedule)
- An Azure subscription - A Web PubSub resource
export AZURE_LOG_LEVEL=verbose
For more detailed instructions on how to enable logs, you can look at the [@azure/logger package docs](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/core/logger). ### Live Trace
-Use [Live Trace tool](./howto-troubleshoot-resource-logs.md#capture-resource-logs-by-using-the-live-trace-tool) from Azure portal to inspect live message traffic through your Web PubSub resource.
+Use [Live Trace tool](./howto-troubleshoot-resource-logs.md#capture-resource-logs-by-using-the-live-trace-tool) from Azure portal to inspect live message traffic through your Web PubSub resource.
azure-web-pubsub Reference Rest Api Data Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-rest-api-data-plane.md
Below claims are required to be included in the JWT token.
Claim Type | Is Required | Description ||
-`aud` | true | Should be the **SAME** as your HTTP request url, trailing slash and query parameters not included. For example, a broadcast request's audience looks like: `https://example.webpubsub.azure.com/api/hubs/myhub`.
+`aud` | true | Should be the **SAME** as your HTTP request url. For example, a broadcast request's audience looks like: `https://example.webpubsub.azure.com/api/hubs/myhub/:send?api-version=2022-11-01`.
`exp` | true | Epoch time when this token will be expired. A pseudo code in JS:
azure-web-pubsub Socketio Build Realtime Code Streaming App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-build-realtime-code-streaming-app.md
+
+ Title: Build a real-time code streaming app using Socket.IO and host it on Azure
+description: An end-to-end tutorial demonstrating how to build an app that allows coders to share coding activities with their audience in real time using Web PubSub for Socket.IO
++ Last updated : 08/01/2023++++
+# Build a real-time code streaming app using Socket.IO and host it on Azure
+
+Building a real-time experience like the cocreation feature from [Microsoft Word](https://www.microsoft.com/microsoft-365/word) can be challenging.
+
+Through its easy-to-use APIs, [Socket.IO](https://socket.io/) has proven itself as a battle-tested library for real-time communication between clients and server. However, Socket.IO users often report difficulty around scaling Socket.IO's connections. With Web PubSub for Socket.IO, developers no longer need to worry about managing persistent connections.
+
+## Overview
+This tutorial shows how to build an app that allows a coder to stream their coding activities to an audience. We build this application using
+>[!div class="checklist"]
+> * Monaco Editor, the code editor that powers VS Code
+> * [Express](https://expressjs.com/), a Node.js web framework
+> * APIs provided by Socket.IO library for real-time communication
+> * Host Socket.IO connections using Web PubSub for Socket.IO
+
+### The finished app
+The finished app allows a code editor user to share a web link through which people can watch them typing.
++
+To keep this tutorial focused and digestible in around 15 minutes, we define two user roles and what they can do in the editor:
+- a writer, who can type in the online editor and whose content is streamed
+- viewers, who receive real-time content typed by the writer and can't edit the content
+
+### Architecture
+| Component | Purpose | Benefits |
+|-|-|-|
+|[Socket.IO library](https://socket.io/) | Provides a low-latency, bi-directional data exchange mechanism between the backend application and clients | Easy-to-use APIs that cover most real-time communication scenarios |
+|Web PubSub for Socket.IO | Hosts WebSocket or poll-based persistent connections with Socket.IO clients | 100 K concurrent connections built in; simplified application architecture |
++
+## Prerequisites
+In order to follow the step-by-step guide, you need
+> [!div class="checklist"]
+> * An [Azure](https://portal.azure.com/) account. If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+> * [Azure CLI](/cli/azure/install-azure-cli) (version 2.29.0 or higher) or [Azure Cloud Shell](../cloud-shell/quickstart.md) to manage Azure resources.
+> * Basic familiarity of [Socket.IO's APIs](https://socket.io/docs/v4/)
+
+## Create a Web PubSub for Socket.IO resource
+We use the Azure CLI to create the resource.
+```bash
+az webpubsub create -n <resource-name> \
+ -l <resource-location> \
+ -g <resource-group> \
+ --kind SocketIO \
+ --sku Free_F1
+```
+## Get connection string
+A connection string allows you to connect with Web PubSub for Socket.IO. Keep the returned connection string handy; you need it when you run the application at the end of the tutorial.
+```bash
+az webpubsub key show -n <resource-name> \
+ -g <resource-group> \
+ --query primaryKey \
+ -o tsv
+```
+
+## Write the application
+>[!NOTE]
+> This tutorial focuses on explaining the core code for implementing real-time communication. Complete code can be found in the [samples repository](https://github.com/Azure/azure-webpubsub/tree/main/experimental/sdk/webpubsub-socketio-extension/examples/codestream).
+
+### Server-side code
+#### Build an HTTP server
+1. Create a Node.js project
+ ```bash
+ mkdir codestream
+ cd codestream
+ npm init
+ ```
+
+2. Install server SDK and Express
+ ```bash
+ npm install @azure/web-pubsub-socket.io
+ npm install express
+ ```
+
+3. Import required packages and create an HTTP server to serve static files
+ ```javascript
+ /* server.js*/
+
+ // Import required packages
+ const express = require('express');
+ const path = require('path');
+
+ // Create an HTTP server based on Express
+ const app = express();
+ const server = require('http').createServer(app);
+
+ app.use(express.static(path.join(__dirname, 'public')));
+ ```
+
+4. Define an endpoint called `/negotiate`. A **writer** client hits this endpoint first. This endpoint returns an HTTP response, which contains
+- an endpoint the client should establish a persistent connection with,
+- the `room_id` of the room the client is assigned to
+
+ ```javascript
+ /* server.js*/
+ app.get('/negotiate', async (req, res) => {
+ res.json({
+ url: endpoint,
+ room_id: Math.random().toString(36).slice(2, 7),
+ });
+ });
+
+ // Make the Socket.IO server listen on port 3000
+ io.httpServer.listen(3000, () => {
+ console.log('Visit http://localhost:%d', 3000);
+ });
+ ```
+
+#### Create Web PubSub for Socket.IO server
+1. Import Web PubSub for Socket.IO SDK and define options
+ ```javascript
+ /* server.js*/
+ const { useAzureSocketIO } = require("@azure/web-pubsub-socket.io");
+
+ const wpsOptions = {
+ hub: "codestream",
+ connectionString: process.argv[2]
+ };
+ ```
+
+2. Create a Web PubSub for Socket.IO server
+ ```javascript
+ /* server.js*/
+
+ const io = require("socket.io")();
+ useAzureSocketIO(io, wpsOptions);
+ ```
+
+These two steps differ slightly from how you'd normally create a Socket.IO server as [described here](https://socket.io/docs/v4/server-installation/). With these two steps, your server-side code can offload managing persistent connections to an Azure service. With the help of an Azure service, your application server acts **only** as a lightweight HTTP server.
+
+Now that we've created a Socket.IO server hosted by Web PubSub, we can define how the clients and server communicate using Socket.IO's APIs. This process is referred to as implementing business logic.
+
+#### Implement business logic
+1. After a client is connected, the application server tells the client that "you are logged in" by sending a custom event named `login`.
+
+ ```javascript
+ /* server.js*/
+ io.on('connection', socket => {
+ socket.emit("login");
+ });
+ ```
+
+2. Each client emits two events, `joinRoom` and `sendToRoom`, that the server can respond to. After the server gets the `room_id` a client wishes to join, it uses Socket.IO's `socket.join` API to join the target client to the specified room.
+
+ ```javascript
+ /* server.js*/
+ socket.on('joinRoom', async (message) => {
+ const room_id = message["room_id"];
+ await socket.join(room_id);
+ });
+ ```
+
+3. After a client has successfully joined, the server informs the client of the result with the `message` event. Upon receiving a `message` event with a type of `ackJoinRoom`, the client can ask the server to send the latest editor state.
+
+ ```javascript
+ /* server.js*/
+ socket.on('joinRoom', async (message) => {
+ // ...
+ socket.emit("message", {
+ type: "ackJoinRoom",
+ success: true
+ })
+ });
+ ```
+
+ ```javascript
+ /* client.js*/
+ socket.on("message", (message) => {
+ let data = message;
+ if (data.type === 'ackJoinRoom' && data.success) {
+ sendToRoom(socket, `${room_id}-control`, { data: 'sync'});
+ }
+ // ...
+ })
+ ```
+
+4. When a client sends a `sendToRoom` event to the server, the server broadcasts the **changes to the code editor state** to the specified room. All clients in the room can now receive the latest update.
+
+ ```javascript
+ socket.on('sendToRoom', (message) => {
+ const room_id = message["room_id"]
+ const data = message["data"]
+
+ socket.broadcast.to(room_id).emit("message", {
+ type: "editorMessage",
+ data: data
+ });
+ });
+ ```
+
+The server side is now finished. Next, we work on the client side.
+
+### Client-side code
+#### Initial setup
+1. On the client side, we need to create a Socket.IO client to communicate with the server. The question is which server the client should establish a persistent connection with. Since we use Web PubSub for Socket.IO, the server is an Azure service. Recall that we defined the [`/negotiate`](#build-an-http-server) route to serve clients an endpoint to Web PubSub for Socket.IO.
+
+ ```javascript
+ /*client.js*/
+
+ async function initialize(url) {
+ let data = await (await fetch(url)).json();
+
+ updateStreamId(data.room_id);
+
+ let editor = createEditor(...); // Create an editor component
+
+ var socket = io(data.url, {
+ path: "/clients/socketio/hubs/codestream",
+ });
+
+ return [socket, editor, data.room_id];
+ }
+ ```
+The `initialize(url)` function organizes a few setup operations:
+- fetches the endpoint to an Azure service from your HTTP server,
+- creates a Monaco editor instance,
+- establishes a persistent connection with Web PubSub for Socket.IO
+
+#### Writer client
+[As mentioned earlier](#the-finished-app), we have two user roles on the client side. The first is the writer and the other is the viewer. Anything written by the writer is streamed to the viewers' screens.
+
+##### Writer client
+1. Get the endpoint to Web PubSub for Socket.IO and the `room_id`.
+ ```javascript
+ /*client.js*/
+
+ let [socket, editor, room_id] = await initialize('/negotiate');
+ ```
+
+2. When the writer client is connected with the server, the server sends it a `login` event. The writer can respond by asking the server to join it to a specified room. Importantly, every 200 ms the writer sends its latest editor state to the room. A function aptly named `flush` organizes the sending logic.
+
+ ```javascript
+ /*client.js*/
+
+ socket.on("login", () => {
+ updateStatus('Connected');
+ joinRoom(socket, `${room_id}`);
+ setInterval(() => flush(), 200);
+ // Update editor content
+ // ...
+ });
+ ```
+
+3. If a writer doesn't make any edits, `flush()` does nothing and simply returns. Otherwise, the **changes to the editor state** are sent to the room.
+ ```javascript
+ /*client.js*/
+
+ function flush() {
+ // No change from editor need to be flushed
+ if (changes.length === 0) return;
+
+ // Broadcast the changes made to editor content
+ sendToRoom(socket, room_id, {
+ type: 'delta',
+ changes: changes,
+ version: version++,
+ });
+
+ changes = [];
+ content = editor.getValue();
+ }
+ ```
+
+4. When a new viewer client is connected, the viewer needs to get the latest **complete state** of the editor. To achieve this, a message containing `sync` data will be sent to the writer client, asking the writer client to send the complete editor state.
+ ```javascript
+ /*client.js*/
+
+ socket.on("message", (message) => {
+ let data = message.data;
+ if (data.data === 'sync') {
+ // Broadcast the full content of the editor to the room
+ sendToRoom(socket, room_id, {
+ type: 'full',
+ content: content
+ version: version,
+ });
+ }
+ });
+ ```
+
+##### Viewer client
+1. As with the writer client, the viewer client creates its Socket.IO client through `initialize()`. When the viewer client is connected and receives a `login` event from the server, it asks the server to join it to the specified room. The query parameter `room_id` specifies the room.
+
+ ```javascript
+ /*client.js*/
+
+ let [socket, editor] = await initialize(`/register?room_id=${room_id}`)
+ socket.on("login", () => {
+ updateStatus('Connected');
+ joinRoom(socket, `${room_id}`);
+ });
+ ```
+
+2. When a viewer client receives a `message` event from server and the data type is `ackJoinRoom`, the viewer client asks the writer client in the room to send over the complete editor state.
+
+ ```javascript
+ /*client.js*/
+
+ socket.on("message", (message) => {
+ let data = message;
+ // Ensures the viewer client is connected
+ if (data.type === 'ackJoinRoom' && data.success) {
+ sendToRoom(socket, `${room_id}`, { data: 'sync'});
+ }
+ else //...
+ });
+ ```
+
+3. If the data type is `editorMessage`, the viewer client **updates the editor** with the received content.
+
+ ```javascript
+ /*client.js*/
+
+ socket.on("message", (message) => {
+ ...
+ else
+ if (data.type === 'editorMessage') {
+ switch (data.data.type) {
+ case 'delta':
+ // ... Let editor component update its status
+ break;
+ case 'full':
+ // ... Let editor component update its status
+ break;
+ }
+ }
+ });
+ ```
+
+4. Implement `joinRoom()` and `sendToRoom()` using Socket.IO's APIs.
+ ```javascript
+ /*client.js*/
+
+ function joinRoom(socket, room_id) {
+ socket.emit("joinRoom", {
+ room_id: room_id,
+ });
+ }
+
+ function sendToRoom(socket, room_id, data) {
+ socket.emit("sendToRoom", {
+ room_id: room_id,
+ data: data
+ });
+ }
+ ```
+
+## Run the application
+### Locate the repo
+We've dived deep into the core logic for synchronizing editor state between the writer and viewers. The complete code can be found in the [examples repository](https://github.com/Azure/azure-webpubsub/tree/main/experimental/sdk/webpubsub-socketio-extension/examples/codestream).
+
+### Clone the repo
+You can clone the repo and run `npm install` to install project dependencies.
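+
+For example, assuming the repository layout linked above, the clone-and-install steps look something like this:
+
+```bash
+# Clone the samples repository and install the codestream sample's dependencies
+git clone https://github.com/Azure/azure-webpubsub.git
+cd azure-webpubsub/experimental/sdk/webpubsub-socketio-extension/examples/codestream
+npm install
+```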
+
+### Start the server
+```bash
+node index.js <web-pubsub-connection-string>
+```
+> [!NOTE]
+> This is the connection string you received from [a previous step](#get-connection-string).
+
+### Play with the real-time code editor
+Open `http://localhost:3000` in a browser tab, and open another tab with the URL displayed on the first web page.
+
+If you write code in the first tab, you should see your typing reflected in real time in the other tab. Web PubSub for Socket.IO facilitates message passing in the cloud. Your `express` server only serves the static `index.html` and the `/negotiate` endpoint.
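+
+For reference, here's a minimal sketch of what such an `express` server could look like. The route name and the response shape (`url`, `room_id`) are taken from the client code above; how the endpoint to Web PubSub for Socket.IO is actually generated is covered in the server-side section and the full sample.
+
+```javascript
+/* index.js - a minimal sketch only; see the sample repository for the full server */
+const express = require("express");
+const app = express();
+
+// Serve the static client page (index.html)
+app.use(express.static("public"));
+
+// Hand clients an endpoint to Web PubSub for Socket.IO and a room id
+app.get("/negotiate", (req, res) => {
+    // In the real sample, this URL is generated for your Web PubSub for Socket.IO resource.
+    res.json({ url: "<web-pubsub-for-socketio-endpoint>", room_id: "<room-id>" });
+});
+
+app.listen(3000);
+```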
azure-web-pubsub Socketio Migrate From Self Hosted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-migrate-from-self-hosted.md
+
+ Title: How to migrate a self-hosted Socket.IO to be fully managed on Azure
+description: A tutorial showing how to migrate a Socket.IO chat app to Azure
++ Last updated : 07/21/2023++++
+# How to migrate a self-hosted Socket.IO app to be fully managed on Azure
+>[!NOTE]
+> Web PubSub for Socket.IO is in "Private Preview" and is available to selected customers only. To register your interest, please write to us at awps@microsoft.com.
+
+## Prerequisites
+> [!div class="checklist"]
+> * An Azure account with an active subscription. If you don't have one, you can [create a free account](https://azure.microsoft.com/free/).
+> * Some familiarity with Socket.IO library.
+
+## Create a Web PubSub for Socket.IO resource
+Head over to the Azure portal and search for `socket.io`.
+
+## Migrate an official Socket.IO sample app
+To focus this guide on the migration process, we're going to use a sample chat app provided on [Socket.IO's website](https://github.com/socketio/socket.io/tree/4.6.2/examples/chat). We need to make some minor changes to both the **server-side** and **client-side** code to complete the migration.
+
+### Server side
+Locate `index.js` in the server-side code.
+
+1. Add package `@azure/web-pubsub-socket.io`
+ ```bash
+ npm install @azure/web-pubsub-socket.io
+ ```
+
+2. Import package in server code `index.js`
+ ```javascript
+ const { useAzureSocketIO } = require("@azure/web-pubsub-socket.io");
+ ```
+
+3. Add configuration so that the server can connect with your Web PubSub for Socket.IO resource.
+ ```javascript
+ const wpsOptions = {
+ hub: "eio_hub", // The hub name can be any valid string.
+ connectionString: process.argv[2]
+ };
+ ```
+
+4. Locate where the Socket.IO server is created in your server-side code and wire it up with `useAzureSocketIO`:
+ ```javascript
+ const io = require("socket.io")();
+ useAzureSocketIO(io, wpsOptions);
+ ```
+>[!IMPORTANT]
+> `useAzureSocketIO` is an asynchronous method, and you need to `await` it. Wrap it and the related code in an asynchronous function.
+
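+For illustration, here's a minimal sketch of that wrapping (the `main` function name is just an example):
+
+```javascript
+const { useAzureSocketIO } = require("@azure/web-pubsub-socket.io");
+
+const wpsOptions = {
+    hub: "eio_hub", // The hub name can be any valid string.
+    connectionString: process.argv[2]
+};
+
+async function main() {
+    const io = require("socket.io")();
+    // Wait for the Azure setup to complete before the server starts handling clients.
+    await useAzureSocketIO(io, wpsOptions);
+    // ... existing io.on("connection", ...) handlers stay unchanged
+}
+
+main();
+```
+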
+5. If you use the following server APIs, add `await` before calling them, as they're asynchronous with Web PubSub for Socket.IO, and mark the enclosing function `async`.
+- [server.socketsJoin](https://socket.io/docs/v4/server-api/#serversocketsjoinrooms)
+- [server.socketsLeave](https://socket.io/docs/v4/server-api/#serversocketsleaverooms)
+- [socket.join](https://socket.io/docs/v4/server-api/#socketjoinroom)
+- [socket.leave](https://socket.io/docs/v4/server-api/#socketleaveroom)
+
+ For example, if there's code like:
+ ```javascript
+ io.on("connection", (socket) => { socket.join("room abc"); });
+ ```
+ you should update it to:
+ ```javascript
+ io.on("connection", async (socket) => { await socket.join("room abc"); });
+ ```
+
+ In this chat example, none of these APIs are used, so no changes are needed.
+
+### Client side
+In the client-side code found in `./public/main.js`, find where the Socket.IO client is created. Replace its endpoint with your Web PubSub for Socket.IO endpoint and add a `path` option. You can find the endpoint to your resource in the Azure portal.
+```javascript
+const socket = io("<web-pubsub-for-socketio-endpoint>", {
+ path: "/clients/socketio/hubs/eio_hub",
+});
+```
+
azure-web-pubsub Socketio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-overview.md
+
+ Title: Overview of Web PubSub for Socket.IO
+description: An overview of Web PubSub's support for the open-source Socket.IO library
++ Last updated : 07/27/2023++++
+# Overview of Web PubSub for Socket.IO
+Web PubSub for Socket.IO is a fully managed cloud service for [Socket.IO](https://socket.io/), which is a widely popular open-source library for real-time messaging between clients and server.
+
+Managing stateful and persistent connections between clients and server is often a source of frustration for Socket.IO users. The problem is more acute when there are multiple Socket.IO instances spread across servers.
+
+Web PubSub for Socket.IO removes the burden of deploying, hosting, and coordinating Socket.IO instances for developers, allowing development teams to focus on building real-time experiences using the familiar APIs provided by the Socket.IO library.
++
+## Benefits over hosting Socket.IO app yourself
+>[!NOTE]
+> - **Socket.IO** refers to the open-source library.
+> - **Web PubSub for Socket.IO** refers to a fully managed Azure service.
+
+| / | Hosting Socket.IO app yourself | Using Web PubSub for Socket.IO|
+|---|---|---|
+| Deployment | Customer managed | Azure managed |
+| Hosting | Customer needs to provision enough server resources to serve and maintain persistent connections | Azure managed |
+| Scaling connections | Customer managed by using a server-side component called ["adapter"](https://socket.io/docs/v4/adapter/) | Azure managed with **100k+** client connections out-of-the-box |
+| Uptime guarantee | Customer managed | Azure managed with **99.9%+** uptime |
+| Enterprise-grade security | Customer managed | Azure managed |
+| Ticket support system | N/A | Azure managed |
+
+When you host a Socket.IO app yourself, clients establish WebSocket or long-polling connections directly with your server. Maintaining such **stateful** connections places a heavy burden on your Socket.IO server, which limits the number of concurrent connections and increases messaging latency.
+
+A common approach to meeting the concurrency and latency challenge is to [scale out to multiple Socket.IO servers](https://socket.io/docs/v4/adapter/). Scaling out requires a server-side component called an "adapter", like the Redis adapter provided by the Socket.IO library. However, such an adapter introduces an extra component you need to deploy and manage, on top of writing extra code logic to get things to work properly.
++
+With Web PubSub for Socket.IO, you're freed from handling scaling issues and implementing code logic related to using an adapter.
+
+## Same programming model
+To migrate a self-hosted Socket.IO app to Azure, you only need to add a few lines of code with **no need** to change the rest of the application code. In other words, the programming model remains the same and the complexity of managing a real-time app is reduced.
+
+> [!div class="nextstepaction"]
+> [Quickstart for Socket.IO users](./socketio-quickstart.md)
+>
+> [Quickstart: Migrate a self-hosted Socket.IO app to Azure](./socketio-migrate-from-self-hosted.md)
azure-web-pubsub Socketio Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-quickstart.md
+
+ Title: Quickstart of Web PubSub for Socket.IO
+description: A quickstart demonstrating how to use Web PubSub for Socket.IO
++ Last updated : 08/01/2023+++
+# Quickstart for Socket.IO users
+
+This quickstart is aimed at existing Socket.IO users. It demonstrates how quickly Socket.IO users can incorporate Web PubSub for Socket.IO in their app to simplify development, speed up deployment, and achieve scalability without complexity.
+
+## Prerequisites
+> [!div class="checklist"]
+> * An Azure account with an active subscription. If you don't have one, you can [create a free account](https://azure.microsoft.com/free/).
+> * Some familiarity with Socket.IO library.
+
+## Create a Web PubSub for Socket.IO resource
+Head over to the Azure portal and search for `socket.io`.
+
+## Initialize a Node project and install required packages
+```bash
+mkdir quickstart
+cd quickstart
+npm init
+npm install @azure/web-pubsub-socket.io socket.io-client
+```
+
+## Write server code
+1. Import required packages and create a configuration for Web PubSub
+ ```javascript
+ /* server.js */
+ const { Server } = require("socket.io");
+ const { useAzureSocketIO } = require("@azure/web-pubsub-socket.io");
+
+ // Add a Web PubSub Option
+ const wpsOptions = {
+ hub: "eio_hub", // The hub name can be any valid string.
+ connectionString: process.argv[2] || process.env.WebPubSubConnectionString
+ }
+ ```
+
+2. Create a Socket.IO server supported by Web PubSub for Socket.IO
+ ```javascript
+ /* server.js */
+ let io = new Server(3000);
+ useAzureSocketIO(io, wpsOptions);
+ ```
+
+3. Write server logic
+ ```javascript
+ /* server.js */
+ io.on("connection", (socket) => {
+ // send a message to the client
+ socket.emit("hello", "world");
+
+ // receive a message from the client
+ socket.on("howdy", (arg) => {
+ console.log(arg); // prints "stranger"
+ })
+ });
+ ```
+
+## Write client code
+1. Create a Socket.IO client
+ ```javascript
+ /* client.js */
+ const io = require("socket.io-client");
+
+ const webPubSubEndpoint = process.argv[2] || "<web-pubsub-socketio-endpoint>";
+ const socket = io(webPubSubEndpoint, {
+ path: "/clients/socketio/hubs/eio_hub",
+ });
+ ```
+
+2. Define the client behavior
+ ```javascript
+ /* client.js */
+
+ // Receives a message from the server
+ socket.on("hello", (arg) => {
+ console.log(arg);
+ });
+
+ // Sends a message to the server
+ socket.emit("howdy", "stranger")
+ ```
+
+## Run the app
+1. Run the server app
+ ```bash
+ node server.js "<web-pubsub-connection-string>"
+ ```
+
+2. Run the client app in another terminal
+ ```bash
+ node client.js "<web-pubsub-endpoint>"
+ ```
+
+Note: Code shown in this quickstart is in CommonJS. If you'd like to use ES Modules, refer to [quickstart-esm](https://github.com/Azure/azure-webpubsub/tree/main/experimental/sdk/webpubsub-socketio-extension/examples/chat).
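+
+For reference, the ES Module equivalents of the CommonJS imports used above look like this (a sketch):
+
+```javascript
+// ES Module syntax for the same packages
+import { Server } from "socket.io";
+import { useAzureSocketIO } from "@azure/web-pubsub-socket.io";
+import { io } from "socket.io-client";
+```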
azure-web-pubsub Socketio Service Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-service-internal.md
+
+ Title: Service internal - how does Web PubSub support Socket.IO library
+description: An article explaining how Web PubSub supports Socket.IO library
++ Last updated : 08/1/2023++++
+# Service internal - how does Web PubSub support Socket.IO library
+
+> [!NOTE]
+> This article peels back the curtain, from an engineering perspective, on how self-hosted Socket.IO apps can migrate to Azure with minimal code change to simplify app architecture and deployment, while achieving 100K+ concurrent connections out-of-the-box. It's not necessary to understand everything in this article to use Web PubSub for Socket.IO effectively.
+
+## A typical architecture of a self-hosted Socket.IO app
+
+The diagram shows a typical architecture of a self-hosted Socket.IO app. To ensure that an app is scalable and reliable, Socket.IO users often have an architecture involving multiple Socket.IO servers. Client connections are distributed among Socket.IO servers to balance load on the system. A setup with multiple Socket.IO servers introduces a challenge when developers need to send the same message to clients connected to different servers. This use case is often referred to by developers as "broadcasting messages".
+
+The official recommendation from the Socket.IO library is to introduce a server-side component called an ["adapter"](https://socket.io/docs/v4/using-multiple-nodes/) to coordinate Socket.IO servers. What an adapter does is figure out which servers clients are connected to and instruct those servers to send messages.
+
+Adding an adapter component introduces complexity to both development and deployment. For example, if the [Redis adapter](https://socket.io/docs/v4/redis-adapter/) is used, it means developers need to (as sketched after this list):
+- implement sticky sessions
+- deploy and maintain Redis instance(s)
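+
+A minimal sketch of that extra wiring, based on the `@socket.io/redis-adapter` package (exact usage varies by version):
+
+```javascript
+const { Server } = require("socket.io");
+const { createClient } = require("redis");
+const { createAdapter } = require("@socket.io/redis-adapter");
+
+async function main() {
+    // Every Socket.IO server instance must point at the same Redis deployment.
+    const pubClient = createClient({ url: "redis://localhost:6379" });
+    const subClient = pubClient.duplicate();
+    await Promise.all([pubClient.connect(), subClient.connect()]);
+
+    const io = new Server(3000);
+    io.adapter(createAdapter(pubClient, subClient));
+}
+
+main();
+```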
+
+The engineering effort and time spent getting a real-time communication channel in place distract developers from working on features that make an app or system unique and valuable to end users.
+
+## What Web PubSub for Socket.IO aims to solve for developers
+Although developers often report that setting up a reliable and scalable app built with the Socket.IO library is challenging, they **enjoy** the intuitive APIs it offers and the wide range of clients it supports. Web PubSub for Socket.IO builds on the value the library brings, while relieving developers of the complexity of managing persistent connections reliably and at scale.
+
+In practice, developers can continue using the APIs offered by the Socket.IO library, but don't need to provision server resources to maintain WebSocket or long-polling based connections, which can be resource intensive. Also, developers don't need to deploy and manage an "adapter" component. The app server only needs to send a **single** operation, and Web PubSub for Socket.IO broadcasts the messages to the relevant clients.
+
+## How does it work under the hood?
+Web PubSub for Socket.IO builds upon the Socket.IO protocols by implementing the Adapter and Engine.IO. The diagram describes the typical architecture when you use Web PubSub for Socket.IO with your Socket.IO server.
++
+As with a self-hosted Socket.IO app, you still need to host your Socket.IO application logic on your own server. However, with Web PubSub for Socket.IO **(the service)**, your server no longer manages client connections directly.
+- **Your clients** establish persistent connections with the service, which we call "client connections".
+- **Your servers** also establish persistent connections with the service, which we call "server connections".
+
+When your server logic uses operations such as `send to client`, `broadcast`, and `add client to rooms`, these operations are sent to the service through the established server connection. Messages from your server are translated to Socket.IO operations that Socket.IO clients can understand. As a result, any existing Socket.IO implementation can work without modification. The only modification needed is to change the endpoint your clients connect to. Refer to this article on [how to migrate a self-hosted Socket.IO app to Azure](./socketio-migrate-from-self-hosted.md).
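+
+For instance, the familiar Socket.IO server calls below correspond to those operations; with Web PubSub for Socket.IO they're carried over the server connection instead of being executed against locally held client connections (a sketch using standard Socket.IO server APIs):
+
+```javascript
+const { Server } = require("socket.io");
+const io = new Server(3000);
+
+io.on("connection", async (socket) => {
+    // "add client to rooms" (awaited, because it's asynchronous with the service)
+    await socket.join("room1");
+
+    // "send to client"
+    socket.emit("hello", "only you");
+
+    // "broadcast" to everyone in a room
+    io.to("room1").emit("announcement", "a message for room1");
+});
+```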
+
+When a client connects to the service, the service
+- forwards Engine.IO connection `connect` to the server
+- handles transport upgrade of client connections
+- forwards all Socket.IO messages to server
+
azure-web-pubsub Socketio Supported Server Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-supported-server-apis.md
+
+ Title: Supported server APIs of Socket.IO
+description: An article listing out Socket.IO server APIs that are partially supported or unsupported by Web PubSub for Socket.IO
++ Last updated : 07/27/2023+++
+# Server APIs supported by Web PubSub for Socket.IO
+
+The Socket.IO library provides a set of [server APIs](https://socket.io/docs/v4/server-api/).
+The following server APIs are partially supported or unsupported by Web PubSub for Socket.IO.
+
+| Server API | Support |
+|--|-|
+| [fetchSockets](https://socket.io/docs/v4/server-api/#serverfetchsockets) | Local only |
+| [serverSideEmit](https://socket.io/docs/v4/server-api/#serverserversideemiteventname-args) | Unsupported |
+| [serverSideEmitWithAck](https://socket.io/docs/v4/server-api/#serverserversideemitwithackeventname-args) | Unsupported |
+
+Apart from the mentioned server APIs, all other server APIs from Socket.IO are fully supported.
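+
+For example, `fetchSockets` remains callable, but with Web PubSub for Socket.IO it resolves only the sockets connected through the local server instance ("Local only" in the table above). A sketch:
+
+```javascript
+const { Server } = require("socket.io");
+const io = new Server(3000);
+
+io.on("connection", async () => {
+    // Returns only the sockets connected to this server instance.
+    const sockets = await io.fetchSockets();
+    console.log(`sockets visible to this server: ${sockets.length}`);
+});
+```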
azure-web-pubsub Socketio Troubleshoot Common Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-troubleshoot-common-issues.md
+
+ Title: How to troubleshoot Socket.IO common issues
+description: Learn how to troubleshoot Socket.IO common issues
++ Last updated : 08/01/2023+++
+# Troubleshooting for common issues
+
+Web PubSub for Socket.IO builds on the Socket.IO library. When you use this Azure service, issues may lie with the Socket.IO library itself or with the service.
+
+## Issues with Socket.IO library
+
+To determine whether the issues are with the Socket.IO library, you can isolate them by temporarily removing Web PubSub for Socket.IO from your application. If the application works as expected after the removal, the root cause is probably with the Azure service.
+
+If you suspect the issues are with Socket.IO library, refer to [Socket.IO library's documentation](https://socket.io/docs/v4/troubleshooting-connection-issues/) for common connection issues.
+
+## Issues with Web PubSub for Socket.IO
+If you suspect that the issues are with the Azure service after investigation, take a look at the list of common issues.
+
+If none of the listed issues helps, you can also [enable logging on the server side](./socketio-troubleshoot-logging.md#server-side) to examine the behavior of your Socket.IO app closely.
+
+### Server side
+
+#### `useAzureSocketIO is not a function`
+##### Possible error
+- `TypeError: (intermediate value).useAzureSocketIO is not a function`
+
+##### Root cause
+If you use TypeScript in your project, you may observe this error. It's due to an improper package import.
+
+```typescript
+// Bad example
+import * as wpsExt from "@azure/web-pubsub-socket.io"
+```
+If a package isn't used or referenced after importing, the default behavior of the TypeScript compiler is not to emit the import in the compiled `.js` file.
+
+##### Solution
+Use `import "@azure/web-pubsub-socket.io"` instead. This import statement forces the TypeScript compiler to include the package in the compiled `.js` file even if the package isn't referenced anywhere in the source code. [Read more](https://github.com/Microsoft/TypeScript/wiki/FAQ#why-are-imports-being-elided-in-my-emit) about this frequently asked question from the TypeScript community.
+```typescript
+// Good example.
+// It forces TypeScript to include the package in compiled `.js` file.
+import "@azure/web-pubsub-socket.io"
+```
+
+### Client side
+
+#### `404 Not Found in client side with AWPS endpoint`
+##### Possible Error
+ `GET <web-pubsub-endpoint>/socket.io/?EIO=4&transport=polling&t=OcmE4Ni` 404 Not Found
+
+##### Root cause
+Socket.IO client is created without a correct `path` option.
+```javascript
+// Bad example
+const socket = io(endpoint)
+```
+
+##### Solution
+Add the correct `path` option with the value `/clients/socketio/hubs/eio_hub`.
+```javascript
+// Good example
+const socket = io(endpoint, {
+ path: "/clients/socketio/hubs/eio_hub",
+});
+```
+
+#### `404 Not Found in client side with non-AWPS endpoint`
+
+##### Possible Error
+ `GET <non-web-pubsub-endpoint>/socket.io/?EIO=4&transport=polling&t=OcmE4Ni` 404 Not Found
+
+##### Root cause
+The Socket.IO client is created without the correct Web PubSub for Socket.IO endpoint. For example:
+
+```javascript
+// Bad example.
+// This example uses the original Socket.IO server endpoint.
+const endpoint = "socketio-server.com";
+const socket = io(endpoint, {
+ path: "/clients/socketio/hubs/<Your hub name>",
+});
+```
+
+When you use Web PubSub for Socket.IO, your clients establish connections with an Azure service. When creating a Socket.IO client, you need to use the endpoint of your Web PubSub for Socket.IO resource.
+
+##### Solution
+Let the Socket.IO client use the endpoint of your Web PubSub for Socket.IO resource.
+
+```javascript
+// Good example.
+const webPubSubEndpoint = "<web-pubsub-endpoint>";
+const socket = io(webPubSubEndpoint, {
+ path: "/clients/socketio/hubs/<Your hub name>",
+});
+```
azure-web-pubsub Socketio Troubleshoot Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-troubleshoot-logging.md
+
+ Title: How to collect logs in Azure Socket.IO
+description: This article explains how to collect logs when using Web PubSub for Socket.IO
++ Last updated : 08/01/2023+++
+# How to collect logs using Web PubSub for Socket.IO
+
+As when you self-host the Socket.IO library, you can collect logs on both the server and client sides when you use Web PubSub for Socket.IO.
+
+## Server-side
+On the server side, two utilities are included that provide debugging capabilities.
+- [DEBUG](https://github.com/debug-js/debug), which is used by the Socket.IO library and the extension library provided by Web PubSub for certain logging.
+- [@azure/logger](https://www.npmjs.com/package/@azure/logger), which provides more low-level network-related logging. Conveniently, it also allows you to set a log level.
+
+### `DEBUG` JavaScript utility
+
+#### Logs all debug information
+```bash
+DEBUG=* node yourfile.js
+```
+
+#### Logs debug information of specific packages.
+```bash
+# Logs debug information of "socket.io" package
+DEBUG=socket.io:* node yourfile.js
+
+# Logs debug information of "engine.io" package
+DEBUG=engine:* node yourfile.js
+
+# Logs debug information of extension library "wps-sio-ext" provided by Web PubSub
+DEBUG=wps-sio-ext:* node yourfile.js
+
+# Logs debug information of multiple packages
+DEBUG=engine:*,socket.io:*,wps-sio-ext:* node yourfile.js
+```
+
+### `@azure/logger` utility
+You can enable logging from this utility to get more low-level network-related information by setting the environment variable `AZURE_LOG_LEVEL`.
+
+```bash
+AZURE_LOG_LEVEL=verbose node yourfile.js
+```
+
+`AZURE_LOG_LEVEL` has four levels: `verbose`, `info`, `warning`, and `error`.
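+
+If you prefer to set the level in code rather than through the environment variable, the `@azure/logger` package also exposes a programmatic switch; a minimal sketch:
+
+```javascript
+const { setLogLevel } = require("@azure/logger");
+
+// Equivalent to running with AZURE_LOG_LEVEL=verbose for this process
+setLogLevel("verbose");
+```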
+
+## Client side
+Using Web PubSub for Socket.IO doesn't change how you debug the Socket.IO library. [Refer to the documentation](https://socket.io/docs/v4/logging-and-debugging/) from the Socket.IO library.
+
+### Debug Socket.IO client in Node
+```bash
+# Logs all debug information
+DEBUG=* node yourfile.js
+
+# Logs debug information from "socket.io-client" package
+DEBUG=socket.io-client:* node yourfile.js
+
+# Logs debug information from "engine.io-client" package
+DEBUG=engine.io-client:* node yourfile.js
+
+# Logs debug information from multiple packages
+DEBUG=socket.io-client:*,engine.io-client:* node yourfile.js
+```
+
+### Debug Socket.IO client in browser
+In the browser, use `localStorage.debug = '<scope>'`.
+
+```bash
+# Logs all debug information
+localStorage.debug = '*';
+
+# Logs debug information from "socket.io-client" package
+localStorage.debug = 'socket.io-client';
+```
backup Active Directory Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/active-directory-backup-restore.md
Backing up Active Directory, and ensuring successful restores in cases of corruption, compromise or disaster is a critical part of Active Directory maintenance.
-This article outlines the proper procedures for backing up and restoring Active Directory domain controllers with Azure Backup, whether they're Azure virtual machines or on-premises servers. It discusses a scenario where you need to restore an entire domain controller to its state at the time of backup. To see which restore scenario is appropriate for you, see [this article](/windows-server/identity/ad-ds/manage/ad-forest-recovery-determine-how-to-recover).
+This article outlines the proper procedures for backing up and restoring Active Directory domain controllers with Azure Backup, whether they're Azure virtual machines or on-premises servers. It discusses a scenario where you need to restore an entire domain controller to its state at the time of backup. To see which restore scenario is appropriate for you, see [this article](/windows-server/identity/ad-ds/manage/forest-recovery-guide/ad-forest-recovery-guide).
>[!NOTE] > This article does not discuss restoring items from [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md). For information on restoring Azure Active Directory users, see [this article](../active-directory/fundamentals/active-directory-users-restore.md).
backup Backup Azure Private Endpoints Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-private-endpoints-concept.md
Title: Private endpoints for Azure Backup - Overview
description: This article explains about the concept of private endpoints for Azure Backup that helps to perform backups while maintaining the security of your resources. Previously updated : 05/24/2023 Last updated : 08/14/2023
This article describes how the [enhanced capabilities of private endpoints](#key
- While a Recovery Services vault is used by (both) Azure Backup and Azure Site Recovery, this article discusses the use of private endpoints for Azure Backup only. -- You can create private endpoints for new Recovery Services vaults that don't have any items registered/protected to the vault, only.
+- You can create private endpoints for new Recovery Services vaults that don't have any items registered/protected to the vault, only. However, private endpoints are currently not supported for Backup vaults.
>[!Note] >You can't create private endpoints using static IP.
backup Backup Managed Disks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-managed-disks-cli.md
Title: Back up Azure Managed Disks using Azure CLI description: Learn how to back up Azure Managed Disks using Azure CLI.-+ Previously updated : 09/17/2021 Last updated : 08/14/2023
az dataprotection backup-instance list-from-resourcegraph --datasource-type Azur
] ```
-You can specify a retention rule while triggering backup. To view the retention rules in policy, look through the policy JSON for retention rules. In the below example, the rule with the name _default_ is displayed and we'll use that rule for the on-demand backup.
+You can specify a rule and tag name while triggering a backup. To view the rules in the policy, look through the policy JSON. In the example below, the rule named *BackupDaily* with the tag name *Default* is displayed, and we'll use that rule for the on-demand backup.
```json
-{
- "isDefault": true,
- "lifecycles": [
- {
- "deleteAfter": {
- "duration": "P7D",
- "objectType": "AbsoluteDeleteOption"
+"name": "BackupDaily",
+ "objectType": "AzureBackupRule",
+ "trigger": {
+ "objectType": "ScheduleBasedTriggerContext",
+ "schedule": {
+ "repeatingTimeIntervals": [
+ "R/2022-09-27T23:30:00+00:00/P1D"
+ ],
+ "timeZone": "UTC"
},
- "sourceDataStore": {
- "dataStoreType": "OperationalStore",
- "objectType": "DataStoreInfoBase"
- }
- }
- ],
- "name": "Default",
- "objectType": "AzureRetentionRule"
+ "taggingCriteria": [
+ {
+ "criteria": null,
+ "isDefault": true,
+ "tagInfo": {
+ "eTag": null,
+ "id": "Default_",
+ "tagName": "Default"
+ },
+ "taggingPriority": 99
} ``` Trigger an on-demand backup using the [az dataprotection backup-instance adhoc-backup](/cli/azure/dataprotection/backup-instance#az-dataprotection-backup-instance-adhoc-backup) command. + ```azurecli-interactive
-az dataprotection backup-instance adhoc-backup --name "diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166" --rule-name "Default" --resource-group "000pikumar" --vault-name "PratikPrivatePreviewVault1"
+az dataprotection backup-instance adhoc-backup --name "diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166" --rule-name "BackupDaily" --resource-group "000pikumar" --vault-name "PratikPrivatePreviewVault1" --retention-tag-override "default"
``` ## Tracking jobs
backup Create Manage Backup Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/create-manage-backup-vault.md
Title: Create and manage Backup vaults description: Learn how to create and manage the Backup vaults. Previously updated : 07/05/2023 Last updated : 08/10/2023
The vault move across subscriptions and resource groups is supported in all publ
1. In the **Resource group** drop-down list, select an existing resource group or select **Create new** to create a new resource group.
- The subscription remains the same and gets auto-populated.
+ The subscription remains the same and gets auto populated.
:::image type="content" source="./media/backup-vault-overview/select-existing-or-create-resource-group-inline.png" alt-text="Screenshot showing the selection of an existing resource group or creation of a new resource group." lightbox="./media/backup-vault-overview/select-existing-or-create-resource-group-expanded.png":::
Follow these steps:
:::image type="content" source="./media/backup-vault-overview/monitor-postgresql-restore-to-secondary-region.png" alt-text="Screenshot shows how to monitor the postgresql restore to the secondary region." lightbox="./media/backup-vault-overview/monitor-postgresql-restore-to-secondary-region.png":::
+## Cross Subscription Restore using Azure portal
+
+Some datasources of Backup vault support restore to a subscription different from that of the source machine. Cross Subscription Restore (CSR) is enabled for existing vaults by default, and you can use it if supported for the intended datasource.
+
+>[!Note]
+>The feature is currently not supported for Azure Kubernetes Service (AKS) and Azure VMware Solution (AVS) backup.
+
+To do Cross Subscription Restore, follow these steps:
+
+1. In the *Backup vault*, go to **Backup Instance** > **Restore**.
+1. Choose the *Subscription* to which you want to restore, and then select **Restore**.
+
+There may be instances when you need to disable Cross Subscription Restore based on your cloud infrastructure. You can enable, disable, or permanently disable Cross Subscription Restore for existing vaults by selecting *Backup vault* > **Properties** > **Cross Subscription Restore**.
++
+You can also select the state of CSR during the creation of Backup vault.
++
+>[!Note]
+>- Once CSR is permanently disabled on a vault, it can't be re-enabled because it's an irreversible operation.
+>- If CSR is disabled but not permanently disabled, then you can reverse the operation by selecting **Vault** > **Properties** > **Cross Subscription Restore** > **Enable**.
+>- If a Backup vault is moved to a different subscription when CSR is disabled or permanently disabled, restore to the original subscription will fail.
+
## Next steps - [Configure backup on Azure PostgreSQL databases](backup-azure-database-postgresql.md#configure-backup-on-azure-postgresql-databases)
backup Private Endpoints Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/private-endpoints-overview.md
Title: Private endpoints overview description: Understand the use of private endpoints for Azure Backup and the scenarios where using private endpoints helps maintain the security of your resources. Previously updated : 05/24/2023 Last updated : 08/14/2023
This article will help you understand how private endpoints for Azure Backup wor
## Before you start -- Private endpoints can be created for new Recovery Services vaults only (that doesn't have any items registered to the vault). So private endpoints must be created before you attempt to protect any items to the vault.
+- Private endpoints can be created for new Recovery Services vaults only (that doesn't have any items registered to the vault). So private endpoints must be created before you attempt to protect any items to the vault. However, private endpoints are currently not supported for Backup vaults.
- One virtual network can contain private endpoints for multiple Recovery Services vaults. Also, one Recovery Services vault can have private endpoints for it in multiple virtual networks. However, the maximum number of private endpoints that can be created for a vault is 12. - If the public network access for the vault is set to **Allow from all networks**, the vault allows backups and restores from any machine registered to the vault. If the public network access for the vault is set to **Deny**, the vault only allows backups and restores from the machines registered to the vault that are requesting backups/restores via private IPs allocated for the vault. - A private endpoint connection for Backup uses a total of 11 private IPs in your subnet, including those used by Azure Backup for storage. This number may be higher for certain Azure regions. So we suggest that you have enough private IPs (/26) available when you attempt to create private endpoints for Backup.
baremetal-infrastructure About Nc2 On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/about-nc2-on-azure.md
In particular, this article highlights NC2 features.
* Operate seamlessly with on-premises Nutanix Clusters in Azure * Build and scale without constraints * Invent for today and be prepared for tomorrow with NC2 on Azure
+* Scale and flexibility that align with your needs
+ * Get scale, automation, and fast provisioning for your Nutanix workloads on global Azure infrastructure to invent with purpose.
+* Optimize your investment
+ * Keep using your existing Nutanix investments, skills, and tools to quickly increase business agility with Azure cloud services.
+* Gain cloud cost efficiencies
+ * Manage your cloud spending with license portability to significantly reduce the cost of running workloads in the cloud.
+* Modernize through the power of Azure
+ * Adapt quicker with unified data governance and gain immediate insights with transformative analytics to drive innovation.
-### Scale and flexibility that align with your needs
+### SKUs
-Get scale, automation, and fast provisioning for your Nutanix workloads on global Azure infrastructure to invent with purpose.
+We offer two SKUs: AN36 and AN36P. For specifications, see [SKUs](skus.md).
-### Optimize your investment
+### More benefits
-Keep using your existing Nutanix investments, skills, and tools to quickly increase business agility with Azure cloud services.
+* Microsoft Azure Consumption Contract (MACC) credits
-### Gain cloud cost efficiencies
+## **Azure Hybrid Benefits (AHUB) for NC2 on Azure**
-Manage your cloud spending with license portability to significantly reduce the cost of running workloads in the cloud.
+### Azure Commercial benefits
-### Modernize through the power of Azure
+**Cost Savings:** Leverage software investments to reduce costs in Azure
-Adapt quicker with unified data governance and gain immediate insights with transformative analytics to drive innovation.
+**Flexibility:** Use software commitments to run on-premises or in Azure, and shift from one to the other over time
-### SKUs
+**Unique to Azure:** Achieve savings unmatched by other cloud providers
-We offer two SKUs: AN36 and AN36P. For specifications, see [SKUs](skus.md).
+Available licensing offers are:
-### More benefits
+1. Azure Hybrid Benefit for Windows Server
+2. Azure Hybrid Benefit for SQL Server
+3. Extended Security Updates (ESU)
-* Microsoft Azure Consumption Contract (MACC) credits
+### Azure Hybrid Benefit for Windows Server
+
+- Convert or re-use Windows licensing with active software assurance in Azure for NC2 BareMetal hardware.
+- Re-use Windows Server on up to 2 VMs and up to 16 cores in Azure.
+- Run virtual machines on-premises **and** in Azure. Significantly reduce costs compared to running Windows Server in other public clouds
+
+### Azure Hybrid Benefit for SQL Server
+
+Azure-only benefit for customers with active SA (or subscriptions) on SQL cores
+
+Advantages of the hybrid benefit over license mobility when adopting IaaS are:
+
+- Use the SQL cores on-premises and in Azure simultaneously for up to 180 days, to allow for migration.
+- Available for SQL Server core licenses only.
+- No need to complete and submit license verification forms.
+- The hybrid benefit for windows and SQL can be used together for IaaS (PaaS abstracts the OS)
+
+### Extended Security Updates (ESU) for Windows Server
+
+NC2 on Azure requires manual escalation to request, approve, and deliver ESU keys to the client.
+
+- ESUs for deployment to the platforms below are intended to be free of charge (Azure and Azure connected); however, unlike the majority of VMs on Azure today, MSFT can't provide automatic updates. Rather, clients must request keys and install the updates themselves.
+- For regular on-premises customers, there is no manual escalation process; these customers must work with VLSC and EA processes. To be eligible to purchase ESUs for on-premises deployment, customers must have Software Assurance.
+
+#### To request ESU keys
+
+1. Draft an email to your Microsoft Account team. The email should contain the following:
+ 1. Your contact information in the body of the email
+ 1. Customer name and TPID
+ 1. Specific Deployment Scenario: Nutanix Cloud Clusters on Azure
+ 1. Number of Servers, nodes, or both where applicable (for example, HUB) requested to be covered by ESUs
+ 1. Point of Contact: Name and email address of a customer employee who can either install or manage the keys once provided. Manage in this context means ensuring that
+ 1. Keys are not disclosed to anyone outside of the client company
+ 2. Keys are not publicly exposed
+1. Do not cc the customer at this stage. The MSFT response will include the ESU Keys and the following language:
+
+>> **Terms of Use**
+
+>> By activating this key, you agree that it will be used only for NC2 on Azure. If you violate these terms, we may stop providing services to you or we may close your Microsoft account.
+
+For any questions on Azure Hybrid Benefits, please contact your Microsoft Account Executive.
## Support
baremetal-infrastructure Solution Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/solution-design.md
The following table describes the network topologies supported by each network f
| :- |::| |Connectivity to BareMetal (BM) in a local VNet| Yes | |Connectivity to BM in a peered VNet (Same region)|Yes |
-|Connectivity to BM in a peered VNet (Cross region or global peering)|No |
+|Connectivity to BM in a peered VNet\* (Cross region or global peering)\*|No |
|On-premises connectivity to Delegated Subnet via Global and Local Expressroute |Yes| |ExpressRoute (ER) FastPath |No | |Connectivity from on-premises to a BM in a spoke VNet over ExpressRoute gateway and VNet peering with gateway transit|Yes |
The following table describes the network topologies supported by each network f
|On-premises connectivity via Secured HUB(Az Firewall NVA) | No| |Connectivity from UVMs on NC2 nodes to Azure resources|Yes|
+\* You can overcome this limitation by setting up a Site-to-Site VPN.
+ ## Constraints The following table describes whatΓÇÖs supported for each network features configuration:
The following table describes whatΓÇÖs supported for each network features confi
|Delegated subnet per VNet |1| |[Network Security Groups](../../../virtual-network/network-security-groups-overview.md) on NC2 on Azure-delegated subnets|No| |[User-defined routes (UDRs)](../../../virtual-network/virtual-networks-udr-overview.md#user-defined) on NC2 on Azure-delegated subnets|No|
-|Connectivity to [private endpoints](../../../private-link/private-endpoint-overview.md) from resources on Azure-delegated subnets|No|
+|Connectivity from BareMetal to [private endpoints](../../../private-link/private-endpoint-overview.md) in the same VNet on Azure-delegated subnets|No|
+|Connectivity from BareMetal to [private endpoints](../../../private-link/private-endpoint-overview.md) in a different spoke VNet connected to vWAN|Yes|
|Load balancers for NC2 on Azure traffic|No| |Dual stack (IPv4 and IPv6) virtual network|IPv4 only supported|
batch Job Pool Lifetime Statistics Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/job-pool-lifetime-statistics-migration-guide.md
The Azure Batch lifetime statistics API for jobs and pools will be retired on *A
## About the feature
-Currently, you can use API to retrieve lifetime statistics for [jobs](/rest/api/batchservice/job/get-all-lifetime-statistics#http) and [pools](/rest/api/batchservice/pool/get-all-lifetime-statistics#pools) in Batch. The API collects statistical data from when the Batch account was created for all jobs and pools created for the lifetime of the Batch account.
+Currently, you can use the API to retrieve lifetime statistics for jobs and pools in Batch. The API collects statistical data from when the Batch account was created for all jobs and pools created for the lifetime of the Batch account.
To make statistical data available to customers, the Batch service performs aggregation and roll-ups on a periodic basis. Due to these lifetime stats APIs being rarely exercised by Batch customers, these APIs are being retired as alternatives exist.
batch Private Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/private-connectivity.md
Title: Use private endpoints with Azure Batch accounts description: Learn how to connect privately to an Azure Batch account by using private endpoints. Previously updated : 12/16/2022 Last updated : 8/14/2023
When creating a private endpoint with your Batch account, keep in mind the follo
- If a Batch account resource is moved to a different resource group or subscription, the private endpoints can still work, but the association to the Batch account breaks. If you delete the private endpoint resource, its associated private endpoint connection still exists in your Batch account. You can manually remove connection from your Batch account. - To delete the private connection, either delete the private endpoint resource, or delete the private connection in the Batch account (this action disconnects the related private endpoint resource). - DNS records in the private DNS zone aren't removed automatically when you delete a private endpoint connection from the Batch account. You must manually remove the DNS records before adding a new private endpoint linked to this private DNS zone. If you don't clean up the DNS records, unexpected access issues might happen.
+- When a private endpoint is enabled for the Batch account, the [task authentication token](/rest/api/batchservice/task/add?tabs=HTTP#request-body) for Batch tasks isn't supported. The workaround is to use [Batch pools with managed identities](managed-identity-pools.md).
## Next steps
batch Simplified Compute Node Communication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-compute-node-communication.md
Title: Use simplified compute node communication description: Learn about the simplified compute node communication mode in the Azure Batch service and how to enable it. Previously updated : 04/14/2023 Last updated : 08/14/2023
Batch pools with the *classic* communication mode require the following networki
- Destination port `443` over TCP to `Storage.<region>` - Destination port `443` over TCP to `BatchNodeManagement.<region>` for certain workloads that require communication back to the Batch Service, such as Job Manager tasks
-Batch pools with the *simplified* communication mode require the following networking rules in NSGs, UDRs, and firewalls:
+Batch pools with the *simplified* communication mode only need outbound access to the Batch account's node management endpoint (see [Batch account public endpoints](public-network-access.md#batch-account-public-endpoints)). They require the following networking rules in NSGs, UDRs, and firewalls:
- Inbound: - None
The following are known limitations of the simplified communication mode:
- Learn more about [pools in virtual networks](batch-virtual-network.md). - Learn how to [create a pool with specified public IP addresses](create-pool-public-ip.md). - Learn how to [create a pool without public IP addresses](simplified-node-communication-pool-no-public-ip.md).
+- Learn how to [configure public network access for Batch accounts](public-network-access.md).
batch Simplified Node Communication Pool No Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-node-communication-pool-no-public-ip.md
Title: Create a simplified node communication pool without public IP addresses description: Learn how to create an Azure Batch simplified node communication pool without public IP addresses. Previously updated : 12/16/2022 Last updated : 8/14/2023
To restrict access to these nodes and reduce the discoverability of these nodes
1. Pools without public IP addresses must use Virtual Machine Configuration and not Cloud Services Configuration. 1. [Custom endpoint configuration](pool-endpoint-configuration.md) for Batch compute nodes doesn't work with pools without public IP addresses. 1. Because there are no public IP addresses, you can't [use your own specified public IP addresses](create-pool-public-ip.md) with this type of pool.
+1. The [task authentication token](/rest/api/batchservice/task/add?tabs=HTTP#request-body) for Batch tasks isn't supported. The workaround is to use [Batch pools with managed identities](managed-identity-pools.md).
## Create a pool without public IP addresses in the Azure portal
cdn Cdn Create New Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-create-new-endpoint.md
Title: Quickstart - Create an Azure CDN profile and endpoint+ description: This quickstart shows how to enable Azure CDN by creating a new CDN profile and CDN endpoint. ms.assetid: 4ca51224-5423-419b-98cf-89860ef516d2 Previously updated : 04/06/2022 Last updated : 08/14/2023 - + # Quickstart: Create an Azure CDN profile and endpoint In this quickstart, you enable Azure Content Delivery Network (CDN) by creating a new CDN profile, which is a collection of one or more CDN endpoints. After you've created a profile and an endpoint, you can start delivering content to your customers.
cloud-shell Quickstart Deploy Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart-deploy-vnet.md
the **Azure Cloud Shell - VNet storage** template.
## 2. Provision the virtual network using the ARM template
-Use the [Azure Cloud Shell - VNet][07] template to create Cloud Shell resources in a virtual
+Use the [Azure Cloud Shell - VNet][08] template to create Cloud Shell resources in a virtual
network. The template creates three subnets under the virtual network created earlier. You may choose to change the supplied names of the subnets or use the defaults. The virtual network, along with the subnets, require valid IP address assignments.
subscription.
## 3. Provision the VNET storage using the ARM template
-Use the [Azure Cloud Shell - VNet storage][08] template to create Cloud Shell resources in a virtual
+Use the [Azure Cloud Shell - VNet storage][09] template to create Cloud Shell resources in a virtual
network. The template creates the storage account and assigns it to the private VNET. The ARM template requires specific information about the resources you created earlier, along
private Cloud Shell instance.
[07]: /azure/virtual-network/virtual-network-manage-subnet?tabs=azure-portal#change-subnet-settings [08]: https://aka.ms/cloudshell/docs/vnet/template [09]: https://azure.microsoft.com/resources/templates/cloud-shell-vnet-storage/
-[10]: media/quickstart-deploy-vnet/setup-cloud-shell-storage.png
cognitive-services Sending Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/concepts/sending-requests.md
The **Bing** APIs support search actions that return results according to their
All endpoints support queries that return a specific language and/or location by longitude, latitude, and search radius. For complete information about the parameters supported by each endpoint, see the reference pages for each type.
-For examples of basic requests using the Autosuggest API, see [Autosuggest Quickstarts](/azure/cognitive-services/bing-autosuggest).
+For examples of basic requests using the Autosuggest API, see [Autosuggest Quickstarts](/azure/cognitive-services/bing-autosuggest/get-suggested-search-terms).
## Bing Autosuggest API requests
cognitive-services Endpoint Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/endpoint-custom.md
For information about configuring a Custom Search instance, see [Configure your
The **Bing** APIs support search actions that return results according to their type. All search endpoints return results as JSON response objects.  All endpoints support queries that return a specific language and/or location by longitude, latitude, and search radius. For complete information about the parameters supported by each endpoint, see the reference pages for each type.
-For examples of basic requests using the Custom Search API, see [Custom Search Quick-starts](/azure/cognitive-services/bing-custom-search/)
+For examples of basic requests using the Custom Search API, see [Custom Search Quick-starts](/azure/cognitive-services/bing-custom-search/quick-start)
cognitive-services Endpoint News https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/endpoint-news.md
Returns news topics that are currently trending on social networks. When the `/t
For details about headers, parameters, market codes, response objects, errors, etc., see the [Bing News search API v7](/rest/api/cognitiveservices-bingsearch/bing-news-api-v7-reference) reference. For complete information about the parameters supported by each endpoint, see the reference pages for each type.
-For examples of basic requests using the News search API, see [Bing News Search Quick-starts](/azure/cognitive-services/bing-news-search).
+For examples of basic requests using the News search API, see [Bing News Search Quick-starts](/azure/cognitive-services/bing-news-search/search-the-web).
cognitive-services Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/csharp.md
Responses from the Bing Web Search API are returned as JSON. This sample respons
}, { "name": "Computer Vision API",
- "url": "https://azure.microsoft.com/services/cognitive-services/computer-vision/",
+ "url": "https://azure.microsoft.com/products/ai-services?activetab=pivot:visiontab",
"snippet": "Extract the data you need from images using optical character recognition and image analytics with Computer Vision APIs from Microsoft Azure." }, {
cognitive-services Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/java.md
Responses from the Bing Web Search API are returned as JSON. This sample respons
}, { "name": "Computer Vision API",
- "url": "https://azure.microsoft.com/services/cognitive-services/computer-vision/",
+ "url": "https://azure.microsoft.com/products/ai-services?activetab=pivot:visiontab",
"snippet": "Extract the data you need from images using optical character recognition and image analytics with Computer Vision APIs from Microsoft Azure." }, {
cognitive-services Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/nodejs.md
Responses from the Bing Web Search API are returned as JSON. This sample respons
}, { "name": "Computer Vision API",
- "url": "https://azure.microsoft.com/services/cognitive-services/computer-vision/",
+ "url": "https://azure.microsoft.com/products/ai-services?activetab=pivot:visiontab",
"snippet": "Extract the data you need from images using optical character recognition and image analytics with Computer Vision APIs from Microsoft Azure." }, {
cognitive-services Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/php.md
Responses from the Bing Web Search API are returned as JSON. This sample respons
}, { "name": "Computer Vision API",
- "url": "https://azure.microsoft.com/services/cognitive-services/computer-vision/",
+ "url": "https://azure.microsoft.com/products/ai-services?activetab=pivot:visiontab",
"snippet": "Extract the data you need from images using optical character recognition and image analytics with Computer Vision APIs from Microsoft Azure." }, {
cognitive-services Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/ruby.md
Responses from the Bing Web Search API are returned as JSON. This sample respons
}, { "name": "Computer Vision API",
- "url": "https://azure.microsoft.com/services/cognitive-services/computer-vision/",
+ "url": "https://azure.microsoft.com/products/ai-services?activetab=pivot:visiontab",
"snippet": "Extract the data you need from images using optical character recognition and image analytics with Computer Vision APIs from Microsoft Azure." }, {
cognitive-services Tutorial Bing Web Search Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/tutorial-bing-web-search-single-page-app.md
[!INCLUDE [Bing move notice](../bing-web-search/includes/bing-move-notice.md)]
-This single-page app demonstrates how to retrieve, parse, and display search results from the Bing Web Search API. The tutorial uses boilerplate HTML and CSS, and focuses on the JavaScript code. HTML, CSS, and JS files are available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/Tutorials/bing-web-search) with quickstart instructions.
+This single-page app demonstrates how to retrieve, parse, and display search results from the Bing Web Search API. The tutorial uses boilerplate HTML and CSS, and focuses on the JavaScript code. HTML, CSS, and JS files are available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/Tutorials/Bing-Web-Search) with quickstart instructions.
This sample app can:
cognitive-services Web Search Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/web-search-endpoints.md
Endpoint: For details about headers, parameters, market codes, response objects,
## Response JSON
-The response to a Web search request includes all results as JSON objects. Parsing the result requires procedures that handle the elements of each type. See the [tutorial](./tutorial-bing-web-search-single-page-app.md) and [source code](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/Tutorials/bing-web-search) for examples.
+The response to a Web search request includes all results as JSON objects. Parsing the result requires procedures that handle the elements of each type. See the [tutorial](./tutorial-bing-web-search-single-page-app.md) and [source code](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/Tutorials/Bing-Web-Search) for examples.
## Next steps
communication-services Media Quality Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/media-quality-sdk.md
Last updated 11/30/2022
+zone_pivot_groups: acs-plat-web-ios-android-windows
# Media quality statistics To help understand media quality in VoIP and Video calls using Azure Communication Services, we have a feature called "Media quality statistics" that you can use to examine the low-level audio, video, and screen-sharing quality metrics for incoming and outgoing calls.
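As a quick orientation, here's a minimal Web (JavaScript) sketch of that flow; it assumes `call` is an established Call object from the Calling SDK (version 1.8.0 or later) and uses the `Features.MediaStats` API covered later in this article.
```js
// Minimal sketch, assuming `call` is an established Call object from the
// Azure Communication Services Calling SDK (version 1.8.0 or later).
import { Features } from '@azure/communication-calling';

const mediaStatsFeature = call.feature(Features.MediaStats);
const mediaStatsCollector = mediaStatsFeature.createCollector();

// Raised periodically with low-level audio, video, and screen-sharing metrics.
mediaStatsCollector.on('sampleReported', (sample) => {
    console.log('media stats sample', sample);
});

// Dispose the collector when the metrics are no longer needed (for example, when the call ends).
// mediaStatsCollector.dispose();
```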
-## Media quality statistics for ongoing call
-> [!NOTE]
-> This API is provided as a Public Preview ('beta') for developers and may change based on feedback that we receive. Do not use this API in a production environment.
-> [!IMPORTANT]
-> There is also a breaking API change on MediaStats in the SDK, beginning with version 1.8.0-beta.1
-Media quality statistics is an extended feature of the core `Call` API. You first need to obtain the MediaStats feature API object:
-```js
-const mediaStatsFeature = call.feature(Features.MediaStats);
-```
-
-Then, define `mediaStatsCollectorOptions` of type `MediaStatsCollectorOptions` if you want control over intervals. Otherwise, the SDK uses default values.
-
-```js
-const mediaStatsCollectorOptions: SDK.MediaStatsCollectorOptions = {
- aggregationInterval: 10,
- dataPointsPerAggregation: 6
-};
-```
-
-Where
-- `aggregationInterval` is the interval, in seconds, over which the statistics are aggregated.
-- `dataPointsPerAggregation` defines how many data points there are for each aggregation event.
-After adding an event listener to the media stats collector, you'll receive a `mediaStatsEmitted` or `summaryReported` event with stats every `aggregationInterval * dataPointsPerAggregation` seconds.
-
-Example:
-- If you set `aggregationInterval` = 1
-- `dataPointsPerAggregation` = 60
-The media stats `mediaStatsEmitted` or `summaryReported` event will be raised every 60 seconds and will contain 60 unique units for each stat recorded.
-- If you set `aggregationInterval` = 60
-- `dataPointsPerAggregation` = 1
-The media stats `mediaStatsEmitted` or `summaryReported` event will be raised every 60 seconds and will contain 1 unique unit for each stat recorded.
-
-### SDK Version `>= 1.8.0`
-
-As a developer, you can invoke the `createCollector` method with the optional `mediaStatsCollectorOptions` defined earlier.
-
-```js
-const mediaStatsCollector = mediaStatsFeature.createCollector(mediaStatsCollectorOptions);
-
-mediaStatsCollector.on('sampleReported', (sample) => {
- console.log('media stats sample', sample);
-});
-
-mediaStatsCollector.on('summaryReported', (summary) => {
- console.log('media stats summary', summary);
-});
-```
-To dispose of the media stats collector, invoke the `dispose` method of `mediaStatsCollector`.
-
-```js
-mediaStatsCollector.dispose();
-```
-
-We removed the `disposeAllCollectors` method. The collectors are reclaimed when `mediaStatsFeature` is disposed.
-
-## Best practices
-If you want to collect this data for off-line inspection (after a call ends), it is recommended to collect this data and send it to your pipeline ingest after your call has ended. If you transmit this data during a call, it could use internet bandwidth that is needed to continue an Azure Communication Services call (especially when available bandwidth is low).
-
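As a rough illustration of this recommendation, the following sketch buffers samples during the call and uploads them once the call disconnects; the ingest URL is a placeholder, and `call` and `mediaStatsCollector` are assumed to have been created as shown earlier.
```js
// Minimal sketch, assuming `call` and `mediaStatsCollector` were created as shown above.
// The ingest endpoint is a placeholder for your own telemetry pipeline.
const bufferedSamples = [];

mediaStatsCollector.on('sampleReported', (sample) => {
    bufferedSamples.push(sample);
});

call.on('stateChanged', async () => {
    if (call.state === 'Disconnected') {
        // Upload after the call ends so the transfer doesn't compete for call bandwidth.
        await fetch('https://example.com/media-stats-ingest', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(bufferedSamples)
        });
    }
});
```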
-## MediaStats Metrics for SDK Version `>= 1.8.0`
-
-The bandwidth metrics have changed to `availableBitrate` in the Audio Send / Video Send metrics.
-
-### Audio Send metrics
-| Metric Name | Description | Comments |
-| -- | -- | -- |
-| id | stats id | It is used to identify stats across the events, especially when there are multiple stats with same media type and direction in an event. |
-| codecName | codec name | OPUS, G722|
-| bitrate | audio send bitrate (bps) | General values are in the 24 kbps range (36-128 kbps typical) |
-| jitterInMs | packet jitter (milliseconds) | Lower is better. |
-| packetsPerSecond | packet rate (packets/sec) | |
-| packetsLostPerSecond | packet loss rate (packets/sec) | Lower is better. |
-| rttInMs | round-trip time (milliseconds) | Lower is better. It's calculated from RTCP Receiver Report. A round trip time of 200 ms or less is recommended. |
-| pairRttInMs | round-trip time (milliseconds) | Lower is better. It's similar to rttInMS but is calculated from STUN connectivity check. A round trip time of 200 ms or less is recommended. |
-| availableBitrate | bandwidth estimation (bps) | |
-| audioInputLevel | audio volume level from microphone | The value ranges from 0-65536. 0 represents silence |
-
-### Audio Receive metrics
-| Metric Name | Description | Comments |
-| -- | -- | -- |
-| id | stats id | It is used to identify stats across the events, especially when there are multiple stats with same media type and direction in an event. |
-| codecName | codec name | OPUS, G722|
-| bitrate | audio receive bitrate (bps) | General values are in the 24 kbps range (36-128 kbps typical) |
-| jitterInMs | packet jitter (milliseconds) | Lower is better. |
-| packetsPerSecond | packet rate (packets/sec) | |
-| packetsLostPerSecond | packet loss rate (packets/sec) | Lower is better. |
-| pairRttInMs | round-trip time (milliseconds) | Lower is better. It is calculated from STUN connectivity check. A round trip time of 200 ms or less is recommended. |
-| jitterBufferInMs | jitter buffer (milliseconds) | Lower is better. The jitter buffer is used for smooth playout. This value is how long the packets of the samples stay in the jitter buffer. |
-| audioOutputLevel | audio volume level from receiving stream | The value ranges from 0-65536. 0 represents silence. |
-| healedRatio | ratio of concealedSamples(except silentConcealedSamples) to total received samples | Information only. |
-
-### Video Send metrics
-| Metric Name | Description | Comments |
-| -- | -- | -- |
-| id | stats id | It is used to identify stats across the events, especially when there are multiple stats with same media type and direction in an event. |
-| codecName | codec name | H264, VP8, VP9 |
-| bitrate | video send bitrate (bps) | |
-| jitterInMs | packet jitter (milliseconds) | Lower is better. |
-| packetsPerSecond | packet rate (packets/sec) | |
-| packetsLostPerSecond | packet loss rate (packets/sec) | Lower is better. |
-| rttInMs | round-trip time (milliseconds) | Lower is better. It is calculated from RTCP Receiver Report. A round trip time of 200 ms or less is recommended. |
-| pairRttInMs | round-trip time (milliseconds) | Lower is better. It is similar to rttInMS but is calculated from STUN connectivity check. A round trip time of 200 ms or less is recommended. |
-| availableBitrate | bandwidth estimation (bps) | 1.5 Mbps or higher is recommended for high-quality video for upload/download. |
-| frameRateInput | frame rate originating from the video source (frames/sec) | |
-| frameWidthInput | frame width of the last frame originating from video source (pixel) | |
-| frameHeightInput | frame height of the last frame originating from video source (pixel) | |
-| frameRateEncoded | frame rate successfully encoded for the RTP stream (frames/sec) | |
-| frameRateSent | frame rate sent on the RTP stream (frames/sec) | |
-| frameWidthSent | frame width of the encoded frame (pixel) | |
-| frameHeightSent | frame height of the encoded frame (pixel) | |
-| framesSent | frames sent on the RTP stream | |
-| framesEncoded | frames successfully encoded for the RTP stream | |
-| keyFramesEncoded | key frames successfully encoded for the RTP stream | |
-
-### Video Receive metrics
-| Metric Name | Description | Comments |
-| -- | -- | -- |
-| id | stats id | It is used to identify stats across the events, especially when there are multiple stats with same media type and direction in an event. |
-| codecName | codec name | H264, VP8, VP9 |
-| bitrate | video receive bitrate (bps) | |
-| jitterInMs | packet jitter (milliseconds) | Lower is better. |
-| packetsPerSecond | packet rate (packets/sec) | |
-| packetsLostPerSecond | packet loss rate (packets/sec) | Lower is better. |
-| pairRttInMs | round-trip time (milliseconds) | Lower is better. A round trip time of 200 ms or less is recommended. |
-| jitterBufferInMs | jitter buffer (milliseconds) | Lower is better. The jitter buffer is used for smooth playout. This value is how long the packets of the frame stay in the jitter buffer. |
-| streamId | stream id | The streamId value corresponds to id in VideoStreamCommon. It can be used to match the sender. |
-| frameRateOutput | frame rate output (frames/sec) | |
-| frameRateDecoded | frame rate correctly decoded for the RTP stream (frames/sec) | |
-| frameRateReceived | frame rate received on the RTP stream (frames/sec) | |
-| frameWidthReceived | frame width of the decoded frame (pixel) | |
-| frameHeightReceived | frame height of the decoded frame (pixel) | |
-| longestFreezeDurationInMs | longest freeze duration (milliseconds) | |
-| totalFreezeDurationInMs | total freeze duration (milliseconds) | |
-| framesReceived | total number of frames received on the RTP stream | |
-| framesDecoded | total number of frames correctly decoded for the RTP stream | |
-| framesDropped | total number of frames dropped | |
-| keyFramesDecoded | total number of key frames correctly decoded for the RTP stream | |
-
-### ScreenShare Send metrics
-Currently stats fields are the same as *Video Send metrics*
-
-### ScreenShare Receive metrics
-Currently stats fields are the same as *Video Receive metrics*
-
-### Using Media Quality Statistics on SDK versions older than `1.8.0`
-If you are using an ACS SDK version older than 1.8.0, please see below for documentation on how to use this functionality.
-
-As a developer you can invoke the `startCollector` method with optional `mediaStatsSubscriptionOptions`.
-
-```js
-const mediaStatsCollector = mediaStatsFeature.startCollector(mediaStatsSubscriptionOptions);
-
-mediaStatsCollector.on('mediaStatsEmitted', (mediaStats) => {
- console.log('media stats:', mediaStats.stats);
- console.log('media stats collectionInterval:', mediaStats.collectionInterval);
- console.log('media stats aggregationInterval:', mediaStats.aggregationInterval);
-});
-```
-
-To dispose of the media stats collector, invoke the `dispose` method of `mediaStatsCollector`.
-
-```js
-mediaStatsCollector.dispose();
-```
-
-To dispose of all collectors, invoke the `disposeAllCollectors` method of `mediaStatsFeature`.
-
-```js
-mediaStatsFeature.disposeAllCollectors();
-```
-
-### Bandwidth metrics
-| Metric Name | Purpose | Detailed explanation | Comments |
-| -- | -- | -- | -- |
-| SentBWEstimate | Bandwidth estimation | Average video bandwidth allocated for the channel bps (bits per second) | 1.5 Mbps or higher is recommended for high-quality video for upload/download. |
--
-### Audio quality metrics
-| Metric Name | Purpose | Details | Comments |
-| - | - | - | |
-| audioSendBitrate | Sent bitrate | Send bitrate of audio (bits per second) | General values are in the 24 kbps range (36-128 kbps typical) |
-| audioRecvBitrate | Received audio bitrate | Received bitrate of audio received (bits per second) | |
-| audioSendPackets | Sent packets | The number of audio packets sent in last second (packets per second) | |
-| audioRecvJitterBufferMs | Jitter buffer delay | The jitter buffer is used for smooth playout. This value is how long the packets of the samples stay in the jitter buffer. (in milliseconds (ms)) | Lower is better. |
-| audioRecvPacketsLost | Received packet loss | The number of audio packets that were to be received but were lost. Results are packets per second (over the last second). | Lower is better. |
-| audioSendPacketsLost | Sent packet loss | The number of audio packets sent that were lost (not received) in the last second. Results are packets per second (over the last second). | Lower is better. |
-| audioRecvPackets | Received packets | The number of audio packets received in the last second. Results are packets per second (over the last second). | Information only. |
-| audioSendCodecName | Sent codec | Audio codec used. | Information only. |
-| audioSendRtt | Send Round-Trip Time | Round trip time between your system and Azure Communication Services server. Results are in milliseconds (ms). | A round trip time of 200 ms or less is recommended. |
-| audioSendPairRtt | Send Pair Round-Trip Time | Round trip time for entire transport. Results are in milliseconds (ms). | A round trip time of 200 ms or less is recommended. |
-| audioRecvPairRtt | Receive Pair Round-Trip Time | Round trip time for entire transport. Results are in milliseconds (ms). | A round trip time of 200 ms or less is recommended. |
-| audioSendAudioInputLevel | Input level for microphone | Sent audio playout level. If source data is between 0-1, media stack multiplies it with 0xFFFF. Depends on microphone. Used to confirm if microphone is silent (no incoming energy). | Microphone input level. |
-| audioRecvAudioOutputLevel | Speaker output level. | Received audio playout level. If source data is between 0-1, media stack multiplies it with 0xFFFF. | Speaker output level. |
--
-### Video quality metrics
-| Metric Name | Purpose | Details | Comments |
-| | -- | - | |
-| videoSendFrameRateSent | Sent frame rate | Number of video frames sent. Results are frames per second | Higher is better:<br>25-30 fps (360p or better)<br>8-15 fps (270p or lower)<br>Frames/second<br> |
-| videoSendFrameWidthSent | Sent width | Video width resolution sent. | Higher is better. Possible values:<br>1920, 1280, 960, 640, 480, 320 |
-| videoSendFrameHeightSent | Sent height | Video height sent. Higher is better | Higher is better. Possible values:<br>1080, 720, 540, 360, 270, 240 |
-| videoSendBitrate | Sent bitrate | Amount of video bitrate being sent. Results are bps (bits per second) | |
-| videoSendPackets | Sent packets | The number of video packets sent. Results are packets per second (over the last second). | Information only |
-| VideoSendCodecName | Sent codec | Video codec used for encoding video | VP8 (1:1 calls) and H264 |
-| videoRecvJitterBufferMs | Received Jitter | The jitter buffer is used for smooth playout. This value is how long the packets of the frame stay in the jitter buffer. (in milliseconds (ms)) | Lower is better. |
-| videoSendRtt | Send Round-Trip Time | Response time between your system and Azure Communication Services server. Lower is better | A round trip time of 200 ms or less is recommended. |
-| videoSendPairRtt | Send Pair Round-Trip Time | Response time between your system and Azure Communication Services server. Results are in milliseconds (ms). | A round trip time of 200 ms or less is recommended. |
-| videoRecvPairRtt | Receive Pair Round-Trip Time | Round trip time for entire transport. Results are in milliseconds (ms). | A round trip time of 200 ms or less is recommended. |
-| videoRecvFrameRateReceived | Received frame rate | Frame rate of video currently received | 25-30 fps (360p or better)<br>8-15 fps (270p or lower) |
-| videoRecvFrameWidthReceived | Received width | Width of video currently received | 1920, 1280, 960, 640, 480, 320 |
-| videoRecvFrameHeightReceived | Received height | Height of video currently received | 1080, 720, 540, 360, 270, 240 |
-| videoRecvBitrate | Received bitrate | Bitrate of video currently received (bits per second) | Information only |
-| videoRecvPackets | Received packets | The number of packets received in last second | Information only |
-| VideoRecvPacketsLost | Received packet loss | The number of video packets that were to be received but were lost. Results are packets per second (over the last second). | Lower is better |
-| videoSendPacketsLost | Sent packet loss | The number of video packets that were sent but were lost. Results are packets per second (over the last second). | Lower is better |
-| videoSendFrameRateInput | Sent framerate input | Framerate measurements from the stream input into peerConnection | Information only |
-| videoRecvFrameRateDecoded | Received decoded framerate | Framerate from decoder output. This metric takes videoSendFrameRateInput as an input, might be some loss in decoding | Information only |
-| videoSendFrameWidthInput | Sent frame width input | Frame width of the stream input into peerConnection. This takes videoRecvFrameRateDecoded as an input, might be some loss in rendering. | 1920, 1280, 960, 640, 480, 320 |
-| videoSendFrameHeightInput | Sent frame height input | Frame height of the stream input into peerConnection | 1080, 720, 540, 360, 270, 240 |
-| videoRecvLongestFreezeDuration | Received longest freeze duration | How long was the longest freeze | Lower is better |
-| videoRecvTotalFreezeDuration | Received total freeze duration | Total freeze duration in seconds | Lower is better |
-
-### Screen share quality metrics
-| Metric Name | Purpose | Details | Comments |
-| -- | -- | - | - |
-| screenSharingSendFrameRateSent | Sent frame rate | Number of video frames sent. Higher is better | 1-30 FPS (content aware, variable) |
-| screenSharingSendFrameWidthSent | Sent width | Video resolution sent. Higher is better | 1920 pixels (content aware, variable) |
-| screenSharingSendFrameHeightSent | Sent height | Video resolution sent. Higher is better | 1080 pixels (content aware, variable) |
-| screenSharingSendCodecName | Sent codec | Codec used for encoding screen share | Information only |
-| screenSharingRecvFrameRateReceived | Received frame rate | Number of video frames received. Lower is better. | 1-30 FPS |
-| screenSharingRecvFrameWidthReceived | Received width | Video resolution received. Higher is better | 1920 pixels (content aware, variable) |
-| screenSharingRecvFrameHeightReceived | Received height | Video resolution sent. Higher is better | 1080 pixels (content aware, variable) |
-| screenSharingRecvCodecName | Received codec | Codec used for decoding video stream | Information only |
-| screenSharingRecvJitterBufferMs | Received Jitter | The jitter buffer is used for smooth playout. This value is how long the packets of the frame stay in the jitter buffer. (in milliseconds (ms)) | |
-| screenSharingRecvPacketsLost | Received packet loss | The number of screen share packets that were to be received but were lost. Results are packets per second (over the last second). | Lower is better |
-| screenSharingSendPacketsLost | Sent packet loss | The number of screen share packets that were sent but were lost. Results are packets per second (over the last second). | Lower is better |
-| screenSharingSendFrameRateInput | Sent framerate input | Framerate measurements from the stream input into peerConnection | Information only |
-| screenSharingRecvFrameRateDecoded | Received decoded framerate | Framerate from decoder output | Information only |
-| screenSharingRecvFrameRateOutput | Received framerate output | Framerate of the stream that was sent to renderer | Information only |
-| screenSharingSendFrameWidthInput | Sent frame width input | Frame width of the stream input into peerConnection | Information only |
-| screenSharingSendFrameHeightInput | Sent frame height input | Frame height of the stream input into peerConnection | Information only |
-| screenSharingRecvLongestFreezeDuration | Received longest freeze duration | How long was the longest freeze | Lower is better |
-| screenSharingRecvTotalFreezeDuration | Received total freeze duration | Total freeze duration in seconds | Lower is better |
communication-services Control Mid Call Media Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/control-mid-call-media-actions.md
Previously updated : 05/14/2023 Last updated : 08/09/2023 -+ # How to control mid-call media actions with Call Automation
->[!IMPORTANT]
->Functionality described on this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly.
->Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/acs-tap-invite).
Call Automation uses a REST API interface to receive requests for actions and provide responses that indicate whether the request was submitted successfully. Due to the asynchronous nature of calling, most actions have corresponding events that are triggered when the action completes successfully or fails. This guide covers the actions available to developers during calls, like Send DTMF and Continuous DTMF Recognition. Each action is accompanied by sample code that shows how to invoke it.
As a prerequisite, we recommend you to read the below articles to make the most
For all the code samples, `client` is a CallAutomationClient object that can be created as shown, and `callConnection` is the CallConnection object obtained from the Answer or CreateCall response. You can also obtain it from callback events received by your application. ### [csharp](#tab/csharp) ```csharp
-var client = new CallAutomationClient("<resource_connection_string>");
+var callAutomationClient = new CallAutomationClient("<Azure Communication Services connection string>");
``` ### [Java](#tab/java) ```java
- CallAutomationClient client = new CallAutomationClientBuilder().connectionString("<resource_connection_string>").buildClient();
+CallAutomationClient callAutomationClient = new CallAutomationClientBuilder()
+ .connectionString("<Azure Communication Services connection string>")
+ .buildClient();
+```
+### [JavaScript](#tab/javascript)
+```javascript
+callAutomationClient = new CallAutomationClient("<Azure Communication Services connection string>");
+```
+### [Python](#tab/python)
+```python
+call_automation_client = CallAutomationClient.from_connection_string("<Azure Communication Services connection string>")
``` --
Send a list of DTMF tones to an external participant.
```csharp var tones = new DtmfTone[] { DtmfTone.One, DtmfTone.Two, DtmfTone.Three, DtmfTone.Pound };
-await callAutomationClient.GetCallConnection(callConnectionId)
- .GetCallMedia()
- .SendDtmfAsync(targetParticipant: tones: tones, new PhoneNumberIdentifier(c2Target), operationContext: "dtmfs-to-ivr");
+await callAutomationClient.GetCallConnection(callConnectionId)
+ .GetCallMedia()
+ .SendDtmfTonesAsync(tones, new PhoneNumberIdentifier(c2Target), "dtmfs-to-ivr");
``` ### [Java](#tab/java) ```java
-List<DtmfTone> tones = new ArrayList<DtmfTone>();
-tones.add(DtmfTone.ZERO);
-
-callAutomationClient.getCallConnectionAsync(callConnectionId)
- .getCallMediaAsync()
- .sendDtmfWithResponse(tones, new PhoneNumberIdentifier(c2Target), "dtmfs-to-ivr").block();;
+List<DtmfTone> tones = Arrays.asList(DtmfTone.ONE, DtmfTone.TWO, DtmfTone.THREE, DtmfTone.POUND);
+callAutomationClient.getCallConnectionAsync(callConnectionId)
+ .getCallMediaAsync()
+ .sendDtmfTonesWithResponse(tones, new PhoneNumberIdentifier(c2Target), "dtmfs-to-ivr")
+ .block();
+```
+### [JavaScript](#tab/javascript)
+```javascript
+const tones = [DtmfTone.One, DtmfTone.Two, DtmfTone.Three];
+const sendDtmfTonesOptions: SendDtmfTonesOptions = {
+ operationContext: "dtmfs-to-ivr"
+};
+const result: SendDtmfTonesResult = await callAutomationClient.getCallConnection(callConnectionId)
+ .getCallMedia()
+ .sendDtmfTones(tones, {
+ phoneNumber: c2Target
+ }, sendDtmfTonesOptions);
+console.log("sendDtmfTones, result=%s", result);
+```
+### [Python](#tab/python)
+```python
+tones = [DtmfTone.ONE, DtmfTone.TWO, DtmfTone.THREE]
+result = call_automation_client.get_call_connection(call_connection_id).send_dtmf_tones(
+ tones = tones,
+ target_participant = PhoneNumberIdentifier(c2_target),
+ operation_context = "dtmfs-to-ivr")
+app.logger.info("Send dtmf, result=%s", result)
``` --
-When your application sends these DTMF tones, you'll receive event updates. You can use the `SendDtmfCompleted` and `SendDtmfFailed` events to create business logic in your application to determine the next steps.
+When your application sends these DTMF tones, you receive event updates. You can use the `SendDtmfTonesCompleted` and `SendDtmfTonesFailed` events to create business logic in your application to determine the next steps.
-Example of *SendDtmfCompleted* event
+Example of *SendDtmfTonesCompleted* event
### [csharp](#tab/csharp) ``` csharp
-if (@event is SendDtmfCompleted completed)
-{
- logger.LogInformation("Send dtmf succeeded: context={context}",
- completed.OperationContext);
-}
+if (acsEvent is SendDtmfTonesCompleted sendDtmfCompleted)
+{
+ logger.LogInformation("Send DTMF succeeded, context={context}", sendDtmfCompleted.OperationContext);
+}
``` ### [Java](#tab/java) ``` java
-if (acsEvent instanceof SendDtmfCompleted toneReceived) {
- SendDtmfCompleted event = (SendDtmfCompleted) acsEvent;
- logger.log(Level.INFO, "Send dtmf succeeded: context=" + event.getOperationContext());
+if (acsEvent instanceof SendDtmfTonesCompleted) {
+ SendDtmfTonesCompleted event = (SendDtmfTonesCompleted) acsEvent;
+ log.info("Send dtmf succeeded: context=" + event.getOperationContext());
+}
+```
+### [JavaScript](#tab/javascript)
+```javascript
+if (event.type === "Microsoft.Communication.SendDtmfTonesCompleted") {
+ console.log("Send dtmf succeeded: context=%s", eventData.operationContext);
} ```
+### [Python](#tab/python)
+```python
+if event.type == "Microsoft.Communication.SendDtmfTonesCompleted":
+    app.logger.info("Send dtmf succeeded: context=%s", event.data['operationContext'])
+```
--
-Example of *SendDtmfFailed*
+Example of *SendDtmfTonesFailed*
### [csharp](#tab/csharp) ```csharp
-if (@event is SendDtmfFailed failed)
-{
- logger.LogInformation("Send dtmf failed: resultInfo={info}, context={context}",
- failed.ResultInformation,
- failed.OperationContext);
-}
+if (acsEvent is SendDtmfTonesFailed sendDtmfFailed)
+{
+ logger.LogInformation("Send dtmf failed: result={result}, context={context}",
+ sendDtmfFailed.ResultInformation?.Message, sendDtmfFailed.OperationContext);
+}
``` ### [Java](#tab/java) ```java
-if (acsEvent instanceof SendDtmfFailed toneReceived) {
- SendDtmfFailed event = (SendDtmfFailed) acsEvent;
- logger.log(Level.INFO, "Send dtmf failed: context=" + event.getOperationContext());
+if (acsEvent instanceof SendDtmfTonesFailed) {
+ SendDtmfTonesFailed event = (SendDtmfTonesFailed) acsEvent;
+ log.info("Send dtmf failed: result=" + event.getResultInformation().getMessage() + ", context="
+ + event.getOperationContext());
+}
+```
+### [JavaScript](#tab/javascript)
+```javascript
+if (event.type === "Microsoft.Communication.SendDtmfTonesFailed") {
+ console.log("sendDtmfTones failed: result=%s, context=%s",
+ eventData.resultInformation.message,
+ eventData.operationContext);
} ```
+### [Python](#tab/python)
+```python
+if event.type == "Microsoft.Communication.SendDtmfTonesFailed":
+ app.logger.info("Send dtmf failed: result=%s, context=%s", event.data['resultInformation']['message'], event.data['operationContext'])
+```
-- ## Continuous DTMF Recognition
-You can subscribe to receive continuous DTMF tones throughout the call, your application receives DTMF tones as soon as the targeted participant presses on a key on their keypad. These tones will be sent to you one by one as the participant is pressing them.
+You can subscribe to receive continuous DTMF tones throughout the call. Your application receives DTMF tones as the targeted participant presses on a key on their keypad. These tones are sent to your application one by one as the participant is pressing them.
### StartContinuousDtmfRecognitionAsync Method Start detecting DTMF tones sent by a participant. ### [csharp](#tab/csharp) ```csharp
-await callAutomationClient.GetCallConnection(callConnectionId)
- .GetCallMedia()
- .StartContinuousDtmfRecognitionAsync(targetParticipant: new PhoneNumberIdentifier(c2Target), operationContext: "dtmf-reco-on-c2");
+await callAutomationClient.GetCallConnection(callConnectionId)
+ .GetCallMedia()
+ .StartContinuousDtmfRecognitionAsync(new PhoneNumberIdentifier(c2Target), "dtmf-reco-on-c2");
``` ### [Java](#tab/java) ```java
-callAutomationClient.getCallConnectionAsync(callConnectionId)
- .getCallMediaAsync()
- .startContinuousDtmfRecognitionWithResponse(new PhoneNumberIdentifier(c2Target), "dtmf-reco-on-c2").block();
+callAutomationClient.getCallConnectionAsync(callConnectionId)
+ .getCallMediaAsync()
+ .startContinuousDtmfRecognitionWithResponse(new PhoneNumberIdentifier(c2Target), "dtmf-reco-on-c2")
+ .block();
+```
+### [JavaScript](#tab/javascript)
+```javascript
+const continuousDtmfRecognitionOptions: ContinuousDtmfRecognitionOptions = {
+ operationContext: "dtmf-reco-on-c2"
+};
+
+await callAutomationClient.getCallConnection(callConnectionId)
+ .getCallMedia()
+ .startContinuousDtmfRecognition({
+ phoneNumber: c2Target
+ }, continuousDtmfRecognitionOptions);
+```
+### [Python](#tab/python)
+```python
+call_automation_client.get_call_connection(
+ call_connection_id
+).start_continuous_dtmf_recognition(
+ target_participant=PhoneNumberIdentifier(c2_target),
+ operation_context="dtmf-reco-on-c2",
+)
+app.logger.info("Started continuous DTMF recognition")
``` --
-When your application no longer wishes to receive DTMF tones from the participant anymore you can use the `StopContinuousDtmfRecognitionAsync` method to let ACS know to stop detecting DTMF tones.
+When your application no longer wishes to receive DTMF tones from the participant, you can use the `StopContinuousDtmfRecognitionAsync` method to let ACS know to stop detecting DTMF tones.
### StopContinuousDtmfRecognitionAsync Stop detecting DTMF tones sent by participant. ### [csharp](#tab/csharp) ```csharp
-await callAutomationClient.GetCallConnection(callConnectionId)
- .GetCallMedia()
- .StopContinuousDtmfRecognitionAsync(targetParticipant: new PhoneNumberIdentifier(c2Target), operationContext: "dtmf-reco-on-c2");
+await callAutomationClient.GetCallConnection(callConnectionId)
+ .GetCallMedia()
+ .StopContinuousDtmfRecognitionAsync(new PhoneNumberIdentifier(c2Target), "dtmf-reco-on-c2");
``` ### [Java](#tab/java) ```java
-callAutomationClient.getCallConnectionAsync(callConnectionId)
- .getCallMediaAsync()
- .stopContinuousDtmfRecognitionWithResponse(new PhoneNumberIdentifier(c2Target), "dtmf-reco-on-c2").block();
+callAutomationClient.getCallConnectionAsync(callConnectionId)
+ .getCallMediaAsync()
+ .stopContinuousDtmfRecognitionWithResponse(new PhoneNumberIdentifier(c2Target), "dtmf-reco-on-c2")
+ .block();
+```
+### [JavaScript](#tab/javascript)
+```javascript
+const continuousDtmfRecognitionOptions: ContinuousDtmfRecognitionOptions = {
+ operationContext: "dtmf-reco-on-c2"
+};
+
+await callAutomationClient.getCallConnection(callConnectionId)
+ .getCallMedia()
+ .stopContinuousDtmfRecognition({
+ phoneNumber: c2Target
+ }, continuousDtmfRecognitionOptions);
+```
+### [Python](#tab/python)
+```python
+call_automation_client.get_call_connection(call_connection_id).stop_continuous_dtmf_recognition(
+ target_participant=PhoneNumberIdentifier(c2_target),
+ operation_context="dtmf-reco-on-c2")
+app.logger.info("Stopped continuous DTMF recognition")
``` --
Your application receives event updates when these actions either succeed or fai
Example of how you can handle a DTMF tone successfully detected. ### [csharp](#tab/csharp) ``` csharp
-if (@event is ContinuousDtmfRecognitionToneReceived toneReceived)
-{
- logger.LogInformation("Tone detected: sequenceId={sequenceId}, tone={tone}, context={context}",
- toneReceived.ToneInfo.SequenceId,
- toneReceived.ToneInfo.Tone,
- toneReceived.OperationContext);
-}
+if (acsEvent is ContinuousDtmfRecognitionToneReceived continuousDtmfRecognitionToneReceived)
+{
+ logger.LogInformation("Tone detected: sequenceId={sequenceId}, tone={tone}, context={context}",
+ continuousDtmfRecognitionToneReceived.ToneInfo.SequenceId,
+ continuousDtmfRecognitionToneReceived.ToneInfo.Tone,
+ continuousDtmfRecognitionToneReceived.OperationContext);
+}
``` ### [Java](#tab/java) ``` java
-if (acsEvent instanceof ContinuousDtmfRecognitionToneReceived) {
- ContinuousDtmfRecognitionToneReceived event = (ContinuousDtmfRecognitionToneReceived) acsEvent;
- logger.log(Level.INFO, "Tone detected: sequenceId=" + event.getToneInfo().getSequenceId()
-+ ", tone=" + event. getToneInfo().getTone()
-+ ", context=" + event.getOperationContext();
+if (acsEvent instanceof ContinuousDtmfRecognitionToneReceived) {
+ ContinuousDtmfRecognitionToneReceived event = (ContinuousDtmfRecognitionToneReceived) acsEvent;
+ log.info("Tone detected: sequenceId=" + event.getToneInfo().getSequenceId()
+ + ", tone=" + event.getToneInfo().getTone().convertToString()
+ + ", context=" + event.getOperationContext());
+}
+```
+### [JavaScript](#tab/javascript)
+```javascript
+if (event.type === "Microsoft.Communication.ContinuousDtmfRecognitionToneReceived") {
+ console.log("Tone detected: sequenceId=%s, tone=%s, context=%s",
+ eventData.toneInfo.sequenceId,
+ eventData.toneInfo.tone,
+ eventData.operationContext);
+ } ```
+### [Python](#tab/python)
+```python
+if event.type == "Microsoft.Communication.ContinuousDtmfRecognitionToneReceived":
+ app.logger.info("Tone detected: sequenceId=%s, tone=%s, context=%s",
+ event.data['toneInfo']['sequenceId'],
+ event.data['toneInfo']['tone'],
+ event.data['operationContext'])
+```
-- ACS provides you with a `SequenceId` as part of the `ContinuousDtmfRecognitionToneReceived` event, which your application can use to reconstruct the order in which the participant entered the DTMF tones.
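For example, a minimal JavaScript sketch (the helper names are illustrative, not part of the Call Automation SDK) that buffers the received events and sorts them by `sequenceId` before acting on the digits:
```javascript
// Minimal sketch (illustrative): rebuild the order of DTMF input from sequenceId.
const receivedTones = [];

function onCallAutomationEvent(event) {
    if (event.type === "Microsoft.Communication.ContinuousDtmfRecognitionToneReceived") {
        const { sequenceId, tone } = event.data.toneInfo;
        receivedTones.push({ sequenceId, tone });
    }
}

// Sort by sequenceId so out-of-order delivery doesn't scramble the digits.
function collectedDigits() {
    return receivedTones
        .sort((a, b) => a.sequenceId - b.sequenceId)
        .map((t) => t.tone)
        .join("");
}
```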
ACS provides you with a `SequenceId` as part of the `ContinuousDtmfRecognitionTo
Example of how you can handle when DTMF tone detection fails. ### [csharp](#tab/csharp) ``` csharp
-if (@event is ContinuousDtmfRecognitionToneFailed toneFailed)
-{
- logger.LogInformation("Tone detection failed: resultInfo={info}, context={context}",
- toneFailed.ResultInformation,
- toneFailed.OperationContext);
-}
+if (acsEvent is ContinuousDtmfRecognitionToneFailed continuousDtmfRecognitionToneFailed)
+{
+ logger.LogInformation("Start continuous DTMF recognition failed, result={result}, context={context}",
+ continuousDtmfRecognitionToneFailed.ResultInformation?.Message,
+ continuousDtmfRecognitionToneFailed.OperationContext);
+}
``` ### [Java](#tab/java) ``` java
-if (acsEvent instanceof ContinuousDtmfRecognitionToneFailed) {
- ContinuousDtmfRecognitionToneFailed event = (ContinuousDtmfRecognitionToneFailed) acsEvent;
- logger.log(Level.INFO, "Tone failed: context=" + event.getOperationContext());
+if (acsEvent instanceof ContinuousDtmfRecognitionToneFailed) {
+ ContinuousDtmfRecognitionToneFailed event = (ContinuousDtmfRecognitionToneFailed) acsEvent;
+ log.info("Tone failed: result="+ event.getResultInformation().getMessage()
+ + ", context=" + event.getOperationContext());
+}
+```
+### [JavaScript](#tab/javascript)
+```javascript
+if (event.type === "Microsoft.Communication.ContinuousDtmfRecognitionToneFailed") {
+ console.log("Tone failed: result=%s, context=%s", eventData.resultInformation.message, eventData.operationContext);
} ```
+### [Python](#tab/python)
+```python
+if event.type == "Microsoft.Communication.ContinuousDtmfRecognitionToneFailed":
+ app.logger.info(
+ "Tone failed: result=%s, context=%s",
+ event.data["resultInformation"]["message"],
+ event.data["operationContext"],
+ )
+```
-- ### ContinuousDtmfRecognitionStopped Event Example of how to handle when continuous DTMF recognition has stopped. This could be because your application invoked the `StopContinuousDtmfRecognitionAsync` method or because the call has ended. ### [csharp](#tab/csharp) ``` csharp
-if (@event is ContinuousDtmfRecognitionStopped stopped)
-{
- logger.LogInformation("Tone detection stopped: context={context}",
- stopped.OperationContext);
-}
+if (acsEvent is ContinuousDtmfRecognitionStopped continuousDtmfRecognitionStopped)
+{
+ logger.LogInformation("Continuous DTMF recognition stopped, context={context}", continuousDtmfRecognitionStopped.OperationContext);
+}
``` ### [Java](#tab/java) ``` java
-if (acsEvent instanceof ContinuousDtmfRecognitionStopped) {
- ContinuousDtmfRecognitionStopped event = (ContinuousDtmfRecognitionStopped) acsEvent;
- logger.log(Level.INFO, "Tone failed: context=" + event.getOperationContext());
+if (acsEvent instanceof ContinuousDtmfRecognitionStopped) {
+ ContinuousDtmfRecognitionStopped event = (ContinuousDtmfRecognitionStopped) acsEvent;
+ log.info("Tone stopped, context=" + event.getOperationContext());
+}
+```
+### [JavaScript](#tab/javascript)
+```javascript
+if (event.type === "Microsoft.Communication.ContinuousDtmfRecognitionStopped") {
+ console.log("Tone stopped: context=%s", eventData.operationContext);
} ```
+### [Python](#tab/python)
+```python
+if event.type == "Microsoft.Communication.ContinuousDtmfRecognitionStopped":
+    app.logger.info("Tone stopped: context=%s", event.data["operationContext"])
+```
--
communication-services Proxy Calling Support Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/proxy-calling-support-tutorial.md
+zone_pivot_groups: acs-plat-web-ios-android-windows
# How to force calling traffic to be proxied across your own server
-In certain situations, it might be useful to have all your client traffic proxied to a server that you can control. When the SDK is initializing, you can provide the details of your servers that you would like the traffic to route to. Once enabled all the media traffic (audio/video/screen sharing) travel through the provided TURN servers instead of the Azure Communication Services defaults. This tutorial guides on how to have WebJS SDK calling traffic be proxied to servers that you control.
+In certain situations, it might be useful to have all your client traffic proxied to a server that you can control. When initializing the SDK, you can provide the details of the servers that you would like the traffic to route to. Once enabled, all media traffic (audio/video/screen sharing) travels through the provided TURN servers instead of the Azure Communication Services defaults. This tutorial guides you through how to have calling traffic proxied to servers that you control.
->[!IMPORTANT]
-> The proxy feature is available starting in the public preview version [1.13.0-beta.4](https://www.npmjs.com/package/@azure/communication-calling/v/1.13.0-beta.4) of the Calling SDK. Please ensure that you use this or a newer SDK when trying to use this feature. This Quickstart uses the Azure Communication Services Calling SDK version greater than `1.13.0`.
-## Proxy calling media traffic
-
-## What is a TURN server?
-Many times, establishing a network connection between two peers isn't straightforward. A direct connection might not work for many reasons: firewalls with strict rules, peers sitting behind a private network, or computers running in a NAT environment. To solve these network connection issues, you can use a TURN server. The term stands for Traversal Using Relays around NAT, and it's a protocol for relaying network traffic. STUN and TURN servers are the relay servers here. Learn more about how ACS [mitigates](../concepts/network-traversal.md) network challenges by utilizing STUN and TURN.
-
-### Provide your TURN server details to the SDK
-To provide the details of your TURN servers, pass them as part of `CallClientOptions` while initializing the `CallClient`. For more information on how to set up a call, see the [Azure Communication Services Web SDK](../quickstarts/voice-video-calling/get-started-with-video-calling.md?pivots=platform-web) Quickstart on how to set up Voice and Video.
-
-```js
-import { CallClient } from '@azure/communication-calling';
-
-const myTurn1 = {
- urls: [
- 'turn:turn.azure.com:3478?transport=udp',
- 'turn:turn1.azure.com:3478?transport=udp',
- ],
- username: 'turnserver1username',
- credential: 'turnserver1credentialorpass'
-};
-
-const myTurn2 = {
- urls: [
- 'turn:20.202.255.255:3478',
- 'turn:20.202.255.255:3478?transport=tcp',
- ],
- username: 'turnserver2username',
- credential: 'turnserver2credentialorpass'
-};
-
-// While you are creating an instance of the CallClient (the entry point of the SDK):
-const callClient = new CallClient({
- networkConfiguration: {
- turn: {
- iceServers: [
- myTurn1,
- myTurn2
- ]
- }
- }
-});
----
-// ...continue normally with your SDK setup and usage.
-```
-
-> [!IMPORTANT]
-> Note that if you have provided your TURN server details while initializing the `CallClient`, all the media traffic will <i>exclusively</i> flow through these TURN servers. Any other ICE candidates that are normally generated when creating a call won't be considered while trying to establish connectivity between peers; that is, only 'relay' candidates are considered. To learn more about the different types of ICE candidates, see [RTCIceCandidate.type](https://developer.mozilla.org/en-US/docs/Web/API/RTCIceCandidate/type).
-
-> [!NOTE]
-> If the '?transport' query parameter isn't present in the TURN URL, or isn't one of these values - 'udp', 'tcp', 'tls' - the default behavior is UDP.
-
-> [!NOTE]
-> If any of the URLs provided are invalid or don't have one of these schemas - 'turn:', 'turns:', 'stun:', the `CallClient` initialization will fail and will throw errors accordingly. The error messages thrown should help you troubleshoot if you run into issues.
-
-The API reference for the `CallClientOptions` object, and the `networkConfiguration` property within it can be found here - [CallClientOptions](/javascript/api/azure-communication-services/@azure/communication-calling/callclientoptions?view=azure-communication-services-js&preserve-view=true).
-
-### Set up a TURN server in Azure
-You can create a Linux virtual machine in the Azure portal using this [guide](/azure/virtual-machines/linux/quick-create-portal?tabs=ubuntu), and deploy a TURN server using [coturn](https://github.com/coturn/coturn), a free and open source implementation of a TURN and STUN server for VoIP and WebRTC.
-
-Once you have setup a TURN server, you can test it using the WebRTC Trickle ICE page - [Trickle ICE](https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/).
-
-## Proxy signaling traffic
-
-To provide the URL of a proxy server, you need to pass it in as part of `CallClientOptions` while initializing the `CallClient`. For more details on how to set up a call, see the [Azure Communication Services Web SDK](../quickstarts/voice-video-calling/get-started-with-video-calling.md?pivots=platform-web) Quickstart on how to set up Voice and Video.
-
-```js
-import { CallClient } from '@azure/communication-calling';
-
-// While you are creating an instance of the CallClient (the entry point of the SDK):
-const callClient = new CallClient({
- networkConfiguration: {
- proxy: {
- url: 'https://myproxyserver.com'
- }
- }
-});
-
-// ...continue normally with your SDK setup and usage.
-```
-
-> [!NOTE]
-> If the proxy URL provided is an invalid URL, the `CallClient` initialization will fail and will throw errors accordingly. The error messages thrown will help you troubleshoot if you run into issues.
-
-The API reference for the `CallClientOptions` object, and the `networkConfiguration` property within it can be found here - [CallClientOptions](/javascript/api/azure-communication-services/@azure/communication-calling/callclientoptions?view=azure-communication-services-js&preserve-view=true).
-
-### Setting up a signaling proxy middleware in Express.js
-
-You can also create a proxy middleware in your Express.js server setup to have all the URLs redirected through it, using the [http-proxy-middleware](https://www.npmjs.com/package/http-proxy-middleware) npm package.
-The `createProxyMiddleware` function from that package should cover what you need for a simple redirect proxy setup. Here's an example usage with some option settings that the SDK needs so that all of our URLs work as expected:
-
-```js
-const proxyRouter = (req) => {
- // Your router function if you don't intend to setup a direct target
-
- // An example:
- if (!req.originalUrl && !req.url) {
- return '';
- }
-
- const incomingUrl = req.originalUrl || req.url;
- if (incomingUrl.includes('/proxy')) {
- return 'https://microsoft.com/forwarder/';
- }
-
- return incomingUrl;
-}
-
-const myProxyMiddleware = createProxyMiddleware({
-   target: 'https://microsoft.com', // This will be ignored if a router function is provided, but createProxyMiddleware still requires this to be passed in (see its official docs on the npm page for the most recent changes)
- router: proxyRouter,
- changeOrigin: true,
- secure: false, // If you have proper SSL setup, set this accordingly
- followRedirects: true,
- ignorePath: true,
- ws: true,
- logLevel: 'debug'
-});
-
-// And finally pass in your proxy middleware to your express app depending on your URL/host setup
-app.use('/proxy', myProxyMiddleware);
-```
-
-> [!Tip]
-> If you are having SSL issues, check out the [cors](https://www.npmjs.com/package/cors) package.
-
-### Setting up a signaling proxy server on Azure
-You can create a Linux virtual machine in the Azure portal and deploy an NGINX server on it using this guide - [Quickstart: Create a Linux virtual machine in the Azure portal](/azure/virtual-machines/linux/quick-create-portal?tabs=ubuntu).
-
-Here's an NGINX config that you could make use of for a quick spin up:
-```
-events {
- multi_accept on;
- worker_connections 65535;
-}
-http {
- map $http_upgrade $connection_upgrade {
- default upgrade;
- '' close;
- }
- map $request_method $access_control_header {
- OPTIONS '*';
- }
- server {
- listen <port_you_want_listen_on> ssl;
- ssl_certificate <path_to_your_ssl_cert>;
- ssl_certificate_key <path_to_your_ssl_key>;
- location ~* ^/(.*?\.(com|net)(?::[\d]+)?)/(.*)$ {
- if ($request_method = 'OPTIONS') {
- add_header Access-Control-Allow-Origin '*' always;
- add_header Access-Control-Allow-Credentials 'true' always;
- add_header Access-Control-Allow-Headers '*' always;
- add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS';
- add_header Access-Control-Max-Age 1728000;
- add_header Content-Type 'text/plain';
- add_header Content-Length 0;
- return 204;
- }
- resolver 1.1.1.1;
- set $ups_host $1;
- set $r_uri $3;
- rewrite ^/.*$ /$r_uri break;
- proxy_set_header Host $ups_host;
- proxy_ssl_server_name on;
- proxy_ssl_protocols TLSv1.2;
- proxy_ssl_ciphers DEFAULT;
- proxy_set_header X-Real-IP $remote_addr;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- proxy_pass_header Authorization;
- proxy_pass_request_headers on;
- proxy_http_version 1.1;
- proxy_set_header Upgrade $http_upgrade;
- proxy_set_header Connection $connection_upgrade;
- proxy_set_header Proxy "";
- proxy_set_header Access-Control-Allow-Origin $access_control_header;
- proxy_pass https://$ups_host;
- proxy_redirect https://$ups_host https://$host/$ups_host;
- proxy_intercept_errors on;
- error_page 301 302 307 = @process_redirect;
- error_page 400 405 = @process_error_response;
- }
- location @process_redirect {
- set $saved_redirect_location '$upstream_http_location';
- resolver 1.1.1.1;
- proxy_pass $saved_redirect_location;
- add_header X-DBUG-MSG "301" always;
- }
- location @process_error_response {
- add_header Access-Control-Allow-Origin * always;
- }
- }
-}
-```
communications-gateway Prepare For Live Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-for-live-traffic.md
In this article, you learn about the steps you and your onboarding team must tak
- You must have [deployed Azure Communications Gateway](deploy.md) using the Microsoft Azure portal. - You must have [chosen some test numbers](prepare-to-deploy.md#prerequisites). - You must have a tenant you can use for testing (representing an enterprise customer), and some users in that tenant to whom you can assign the test numbers.-- You must have access to the:
- - [Operator Connect portal](https://operatorconnect.microsoft.com/).
- - [Teams Admin Center](https://admin.teams.microsoft.com/) for your test tenant.
-- You must be able to manage users in your test tenant.
+ - If you do not already have a suitable test tenant, you can use the [Microsoft 365 Developer Program](https://developer.microsoft.com/microsoft-365/dev-program), which provides E5 licenses.
+ - The test users must be licensed for Teams Phone System and in Teams Only mode.
+- You must have access to the following configuration portals.
+
+ |Configuration portal |Required permissions |
+ |||
+ |[Operator Connect portal](https://operatorconnect.microsoft.com/) | `Admin` role or `PartnerSettings.Read` and `NumberManagement.Write` roles (configured on the Project Synergy enterprise application that you set up when [you prepared to deploy Azure Communications Gateway](prepare-to-deploy.md#1-add-the-project-synergy-application-to-your-azure-tenancy))|
+ |[Teams Admin Center](https://admin.teams.microsoft.com/) for your test tenant |User management|
+ ## Methods
In some parts of this article, the steps you must take depend on whether your de
1. Azure Communications Gateway is preconfigured to support the DigiCert Global Root G2 certificate and the Baltimore CyberTrust Root certificate as root certificate authority (CA) certificates. If the certificate that your network presents to Azure Communications Gateway uses a different root CA certificate, provide your onboarding team with this root CA certificate. 1. The root CA certificate for Azure Communications Gateway's certificate is the DigiCert Global Root G2 certificate. If your network doesn't have this root certificate, download it from https://www.digicert.com/kb/digicert-root-certificates.htm and install it in your network. 1. Configure your infrastructure to meet the call routing requirements described in [Reliability in Azure Communications Gateway](reliability-communications-gateway.md).
-1. Configure your network devices to send and receive SIP traffic from Azure Communications Gateway. You might need to configure SBCs, softswitches and access control lists (ACLs). To find the hostnames to use for SIP traffic:
- 1. Go to the **Overview** page for your Azure Communications Gateway resource.
- 1. In each **Service Location** section, find the **Hostname** field. You need to validate TLS connections against this hostname to ensure secure connections.
+1. Configure your network devices to send and receive SIP traffic from Azure Communications Gateway.
+ * Depending on your network, you might need to configure SBCs, softswitches and access control lists (ACLs).
+ * Your network needs to send SIP traffic to per-region FQDNs for Azure Communications Gateway. To find these FQDNs:
+ 1. Go to the **Overview** page for your Azure Communications Gateway resource.
+ 1. In each **Service Location** section, find the **Hostname** field. You need to validate TLS connections against this hostname to ensure secure connections.
+ * We recommend configuring an SRV lookup for each region, using `_sip._tls.<regional-FQDN-from-portal>`. Replace *`<regional-FQDN-from-portal>`* with the per-region FQDNs that you found in the **Overview** page for your resource. A sketch for verifying the SRV record follows this list.
1. If your Azure Communications Gateway includes integrated MCP, configure the connection to MCP: 1. Go to the **Overview** page for your Azure Communications Gateway resource. 1. In each **Service Location** section, find the **MCP hostname** field.
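For the SRV lookup recommendation above, here's a minimal Node.js sketch (an illustrative assumption, not part of the official guidance) that checks whether a per-region SRV record resolves:
```javascript
// Minimal sketch (assumption): verify that the per-region SRV record resolves,
// using Node.js' built-in dns module.
const { resolveSrv } = require('node:dns/promises');

async function checkSrv(regionalFqdn) {
    const records = await resolveSrv(`_sip._tls.${regionalFqdn}`);
    for (const { name, port, priority, weight } of records) {
        console.log(`target=${name} port=${port} priority=${priority} weight=${weight}`);
    }
}

// Replace with the per-region FQDN from the Overview page of your resource.
checkSrv('<regional-FQDN-from-portal>').catch(console.error);
```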
In some parts of this article, the steps you must take depend on whether your de
Your onboarding team must register the test enterprise tenant that you chose in [Prerequisites](#prerequisites) with Microsoft Teams.
+1. Find your company's "Operator ID" in your [operator configuration in the Operator Connect portal](https://operatorconnect.microsoft.com/operator/configuration).
1. Provide your onboarding contact with: - Your company's name.
- - Your company's ID ("Operator ID").
+ - Your company's Operator ID.
- The ID of the tenant to use for testing.
-2. Wait for your onboarding team to confirm that your test tenant has been registered.
+1. Wait for your onboarding team to confirm that your test tenant has been registered.
## 3. Assign numbers to test users in your tenant
Your onboarding team must register the test enterprise tenant that you chose in
1. Sign in to the [Teams Admin Center](https://admin.teams.microsoft.com/) for your test tenant. 1. Select **Voice** > **Operators**. 1. Select your company in the list of operators, fill in the form and select **Add as my operator**.
-1. In your test tenant, create some test users (if you don't already have suitable users). These users must be licensed for Teams Phone System and in Teams Only mode.
+1. In your test tenant, create some test users (if you don't already have suitable users). License the users for Teams Phone System and place them in Teams Only mode.
1. Configure emergency locations in your test tenant. 1. Upload numbers in the Number Management Portal (if you chose to deploy it as part of Azure Communications Gateway) or the Operator Connect Operator Portal. Use the Calling Profile that you obtained from your onboarding team.
communications-gateway Provision User Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/provision-user-roles.md
Your staff might need different user roles, depending on the tasks they need to
| Deploying Azure Communications Gateway |**Contributor** access to your subscription| | Raising support requests |**Owner**, **Contributor** or **Support Request Contributor** access to your subscription or a custom role with `Microsoft.Support/*` access at the subscription level| |Monitoring logs and metrics | **Reader** access to your subscription|
-|Using the Number Management Portal| [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] permissions for the Project Synergy enterprise application and **Reader** permissions to the Azure portal for your subscription|
+|Using the Number Management Portal| [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] roles for the Project Synergy enterprise application and **Reader** access to the Azure portal for your subscription|
## 2. Configure user roles
You need to use the Azure portal to configure user roles.
1. Read through [Steps to assign an Azure role](../role-based-access-control/role-assignments-steps.md) and ensure that you: - Know who needs access. - Know the appropriate user role or roles to assign them.
- - Are signed in with a user that is assigned a role that has role assignments write permission, such as **Owner** or **User Access Administrator** for the subscription.
-1. If you're managing access to the Number Management Portal, ensure that you're signed in with a user that can change permissions for enterprise applications. For example, you could be a Global Administrator, Cloud Application Administrator or Application Administrator. For more information, see [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md).
+ - Are signed in with a user account with a role that can change role assignments for the subscription, such as **Owner** or **User Access Administrator**.
+1. If you're managing access to the Number Management Portal, ensure that you're signed in with a user that can change roles for enterprise applications. For example, you could be a Global Administrator, Cloud Application Administrator or Application Administrator. For more information, see [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md).
### 2.2 Assign a user role 1. Follow the steps in [Assign a user role using the Azure portal](../role-based-access-control/role-assignments-portal.md) to assign the permissions you determined in [1. Understand the user roles required for Azure Communications Gateway](#1-understand-the-user-roles-required-for-azure-communications-gateway).
-1. If you're managing access to the Number Management Portal, follow [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md) to assign [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] permissions for each user in the Project Synergy application.
+1. If you're managing access to the Number Management Portal, follow [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md) to assign [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] roles for each user in the Project Synergy application.
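For the subscription role assignment in step 1 of this list, a minimal Azure CLI sketch might look like the following; the sign-in name, role, and subscription ID are placeholders rather than values from this article.

```
# Assumes the signed-in user has Owner or User Access Administrator rights on the subscription.
az role assignment create \
  --assignee "staff-member@example.com" \
  --role "Contributor" \
  --scope "/subscriptions/<subscription-id>"
```

Role assignments for the Project Synergy enterprise application are managed separately, as described in the step above.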
## Next steps
container-apps Azure Resource Manager Api Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md
Azure Container Apps deployments are powered by an Azure Resource Manager (ARM)
The latest management API versions for Azure Container Apps are: -- [`2022-10-01`](/rest/api/containerapps/stable/container-apps) (stable)
+- [`2023-05-01`](/rest/api/containerapps/stable/container-apps) (stable)
- [`2023-04-01-preview`](/rest/api/containerapps/preview/container-apps) (preview) To learn more about the differences between API versions, see [Microsoft.App change log](/azure/templates/microsoft.app/change-log/summary).
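As a quick, hedged way to confirm you're calling the newer stable version, the following `az rest` sketch lists container apps in a subscription; the subscription ID is a placeholder.

```
# Lists container apps in the subscription using the 2023-05-01 stable API version.
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.App/containerApps?api-version=2023-05-01"
```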
container-apps Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/disaster-recovery.md
Title: Disaster recovery guidance for Azure Container Apps
+ Title: Reliability in Azure Container Apps
description: Learn how to plan for and recover from disaster recovery scenarios in Azure Container Apps
Previously updated : 1/18/2023 Last updated : 08/10/2023
-# Disaster recovery guidance for Azure Container Apps
+# Reliability in Azure Container Apps
Azure Container Apps uses [availability zones](../availability-zones/az-overview.md#availability-zones) in regions where they're available to provide high-availability protection for your applications and data from data center failures.
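As a hedged sketch only, zone redundancy is typically opted into when the environment is created and requires a virtual network subnet; the names and subnet ID below are placeholders, and the exact flags should be confirmed with `az containerapp env create --help`.

```
# Creates a zone-redundant Container Apps environment joined to an existing subnet.
az containerapp env create \
  --name my-environment \
  --resource-group my-resource-group \
  --location eastus \
  --infrastructure-subnet-resource-id "<subnet-resource-id>" \
  --zone-redundant
```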
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
There are two environments in Container Apps: the Consumption only environment s
| Environment Type | Description | |--|-|
-| Workload profiles environment (preview) | Supports user defined routes (UDR) and egress through NAT Gateway. The minimum required subnet size is /27. <br /> <br /> As workload profiles are currently in preview, the number of supported regions is limited. To learn more, visit the [workload profiles overview](./workload-profiles-overview.md#supported-regions).|
+| Workload profiles environment (preview) | Supports user defined routes (UDR) and egress through NAT Gateway. The minimum required subnet size is /27. <br /> <br /> As workload profiles are currently in preview, the number of supported regions is limited. To learn more, visit the [workload profiles overview](./workload-profiles-overview.md).|
| Consumption only environment | Doesn't support user defined routes (UDR) and egress through NAT Gateway. The minimum required subnet size is /23. | ## Accessibility Levels
container-apps Revisions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions.md
The format of a revision name is:
By default, Container Apps creates a unique revision name with a suffix consisting of a semi-random string of alphanumeric characters. You can customize the name by setting a unique custom revision suffix.
-For example, for a container app named *album-api*, setting the revision suffix name to *first-revision* would create a revision with the name *album-api--first-revision*.
+For example, for a container app named *album-api*, setting the revision suffix name to *first-revision* would create a revision with the name *album-api-first-revision*.
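As an illustrative sketch (the resource group name is a placeholder, and the parameter should be confirmed with `az containerapp update --help`), the suffix can be set from the Azure CLI:

```
# Sets a custom revision suffix for the next revision of the album-api container app.
az containerapp update \
  --name album-api \
  --resource-group album-api-rg \
  --revision-suffix first-revision
```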
A revision suffix name must:
container-apps Workload Profiles Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/workload-profiles-overview.md
For each Dedicated workload profile in your environment, you can:
You can configure each of your apps to run on any of the workload profiles defined in your Container Apps environment. This configuration is ideal for deploying a microservice solution where each app can run on the appropriate compute infrastructure.
-## Supported regions
-
-All regions are supported except for the following regions that are not supported during preview:
-
-- West US 2
-- Central US
-- UAE North
-- Germany West Central
-
## Profile types
There are different types and sizes of workload profiles available by region. By default each Consumption + Dedicated plan structure includes a Consumption profile, but you can also add any of the following profiles:
container-instances Container Instances Container Group Automatic Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-container-group-automatic-ssl.md
read R BLOCK
#### Chrome browser
-Navigate to https://my-app.westeurope.azurecontainer.io and verify the certificate by clicking on the padlock next to the URL.
+Navigate to ``` https://my-app.westeurope.azurecontainer.io ``` and verify the certificate by clicking on the padlock next to the URL.
:::image type="content" source="media/container-instances-container-group-automatic-ssl/url-padlock.png" alt-text="Screenshot highlighting the padlock next to the URL that verifies the certificate.":::
container-registry Container Registry Get Started Docker Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-docker-cli.md
Both commands return `Login Succeeded` once completed.
## Pull a public Nginx image
-First, pull a public Nginx image to your local computer. This example pulls an image from Microsoft Container Registry.
+First, pull a public Nginx image to your local computer. This example pulls the [official Nginx image](https://hub.docker.com/_/nginx/).
```
-docker pull mcr.microsoft.com/oss/nginx/nginx:stable
+docker pull nginx
``` ## Run the container locally
docker pull mcr.microsoft.com/oss/nginx/nginx:stable
Execute the following [docker run](https://docs.docker.com/engine/reference/run/) command to start a local instance of the Nginx container interactively (`-it`) on port 8080. The `--rm` argument specifies that the container should be removed when you stop it. ```
-docker run -it --rm -p 8080:80 mcr.microsoft.com/oss/nginx/nginx:stable
+docker run -it --rm -p 8080:80 nginx
``` Browse to `http://localhost:8080` to view the default web page served by Nginx in the running container. You should see a page similar to the following:
To stop and remove the container, press `Control`+`C`.
Use [docker tag](https://docs.docker.com/engine/reference/commandline/tag/) to create an alias of the image with the fully qualified path to your registry. This example specifies the `samples` namespace to avoid clutter in the root of the registry. ```
-docker tag mcr.microsoft.com/oss/nginx/nginx:stable myregistry.azurecr.io/samples/nginx
+docker tag nginx myregistry.azurecr.io/samples/nginx
``` For more information about tagging with namespaces, see the [Repository namespaces](container-registry-best-practices.md#repository-namespaces) section of [Best practices for Azure Container Registry](container-registry-best-practices.md).
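After tagging, the image is typically pushed to the registry with a command along these lines, using the same sample registry and namespace as above:

```
docker push myregistry.azurecr.io/samples/nginx
```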
container-registry Container Registry Oci Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-oci-artifacts.md
To remove the artifact from your registry, use the `oras manifest delete` comman
<!-- LINKS - external --> [iana-mediatypes]: https://www.rfc-editor.org/rfc/rfc6838
-[oras-install-docs]: https://oras.land/docs/category/cli
-[oras-cli]: https://oras.land/docs/category/cli-reference
-[oras-push-multifiles]: https://oras.land/docs/cli/pushing/#pushing-artifacts-with-multiple-files
+[oras-install-docs]: https://oras.land/docs/installation
+[oras-cli]: https://oras.land/blog/oras-0.15-a-fully-functional-registry-client/
+[oras-push-multifiles]: https://oras.land/docs/how_to_guides/pushing_and_pulling/#pushing-artifacts-with-multiple-files
<!-- LINKS - internal --> [acr-landing]: https://aka.ms/acr [acr-authentication]: ./container-registry-authentication.md?tabs=azure-cli [az-acr-create]: ./container-registry-get-started-azure-cli.md [az-acr-repository-delete]: /cli/azure/acr/repository#az_acr_repository_delete
-[azure-cli-install]: /cli/azure/install-azure-cli
+[azure-cli-install]: /cli/azure/install-azure-cli
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
Otherwise create an x509 self-signed certificate storing it in AKV for remote si
notation verify $IMAGE ``` Upon successful verification of the image using the trust policy, the sha256 digest of the verified image is returned in a successful output message.-
-## Next steps
-
-See [Ratify on Azure: Allow only signed images to be deployed on AKS with Notation and Ratify](https://github.com/deislabs/ratify/blob/main/docs/examples/ratify-verify-azure-cmd.md).
container-registry Quickstart Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/quickstart-client-libraries.md
for (String repositoryName : client.listRepositoryNames()) {
### Currently supported environments -- [LTS versions of Node.js](https://nodejs.org/about/releases/)
+- [LTS versions of Node.js](https://github.com/nodejs/release#release-schedule)
See our [support policy](https://github.com/Azure/azure-sdk-for-js/blob/main/SUPPORT.md) for more details.
cosmos-db Consistency Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/consistency-levels.md
Read consistency applies to a single read operation scoped within a logical part
## Configure the default consistency level
-You can configure the default consistency level on your Azure Cosmos DB account at any time. The default consistency level configured on your account applies to all Azure Cosmos DB databases and containers under that account. All reads and queries issued against a container or a database use the specified consistency level by default. To learn more, see how to [configure the default consistency level](how-to-manage-consistency.md#configure-the-default-consistency-level). You can also override the default consistency level for a specific request, to learn more, see how to [Override the default consistency level](how-to-manage-consistency.md?#override-the-default-consistency-level) article.
-
+You can configure the default consistency level on your Azure Cosmos DB account at any time. The default consistency level configured on your account applies to all Azure Cosmos DB databases and containers under that account. All reads and queries issued against a container or a database use the specified consistency level by default. As you change your account level consistency, ensure you redeploy your applications and make any necessary code modifications to apply these changes. To learn more, see how to [configure the default consistency level](how-to-manage-consistency.md#configure-the-default-consistency-level). You can also override the default consistency level for a specific request; to learn more, see [Override the default consistency level](how-to-manage-consistency.md?#override-the-default-consistency-level).
> [!TIP] > Overriding the default consistency level only applies to reads within the SDK client. An account configured for strong consistency by default will still write and replicate data synchronously to every region in the account. When the SDK client instance or request overrides this with Session or weaker consistency, reads will be performed using a single replica. For more information, see [Consistency levels and throughput](consistency-levels.md#consistency-levels-and-throughput).
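As a hedged example of changing the account-level default (the account and resource group names are placeholders):

```
# Sets the default consistency level for the account to Session.
az cosmosdb update \
  --name my-cosmos-account \
  --resource-group my-resource-group \
  --default-consistency-level Session
```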
After every write operation, the client receives an updated Session Token from t
> [!IMPORTANT] > In Session Consistency, the client's usage of a session token guarantees that data corresponding to an older session will never be read. However, if the client is using an older session token and more recent updates have been made to the database, the more recent version of the data will be returned despite an older session token being used. The Session Token is used as a minimum version barrier but not as a specific (possibly historical) version of the data to be retrieved from the database.
+Session Tokens in Azure Cosmos DB are partition-bound, meaning they are exclusively associated with one partition. In order to ensure you can read your writes, use the session token that was last generated for the relevant item(s).
+ If the client didn't initiate a write to a physical partition, the client doesn't contain a session token in its cache and reads to that physical partition behave as reads with Eventual Consistency. Similarly, if the client is re-created, its cache of session tokens is also re-created. Here too, read operations follow the same behavior as Eventual Consistency until subsequent write operations rebuild the client's cache of session tokens. > [!IMPORTANT]
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
Currently, you can restore an Azure Cosmos DB account (API for NoSQL or MongoDB,
By default, Azure Cosmos DB stores continuous mode backup data in locally redundant storage blobs. For the regions that have zone redundancy configured, the backup is stored in zone-redundant storage blobs. In continuous backup mode, you can't update the backup storage redundancy. ## Different ways to restore
-Continuous backup mode supports two ways to restore deleted containers, databases. Existing restore mechanism restores into a [new account](restore-account-continuous-backup.md) as documented here. Restore into existing account is described [here](restore-account-continuous-backup.md). The choice between two depends on the scenarios and impact. Most of the deleted containers, databases can prefer in-account (existing) account restore to prevent data transfer which is required in case you restored to a new account. For scenarios where you have modified the data accidentally restore into new account is the right thing to do.
+Continuous backup mode supports two ways to restore deleted containers and databases. They can be restored into a [new account](restore-account-continuous-backup.md) as documented here or into an existing account as described [here](restore-account-continuous-backup.md). The choice between the two depends on the scenario and its impact. In most cases it's preferable to restore deleted containers and databases into the existing account, which avoids the data transfer cost incurred when restoring to a new account. For scenarios where you have accidentally modified the data, restoring into a new account could be the preferred option.
## What is restored into a new account?
cosmos-db Change Feed Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-modes.md
Previously updated : 05/09/2023 Last updated : 08/14/2023 # Change feed modes in Azure Cosmos DB [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-Azure Cosmos DB offers two change feed modes. Each mode offers the same core functionality. Differences include the operations that are captured in the feed, the metadata that's available for each change, and the retention period of changes. You can consume the change feed in different modes across multiple applications for the same Azure Cosmos DB container to fit the requirements of each workload.
+Azure Cosmos DB offers two change feed modes. Each mode offers the same core functionality. Differences include the operations that are captured in the feed, the metadata that's available for each change, and the retention period of changes. You can consume the change feed in different modes across multiple applications for the same Azure Cosmos DB container to fit the requirements of each workload. Each individual change feed application can only be configured to read change feed in one mode. Consuming the change feed in one mode doesn't prohibit you from consuming the change feed in another mode in a different application.
> [!NOTE] > Do you have any feedback about change feed modes? We want to hear it! Feel free to share feedback directly with the Azure Cosmos DB engineering team: [cosmoschangefeed@microsoft.com](mailto:cosmoschangefeed@microsoft.com).
The response object is an array of items that represent each change. The array l
* Receiving the previous version of items that have been updated isn't currently available.
-* Accounts that use [private endpoints](../how-to-configure-private-endpoints.md) aren't supported.
- * Accounts that have enabled [merging partitions](../merge.md) aren't supported.
cosmos-db How To Manage Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-consistency.md
This article explains how to manage consistency levels in Azure Cosmos DB. You learn how to configure the default consistency level, override the default consistency, manually manage session tokens, and understand the Probabilistically Bounded Staleness (PBS) metric.
+As you change your account level consistency, ensure you redeploy your applications and make any necessary code modifications to apply these changes.
+ [!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)] ## Configure the default consistency level
In some scenarios you need to manage this Session yourself. Consider a web appli
If you do not flow the Azure Cosmos DB SessionToken across as described above you could end up with inconsistent read results for a period of time.
-To manage session tokens manually, get the session token from the response and set them per request. If you don't need to manage session tokens manually, you don't need to use these samples. The SDK keeps track of session tokens automatically. If you don't set the session token manually, by default, the SDK uses the most recent session token.
+Session Tokens in Azure Cosmos DB are partition-bound, meaning they are exclusively associated with one partition. In order to ensure you can read your writes, use the session token that was last generated for the relevant item(s). To manage session tokens manually, get the session token from the response and set it on each request. If you don't need to manage session tokens manually, you don't need to use these samples. The SDK keeps track of session tokens automatically. If you don't set the session token manually, by default, the SDK uses the most recent session token.
### <a id="utilize-session-tokens-dotnet"></a>.NET SDK
cosmos-db Sdk Java V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-v4.md
Release history is maintained in the azure-sdk-for-java repo, for detailed list
## Recommended version
-It's strongly recommended to use version 4.48.0 and above.
+It's strongly recommended to use version 4.48.1 and above.
## FAQ [!INCLUDE [cosmos-db-sdk-faq](../includes/cosmos-db-sdk-faq.md)]
cosmos-db Sdk Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-nodejs.md
| Samples | [Node.js code samples](samples-nodejs.md) | Getting started tutorial | [Get started with the JavaScript SDK](sql-api-nodejs-get-started.md) | Web app tutorial | [Build a Node.js web application using Azure Cosmos DB](tutorial-nodejs-web-app.md)
-| Current supported Node.js platforms | [LTS versions of Node.js](https://nodejs.org/about/releases/)
+| Current supported Node.js platforms | [LTS versions of Node.js](https://github.com/nodejs/release#release-schedule)
## Release notes
for await(const { result: item } in client.databases.readAll().getAsyncIterator(
### Fixed containers are now partitioned
-The Azure Cosmos DB service now supports partition keys on all containers, including those that were previously created as fixed containers. The v3 SDK updates to the latest API version that implements this change, but it is not breaking. If you do not supply a partition key for operations, we will default to a system key that works with all your existing containers and documents.
+The Azure Cosmos DB service now supports partition keys on all containers, including those that were previously created as fixed containers. The v3 SDK updates to the latest API version that implements this change, but it isn't breaking. If you don't supply a partition key for operations, we'll default to a system key that works with all your existing containers and documents.
### Upsert removed for stored procedures Previously upsert was allowed for non-partitioned collections, but with the API version update, all collections are partitioned so we removed it entirely.
-### Item reads will not throw on 404
+### Item reads won't throw on 404
const container = client.database(dbId).container(containerId)
v2 had custom code to generate item IDs. We have switched to the well known and
#### Connection strings
-It is now possible to pass a connection string copied from the Azure portal:
+It's now possible to pass a connection string copied from the Azure portal:
```javascript const client = new CosmosClient("AccountEndpoint=https://test-account.documents.azure.com:443/;AccountKey=c213asdasdefgdfgrtweaYPpgoeCsHbpRTHhxuMsTaw==;")
Add DISTINCT and LIMIT/OFFSET queries (#306)
### Improved browser experience
-While it was possible to use the v2 SDK in the browser, it was not an ideal experience. You needed to Polyfill several Node.js built-in libraries and use a bundler like webpack or Parcel. The v3 SDK makes the out of the box experience much better for browser users.
+While it was possible to use the v2 SDK in the browser, it wasn't an ideal experience. You needed to Polyfill several Node.js built-in libraries and use a bundler like webpack or Parcel. The v3 SDK makes the out of the box experience much better for browser users.
* Replace request internals with fetch (#245) * Remove usage of Buffer (#330)
Not always the most visible changes, but they help our team ship better code, fa
## Release & Retirement Dates
-Microsoft provides notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version. New features and functionality and optimizations are only added to the current SDK, as such it is recommended that you always upgrade to the latest SDK version as early as possible. Read the [Microsoft Support Policy for SDKs](https://github.com/Azure/azure-sdk-for-js/blob/main/SUPPORT.md#microsoft-support-policy) for more details.
+Microsoft provides notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version. New features and functionality and optimizations are only added to the current SDK, as such it's recommended that you always upgrade to the latest SDK version as early as possible. Read the [Microsoft Support Policy for SDKs](https://github.com/Azure/azure-sdk-for-js/blob/main/SUPPORT.md#microsoft-support-policy) for more details.
| Version | Release Date | Retirement Date | | | | |
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Previously updated : 08/01/2023 Last updated : 08/09/2023 # Product updates for Azure Cosmos DB for PostgreSQL
Updates that don't directly affect the internals of a cluster are rolled out g
Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters. ### August 2023
+* General availability: Citus 12 is now available in [all supported regions](./resources-regions.md) with PostgreSQL 14 and PostgreSQL 15.
+ * Check [what's new in Citus 12](https://www.citusdata.com/updates/v12-0/).
+ * See [Postgres and Citus version in-place upgrade](./concepts-upgrade.md).
* Preview: [Azure Active Directory (Azure AD) authentication](./concepts-authentication.md#azure-active-directory-authentication-preview) is now supported in addition to Postgres roles. * Preview: Azure CLI is now supported for all Azure Cosmos DB for PostgreSQL management operations. * See [details](/cli/azure/cosmosdb/postgres).
cosmos-db Reference Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-extensions.md
Title: Extensions – Azure Cosmos DB for PostgreSQL description: Describes the ability to extend the functionality of your database by using extensions in Azure Cosmos DB for PostgreSQL--++ Previously updated : 02/25/2023 Last updated : 08/09/2023 # PostgreSQL extensions in Azure Cosmos DB for PostgreSQL
The versions of each extension installed in a cluster sometimes differ based on
> [!div class="mx-tableFixed"] > | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | > ||||||
-> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5.12 | 10.2.9 | 11.3.0 | 11.3.0 | 11.3.0 |
+> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5.12 | 10.2.9 | 11.3.0 | 12.0.0 | 12.0.0 |
### Data types extensions
cosmos-db Reference Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-versions.md
Previously updated : 05/15/2023 Last updated : 08/09/2023 # Supported database versions in Azure Cosmos DB for PostgreSQL
PostgreSQL database version:
Depending on which version of PostgreSQL is running in a cluster, different [versions of PostgreSQL extensions](reference-extensions.md)
-will be installed as well. In particular, PostgreSQL 13, PostgreSQL 14, and PostgreSQL 15 come with Citus 11, PostgreSQL 12 comes with
-Citus 10, and earlier PostgreSQL versions come with Citus 9.5.
+will be installed as well. In particular, PostgreSQL 14 and PostgreSQL 15 come with Citus 12, PostgreSQL 13 comes with Citus 11, PostgreSQL 12 comes with Citus 10, and earlier PostgreSQL versions come with Citus 9.5.
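To confirm which Citus version a particular cluster is running, a quick psql check works; the connection string below is a placeholder pattern, not a real cluster.

```
# Returns the Citus extension version installed on the coordinator.
psql "host=c-mycluster.<unique-id>.postgres.cosmos.azure.com port=5432 dbname=citus user=citus sslmode=require" \
  -c "SELECT citus_version();"
```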
## Next steps
cosmos-db Social Media Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/social-media-apps.md
Now that I got you hooked, you'll probably think you need some PhD in math sci
To achieve any of these Machine Learning scenarios, you can use [Azure Data Lake](https://azure.microsoft.com/services/data-lake-store/) to ingest the information from different sources. You can also use [U-SQL](https://azure.microsoft.com/documentation/videos/data-lake-u-sql-query-execution/) to process the information and generate an output that can be processed by Azure Machine Learning.
-Another available option is to use [Azure AI services](https://www.microsoft.com/cognitive-services) to analyze your users content; not only can you understand them better (through analyzing what they write with [Text Analytics API](https://www.microsoft.com/cognitive-services/en-us/text-analytics-api)), but you could also detect unwanted or mature content and act accordingly with [Computer Vision API](https://www.microsoft.com/cognitive-services/en-us/computer-vision-api). Azure AI services includes many out-of-the-box solutions that don't require any kind of Machine Learning knowledge to use.
+Another available option is to use [Azure AI services](https://www.microsoft.com/cognitive-services) to analyze your users' content; not only can you understand them better (through analyzing what they write with [Text Analytics API](https://www.microsoft.com/cognitive-services/en-us/text-analytics-api)), but you could also detect unwanted or mature content and act accordingly with [Computer Vision API](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/). Azure AI services includes many out-of-the-box solutions that don't require any kind of Machine Learning knowledge to use.
## A planet-scale social experience
cost-management-billing Cost Management Billing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/cost-management-billing-overview.md
keywords:
Previously updated : 10/20/2022 Last updated : 08/07/2023
How you organize and allocate costs plays a huge role in how people within your
Cost Management and Billing offer many different types of emails and alerts to keep you informed and help you proactively manage your account and incurred costs. - [**Budget alerts**](./costs/tutorial-acm-create-budgets.md) notify recipients when cost exceeds a predefined cost or forecast amount. Budgets can be visualized in cost analysis and are available on every scope supported by Cost Management. Subscription and resource group budgets can also be configured to notify an action group to take automated actions to reduce or even stop further charges.-- [**Anomaly alerts**](./understand/analyze-unexpected-charges.md)notify recipients when an unexpected change in daily usage has been detected. It can be a spike or a dip. Anomaly detection is only available for subscriptions and can be viewed within the cost analysis preview. Anomaly alerts can be configured from the cost alerts page.
+- [**Anomaly alerts**](./understand/analyze-unexpected-charges.md) notify recipients when an unexpected change in daily usage has been detected. It can be a spike or a dip. Anomaly detection is only available for subscriptions and can be viewed within the cost analysis smart view. Anomaly alerts can be configured from the cost alerts page.
- [**Scheduled alerts**](./costs/save-share-views.md#subscribe-to-scheduled-alerts) notify recipients about the latest costs on a daily, weekly, or monthly schedule based on a saved cost view. Alert emails include a visual chart representation of the view and can optionally include a CSV file. Views are configured in cost analysis, but recipients don't require access to cost in order to view the email, chart, or linked CSV. - **EA commitment balance alerts** are automatically sent to any notification contacts configured on the EA billing account when the balance is 90% or 100% used. - **Invoice alerts** can be configured for MCA billing profiles and Microsoft Online Services Program (MOSP) subscriptions. For details, see [View and download your Azure invoice](./understand/download-azure-invoice.md).
cost-management-billing Allocate Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/allocate-costs.md
Title: Allocate Azure costs
description: This article explains how create cost allocation rules to distribute costs of subscriptions, resource groups, or tags to others. Previously updated : 03/28/2023 Last updated : 08/07/2023
-# Create and manage Azure cost allocation rules (Preview)
+# Create and manage Azure cost allocation rules
Large enterprises often centrally manage Azure services or resources. However, different internal departments or business units use them. Typically, the centrally managing team wants to reallocate the cost of the shared services back out to the internal departments or organizational business units who are actively using the services. This article helps you understand and use cost allocation in Cost Management.
Cost allocation doesn't affect your billing invoice. Billing responsibilities do
Allocated costs appear in cost analysis. They appear as other items associated with the targeted subscriptions, resource groups, or tags that you specify when you create a cost allocation rule.
-> [!NOTE]
-> Cost Management's cost allocation feature is currently in public preview. Some features in Cost Management might not be supported or might have limited capabilities.
- ## Prerequisites - Cost allocation currently only supports customers with:
Allocated costs appear in cost analysis. They appear as other items associated w
1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com/). 2. Navigate to **Cost Management + Billing** > **Cost Management**.
-3. Under **Settings** > **Configuration**, select **Cost allocation (Preview)**.
+3. Under **Settings** > **Configuration**, select **Cost allocation**.
4. Ensure that you select the correct EA enrollment or billing account. 5. Select **+Add**. 6. Enter descriptive text for the cost allocation rule name.
You can edit a cost allocation rule to change the source or the target or if you
Currently, Cost Management supports cost allocation in Cost analysis, budgets, and forecast views. Allocated costs appear in the subscriptions list and on the Subscriptions overview page.
-The following items are currently unsupported by the cost allocation public preview:
+The following items are currently unsupported by cost allocation:
- Billing subscriptions area - [Cost Management Power BI App](https://appsource.microsoft.com/product/power-bi/costmanagement.azurecostmanagementapp)
cost-management-billing Cost Analysis Built In Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-analysis-built-in-views.md
description: This article helps you understand when to use which view, how each one provides unique insights about your costs and recommended next steps to investigate further. Previously updated : 09/09/2022 Last updated : 08/10/2023
Cost Management includes several tools to help you view and monitor your cloud costs. As you get started, cost analysis is the first one you should familiarize yourself with. And within cost analysis, you'll start with built-in views. This article helps you understand when to use which view, how each one provides unique insights about your costs and recommended next steps to investigate further.
-## Access built-in views
-
-When you're in classic Cost analysis, you can access the preview views at the top of the page with the **Cost by resource** list.
-
+<a name="Resources"></a>
+<a name="CostByResource"></a>
## Analyze resource costs Cost Management offers two views to analyze your resource costs: -- **Cost by resource**-- **Resources (preview)**
+- **Cost by resource** (customizable view)
+- **Resources** (smart view)
Both views are only available when you have a subscription or resource group scope selected.
-The classic **Cost by resource** view shows a list of all resources. Information is shown in tabular format.
+The **Cost by resource** customizable view shows a list of all resources. Information is shown in tabular format.
:::image type="content" source="./media/cost-analysis-built-in-views/cost-by-resource.png" alt-text="Screenshot showing an example of the Cost by resource view." lightbox="./media/cost-analysis-built-in-views/cost-by-resource.png" :::
-The preview **Resources** view shows a list of all resources, including deleted resources. The view is like the Cost by resource view in classic cost analysis. Compared to the classic Cost by resource view, the new view:
+The **Resources** smart view shows a list of all resources, including deleted resources. The view is like the Cost by resource view with the following improvements:
-- Has optimized performance and loads resources faster. It better groups together related costs. Azure and Marketplace costs are grouped together.-- Provides improved troubleshooting details.-- Shows grouped Azure and Marketplace costs together per resource.-- Shows resource types with icons.
+- Optimized performance that loads resources faster.
+- Provides smart insights to help you better understand your data, like subscription cost anomalies.
- Includes a simpler custom date range selection with support for relative date ranges. - Allows you to customize the download to exclude nested details. For example, resources without meters in the Resources view.-- Provides smart insights to help you better understand your data, like subscription cost anomalies.
+- Groups Azure and Marketplace costs for a single resource together on a single row.
+- Groups related resources together based on the resource hierarchy in Azure Resource Manager.
+- Groups related resources under their logical parent using the `cm-resource-parent` tag (set the value to the parent resource ID; a tagging sketch follows this list).
+- Shows resource types with icons.
+- Provides improved troubleshooting details to streamline support.
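For the `cm-resource-parent` grouping mentioned in the list above, a hedged CLI sketch of tagging a child resource follows; both resource IDs are placeholders, and tags can take up to 24 hours to appear in cost data.

```
# Merges the cm-resource-parent tag onto the child resource without replacing its other tags.
az tag update --resource-id "<child-resource-id>" \
  --operation Merge \
  --tags cm-resource-parent="<parent-resource-id>"
```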
Use either view to:
Use either view to:
:::image type="content" source="./media/cost-analysis-built-in-views/resources.png" alt-text="Screenshot showing an example of the Resources view." lightbox="./media/cost-analysis-built-in-views/resources.png" :::
+<a name="ResourceGroups"></a>
+ ## Analyze resource group costs The **Resource groups** view separates each resource group in your subscription, management group, or billing account showing nested resources.
Use this view to:
:::image type="content" source="./media/cost-analysis-built-in-views/resource-groups.png" alt-text="Screenshot showing an example of the Resource groups view." lightbox="./media/cost-analysis-built-in-views/resource-groups.png" :::
+<a name="Subscriptions"></a>
+ ## Analyze your subscription costs The **Subscriptions** view is only available when you have a billing account or management group scope selected. The view separates costs by subscription and resource group.
Use this view to:
:::image type="content" source="./media/cost-analysis-built-in-views/subscriptions.png" alt-text="Screenshot showing an example of the Subscriptions view." lightbox="./media/cost-analysis-built-in-views/subscriptions.png" :::
+<a name="Customers"></a>
+
+## Review cost across CSP end customers
+
+The **Customers** view is available for CSP partners when you have a billing account or billing profile scope selected. The view separates costs by customer and subscription.
+
+Use this view to:
+
+- Identify the customers that are incurring the most cost.
+- Identify the subscriptions that are incurring the most cost for a specific customer.
+
+<a name="Reservations"></a>
+ ## Review reservation resource utilization The **Reservations** view provides a breakdown of amortized reservation costs, allowing you to see which resources are consuming each reservation.
Because of the change in how costs are represented, it's important to note that
:::image type="content" source="./media/cost-analysis-built-in-views/reservations.png" alt-text="Screenshot showing an example of the Reservations view." lightbox="./media/cost-analysis-built-in-views/reservations.png" :::
+<a name="Services"></a>
+ ## Break down product and service costs
-The **Services view** shows a list of your services and products. This view is like the Invoice details view in classic cost analysis. The main difference is that rows are grouped by service, making it simpler to see your total cost at a service level. It also separates individual products you're using in each service.
+The **Services view** shows a list of your services and products. This view is like the Invoice details customizable view. The main difference is that rows are grouped by service, making it simpler to see your total cost at a service level. It also separates individual products you're using in each service.
Use this view to:
Use this view to:
:::image type="content" source="./media/cost-analysis-built-in-views/services.png" alt-text="Screenshot showing an example of the Services view." lightbox="./media/cost-analysis-built-in-views/services.png" :::
+<a name="AccumulatedCosts"></a>
+ ## Review current cost trends Use the **Accumulated costs** view to:
Use the **Accumulated costs** view to:
:::image type="content" source="./media/cost-analysis-built-in-views/accumulated-costs.png" alt-text="Screenshot showing an example of the Accumulated Costs view." lightbox="./media/cost-analysis-built-in-views/accumulated-costs.png" :::
+<a name="CostByService"></a>
+ ## Compare monthly service run rate costs Use the **Cost by service** view to:
Use the **Cost by service** view to:
:::image type="content" source="./media/cost-analysis-built-in-views/cost-by-service.png" alt-text="Screenshot showing an example of the Cost by service view." lightbox="./media/cost-analysis-built-in-views/cost-by-service.png" :::
+<a name="InvoiceDetails"></a>
+ ## Reconcile invoiced usage charges Use the **Invoice details** view to:
Use the **Invoice details** view to:
## Next steps - Now that you're familiar with using built-in views, read about [Saving and sharing customized views](save-share-views.md).-- Learn about how to [Customize views in cost analysis](customize-cost-analysis-views.md)
+- Learn about how to [Customize views in cost analysis](customize-cost-analysis-views.md)
cost-management-billing Enable Preview Features Cost Management Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-preview-features-cost-management-labs.md
description: This article explains how to explore preview features and provides a list of the recent previews you might be interested in. Previously updated : 05/25/2023 Last updated : 08/07/2023
It's the same experience as the public portal, except with new improvements and
We encourage you to try out the preview features available in Cost Management Labs and share your feedback. It's your chance to influence the future direction of Cost Management. To provide feedback, use the **Report a bug** link in the Try preview menu. It's a direct way to communicate with the Cost Management engineering team. - <a name="rememberpreviews"></a> ## Remember preview features across sessions Cost Management now remembers preview features across sessions in the preview portal. Select the preview features you're interested in from the **Try preview** menu and you'll see them enabled by default the next time you visit the portal. There's no need to enable the option – preview features are remembered automatically. -
-<a name="totalkpitooltip"></a>
-
-## Total KPI tooltip
-
-View more details about what costs are included and not included in the Cost analysis preview. You can enable this option from the Try Preview menu.
-
-The Total KPI tooltip can be enabled from the [Try preview](https://aka.ms/costmgmt/trypreview) menu in the Azure portal. Use the **How would you rate the cost analysis preview?** option at the bottom of the page to share feedback about the preview.
-- <a name="customersview"></a> Cloud Solution Provider (CSP) partners can view a breakdown of costs by customer and subscription in the Cost analysis preview. Note this view is only available for Microsoft Partner Agreement (MPA) billing accounts and billing profiles. The Customers view can be enabled from the [Try preview](https://aka.ms/costmgmt/trypreview) menu in the Azure portal. Use the **How would you rate the cost analysis preview?** option at the bottom of the page to share feedback about the preview. -
-<a name="anomalyalerts"></a>
-
-## Anomaly detection alerts
-
-Get notified by email when a cost anomaly is detected on your subscription.
-
-Anomaly detection is available for Azure global subscriptions in the cost analysis preview.
-
-Here's an example of a cost anomaly shown in cost analysis:
--
-To configure anomaly alerts:
-
-1. Open the cost analysis preview.
-1. Navigate to **Cost alerts** and select **Add** > **Add Anomaly alert**.
--
-For more information about anomaly detection and how to configure alerts, see [Identify anomalies and unexpected changes in cost](../understand/analyze-unexpected-charges.md).
-
-**Anomaly detection is now available by default in Azure global.**
--
-<a name="homev2"></a>
-
-## Recent and pinned views in the cost analysis preview
-
-Cost analysis is your tool for interactive analytics and insights. You've seen the addition of new views and capabilities, like anomaly detection, in the cost analysis preview. However, classic cost analysis is still the best tool for quick data exploration with simple filtering and grouping. While these capabilities are coming to the preview, we're introducing a new experience that allows you to select which view you want to start with. Whether that is a preview view, a built-in view, or a custom view you created.
-
-The first time you open the cost analysis preview, you see a list of all views. When you return, you see a list of the recently used views to help you get back to where you left off quicker than ever. You can pin any view or even rename or subscribe to alerts for your saved views.
-
-**Recent and pinned views are available by default in the cost analysis preview.** Use the **How would you rate the cost analysis preview?** option at the bottom of the page to share feedback.
-
-<a name="aksnestedtable"></a>
-
-## Grouping SQL databases and elastic pools
-
-Get an at-a-glance view of your total SQL costs by grouping SQL databases and elastic pools. They're shown under their parent server in the cost analysis preview. This feature is enabled by default.
-
-Understanding what you're being charged for can be complicated. The best place to start for many people is the [Resources view](https://aka.ms/costanalysis/resources) in the cost analysis preview. It shows resources that are incurring cost. But even a straightforward list of resources can be hard to follow when a single deployment includes multiple, related resources. To help summarize your resource costs, we're trying to group related resources together. So, we're changing cost analysis to show child resources.
-
-Many Azure services use nested or child resources. SQL servers have databases, storage accounts have containers, and virtual networks have subnets. Most of the child resources are only used to configure services, but sometimes the resources have their own usage and charges. SQL databases are perhaps the most common example.
-
-SQL databases are deployed as part of a SQL server instance, but usage is tracked at the database level. Additionally, you might also have charges on the parent server, like for Microsoft Defender for Cloud. To get the total cost for your SQL deployment in classic cost analysis, you need to manually sum up the cost of the server and each individual database. As an example, you can see the **aepool** elastic pool at the top of the following list and the **treyanalyticsengine** server lower down on the first page. What you don't see is another database even lower in the list. You can imagine how troubling this situation would be when you need the total cost of a large server instance with many databases.
-
-Here's an example showing classic cost analysis where multiple related resource costs aren't grouped.
--
-In the cost analysis preview, the child resources are grouped together under their parent resource. The grouping shows a quick, at-a-glance view of your deployment and its total cost. Using the same subscription, you can now see all three charges grouped together under the server, offering a one-line summary for your total server costs.
-
-Here's an example showing grouped resource costs with the **Grouping SQL databases and elastic pools** preview option enabled.
--
-You might also notice the change in row count. Classic cost analysis shows 53 rows where every resource is broken out on its own. The cost analysis preview only shows 25 rows. The difference is that the individual resources are being grouped together, making it easier to get an at-a-glance cost summary.
-
-In addition to SQL servers, you also see other services with child resources, like App Service, Synapse, and VNet gateways. Each is similarly shown grouped together in the cost analysis preview.
-
-**Grouping SQL databases and elastic pools is available by default in the cost analysis preview.**
--
-<a name="resourceparent"></a>
-
-## Group related resources in the cost analysis preview
-
-Group related resources, like disks under VMs or web apps under App Service plans, by adding a "cm-resource-parent" tag to the child resources with a value of the parent resource ID. Wait 24 hours for tags to be available in usage and your resources are grouped. Leave feedback to let us know how we can improve this experience further for you.
--
-Some resources have related dependencies that aren't explicit children or nested under the logical parent in Azure Resource Manager. Examples include disks used by a virtual machine or web apps assigned to an App Service plan. Unfortunately, Cost Management isn't aware of these relationships and can't group them automatically. This experimental feature uses tags to summarize the total cost of your related resources together. You see a single row with the parent resource. When you expand the parent resource, you see each linked resource listed individually with their respective cost.
-
-As an example, let's say you have an Azure Virtual Desktop host pool configured with two VMs. Tagging the VMs and corresponding network/disk resources groups them under the host pool, giving you the total cost of the session host VMs in your host pool deployment. This example gets even more interesting if you want to also include the cost of any cloud solutions made available via your host pool.
--
-Before you link resources together, think about how you'd like to see them grouped. You can only link a resource to one parent and cost analysis only supports one level of grouping today.
-
-Once you know which resources you'd like to group, use the following steps to tag your resources:
-
-1. Open the resource that you want to be the parent.
-2. Select **Properties** in the resource menu.
-3. Find the **Resource ID** property and copy its value.
-4. Open **All resources** or the resource group that has the resources you want to link.
-5. Select the checkboxes for every resource you want to link and then select the **Assign tags** command.
-6. Specify a tag key of "cm-resource-parent" (make sure it's typed correctly) and paste the resource ID from step 3.
-7. Wait 24 hours for new usage to be sent to Cost Management with the tags. (Keep in mind resources must be actively running with charges for tags to be updated in Cost Management.)
-8. Open the [Resources view](https://aka.ms/costanalysis/resources) in the cost analysis preview.
-
-Wait for the tags to load in the Resources view and you should now see your logical parent resource with its linked children. If you don't see them grouped yet, check the tags on the linked resources to ensure they're set. If not, check again in 24 hours.
-
-**Grouping related resources is available by default in the cost analysis preview.**
-- <a name="chartsfeature"></a>
-## Charts in the cost analysis preview
+## Charts in the Resources view
-Charts in the cost analysis preview include a chart of daily or monthly charges for the specified date range.
+Charts in the Resources view include a chart of daily or monthly charges for the specified date range.
-
-Charts are enabled on the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal. Use the **How would you rate the cost analysis preview?** Option at the bottom of the page to share feedback about the preview.
+Charts are enabled on the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal. Use the **How would you rate cost analysis?** option at the bottom of the page to share feedback.
<a name="cav3forecast"></a>
-## Forecast in the cost analysis preview
-
-Show the forecast for the current period at the top of the cost analysis preview.
+## Forecast in the Resources view
-The Forecast KPI can be enabled from the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal. Use the **How would you rate the cost analysis preview?** option at the bottom of the page to share feedback about the preview.
+Show the forecast for the current period at the top of the Resources view.
+The Forecast KPI can be enabled from the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal. Use the **How would you rate cost analysis?** option at the bottom of the page to share feedback.
<a name="recommendationinsights"></a>
-## Cost savings insights in the cost analysis preview
+## Cost savings insights
Cost insights surface important details about your subscriptions, like potential anomalies or top cost contributors. To support your cost optimization goals, cost insights now include the total cost savings available from Azure Advisor for your subscription.
-**Cost savings insights are available by default for all subscriptions in the cost analysis preview.**
-
+**Cost savings insights are available by default for all subscriptions.**
<a name="resourceessentials"></a>
Cost analysis is available from every management group, subscription, resource g
The view cost link is enabled by default in the [Azure preview portal](https://preview.portal.azure.com). -
-<a name="onlyinconfig"></a>
+<a name="newmenu"></a>
## Streamlined menu
-Cost Management includes a central management screen for all configuration settings. Some of the settings are also available directly from the Cost Management menu currently. Enabling the **Streamlined menu** option removes configuration settings from the menu.
+The Cost Management left navigation menu is organized into related sections for reporting, monitoring, optimization, and configuration settings.
-In the following image, the left menu is classic cost analysis. The right menu is the streamlined menu.
+The following image shows the streamlined menu.
You can enable **Streamlined menu** on the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal. Feel free to [share your feedback](https://feedback.azure.com/d365community/idea/5e0ea52c-1025-ec11-b6e6-000d3a4f07b8). As an experimental feature, we need your feedback to determine whether to release or remove the preview. - <a name="configinmenu"></a> ## Open config items in the menu
You can enable **Open config items in the menu** on the [Try preview](https://ak
[Share your feedback](https://feedback.azure.com/d365community/idea/1403a826-1025-ec11-b6e6-000d3a4f07b8) about the feature. As an experimental feature, we need your feedback to determine whether to release or remove the preview. - <a name="changescope"></a> ## Change scope from menu
It allows changing the scope from the menu for quicker navigation. To enable the
[Share your feedback](https://feedback.azure.com/d365community/idea/e702a826-1025-ec11-b6e6-000d3a4f07b8) about the feature. As an experimental feature, we need your feedback to determine whether to release or remove the preview.
-## Reservation utilization alerts
+## Currency switcher in Cost analysis smart views
-[Azure reservations](../reservations/save-compute-costs-reservations.md) can provide cost savings by committing to one-year or three-year plans. However, reservations can sometimes go unutilized or underutilized, resulting in financial losses. As a [billing account](../reservations/reservation-utilization.md#view-utilization-as-billing-administrator) or [reservation user](../reservations/reservation-utilization.md#view-utilization-in-the-azure-portal-with-azure-rbac-access), you can [review the utilization percentage](../reservations/reservation-utilization.md) of your reservation purchases in the Azure portal, but you might miss out important changes. By enabling reservation utilization alerts, you solve this by receiving email notifications whenever any of your reservations exhibit low utilization. This allows you to take prompt action and optimize your reservation purchases for maximum efficiency.
+<a name="customizev3currency"></a>
-The alert email provides essential information including top unutilized reservations and a hyperlink to the list of reservations. By promptly optimizing your reservation purchases, you can avoid financial losses and ensure that your investments are delivering the expected cost savings. For more information, see [Reservation utilization alerts](reservation-utilization-alerts.md).
+View your non-USD charges in USD or switch between the currencies you have charges in to view the total cost for that currency only. To change currency, select **Customize** at the top of the view and select the currency that you want to apply. Currency selection is only available when you have charges in multiple currencies.
+Enable the currency switcher on the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal. Select **How would you rate cost analysis?** at the bottom of the page to share feedback about the preview.
## How to share feedback
cost-management-billing Enable Tag Inheritance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-tag-inheritance.md
description: This article explains how to group costs using tag inheritance. Previously updated : 04/17/2023 Last updated : 08/04/2023
You can enable the tag inheritance setting in the Azure portal. You apply the se
1. In the Azure portal, search for **Cost Management** and select it (the green hexagon-shaped symbol, *not* Cost Management + Billing). 1. Select a scope.
-1. In the left menu under **Settings**, select **Manage billing account**.
+1. In the left menu under **Settings**, select **Configuration**.
1. Under **Tag inheritance**, select **Edit**. :::image type="content" source="./media/enable-tag-inheritance/edit-tag-inheritance.png" alt-text="Screenshot showing the Edit option for Tag inheritance for an EA billing account." lightbox="./media/enable-tag-inheritance/edit-tag-inheritance.png" :::
-1. In the Tag inheritance (Preview) window, select **Automatically apply subscription and resource group tags to new data**.
+1. In the Tag inheritance window, select **Automatically apply subscription and resource group tags to new data**.
:::image type="content" source="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data.png" alt-text="Screenshot showing the Automatically apply subscription and resource group tags to new data option for a billing account." lightbox="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data.png"::: ### To enable tag inheritance in the Azure portal for an MCA billing profile
You can enable the tag inheritance setting in the Azure portal. You apply the se
1. In the left menu under **Settings**, select **Manage billing profile**. 1. Under **Tag inheritance**, select **Edit**. :::image type="content" source="./media/enable-tag-inheritance/edit-tag-inheritance-billing-profile.png" alt-text="Screenshot showing the Edit option for Tag inheritance for an MCA billing profile." lightbox="./media/enable-tag-inheritance/edit-tag-inheritance-billing-profile.png":::
-1. In the Tag inheritance (Preview) window, select **Automatically apply subscription and resource group tags to new data**.
+1. In the Tag inheritance window, select **Automatically apply subscription and resource group tags to new data**.
:::image type="content" source="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data-billing-profile.png" alt-text="Screenshot showing the Automatically apply subscription and resource group tags to new data option for a billing profile." lightbox="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data-billing-profile.png"::: ### To enable tag inheritance in the Azure portal for a subscription
You can enable the tag inheritance setting in the Azure portal. You apply the se
1. In the left menu under **Settings**, select **Manage subscription**. 1. Under **Tag inheritance**, select **Edit**. :::image type="content" source="./media/enable-tag-inheritance/edit-tag-inheritance-subscription.png" alt-text="Screenshot showing the Edit option for Tag inheritance for a subscription." lightbox="./media/enable-tag-inheritance/edit-tag-inheritance-subscription.png":::
-1. In the Tag inheritance (Preview) window, select **Automatically apply subscription and resource group tags to new data**.
+1. In the Tag inheritance window, select **Automatically apply subscription and resource group tags to new data**.
:::image type="content" source="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data-subscription.png" alt-text="Screenshot showing the Automatically apply subscription and resource group tags to new data option for a subscription." lightbox="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data-subscription.png"::: ## Choose between resource and inherited tags
cost-management-billing Group Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/group-filter.md
description: This article explains how to use group and filter options. Previously updated : 03/06/2023 Last updated : 08/10/2023
Some filters are only available to specific offers. For example, a billing profi
For more information about terms, see [Understand the terms used in the Azure usage and charges file](../understand/understand-usage.md).
+## Grouping SQL databases and elastic pools
+
+Get an at-a-glance view of your total SQL costs by grouping SQL databases and elastic pools. They're shown under their parent server in the Resources view.
+
+Understanding what you're being charged for can be complicated. The best place to start for many people is the [Resources view](https://aka.ms/costanalysis/resources). It shows resources that are incurring cost. But even a straightforward list of resources can be hard to follow when a single deployment includes multiple, related resources. To help summarize your resource costs, we're trying to group related resources together. So, we're changing cost analysis to show child resources.
+
+Many Azure services use nested or child resources. SQL servers have databases, storage accounts have containers, and virtual networks have subnets. Most of the child resources are only used to configure services, but sometimes the resources have their own usage and charges. SQL databases are perhaps the most common example.
+
+SQL databases are deployed as part of a SQL server instance, but usage is tracked at the database level. Additionally, you might also have charges on the parent server, like for Microsoft Defender for Cloud. To get the total cost for your SQL deployment in classic cost analysis, you need to manually sum up the cost of the server and each individual database. As an example, you can see the **treyanalyticsengine / aepool** elastic pool in the following list and the **treyanalyticsengine / coreanalytics** server under it. What you don't see is another database even lower in the list. You can imagine how troubling this situation would be when you need the total cost of a large server instance with many databases.
+
+Here's an example showing the Cost by resource view where multiple related resource costs aren't grouped.
++
+In the Resources view, the child resources are grouped together under their parent resource. The grouping shows a quick, at-a-glance view of your deployment and its total cost. Using the same subscription, you can now see all three charges grouped together under the server, offering a one-line summary for your total server costs.
+
+Here's an example showing grouped resource costs in the Resources view.
++
+You might also notice the change in row count. Classic cost analysis shows 53 rows where every resource is broken out on its own. The Resources view only shows 25 rows. The difference is that the individual resources are being grouped together, making it easier to get an at-a-glance cost summary.
+
+In addition to SQL servers, you also see other services with child resources, like App Service, Synapse, and VNet gateways. Each is similarly shown grouped together in the Resources view.
+
+**Grouping SQL databases and elastic pools is available by default in the Resources view.**
+
+<a name="resourceparent"></a>
+
+## Group related resources in the Resources view
+
+Group related resources, like disks under VMs or web apps under App Service plans, by adding a `cm-resource-parent` tag to the child resources with a value of the parent resource ID. Wait 24 hours for tags to be available in usage and your resources are grouped. Leave feedback to let us know how we can improve this experience further for you.
+
+Some resources have related dependencies that aren't explicit children or nested under the logical parent in Azure Resource Manager. Examples include disks used by a virtual machine or web apps assigned to an App Service plan. Unfortunately, Cost Management isn't aware of these relationships and can't group them automatically. This feature uses tags to summarize the total cost of your related resources together. You see a single row with the parent resource. When you expand the parent resource, you see each linked resource listed individually with their respective cost.
+
+As an example, let's say you have an Azure Virtual Desktop host pool configured with two VMs. Tagging the VMs and corresponding network/disk resources groups them under the host pool, giving you the total cost of the session host VMs in your host pool deployment. This example gets even more interesting if you want to also include the cost of any cloud solutions made available via your host pool.
++
+Before you link resources together, think about how you'd like to see them grouped. You can only link a resource to one parent and cost analysis only supports one level of grouping today.
+
+Once you know which resources you'd like to group, use the following steps to tag your resources:
+
+1. Open the resource that you want to be the parent.
+2. Select **Properties** in the resource menu.
+3. Find the **Resource ID** property and copy its value.
+4. Open **All resources** or the resource group that has the resources you want to link.
+5. Select the checkboxes for every resource you want to link and then select the **Assign tags** command.
+6. Specify a tag key of `cm-resource-parent` (make sure it's typed correctly) and paste the resource ID from step 3.
+7. Wait 24 hours for new usage to be sent to Cost Management with the tags. (Keep in mind resources must be actively running with charges for tags to be updated in Cost Management.)
+8. Open the [Resources view](https://aka.ms/costanalysis/resources).
+
+Wait for the tags to load in the Resources view and you should now see your logical parent resource with its linked children. If you don't see them grouped yet, check the tags on the linked resources to ensure they're set. If not, check again in 24 hours.
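If you prefer to script the tagging instead of stepping through the portal, the same `cm-resource-parent` tag can be merged onto each child resource with the Azure Resource Manager Tags API. The following is a minimal sketch, assuming the `azure-identity` and `requests` Python packages and purely illustrative resource IDs; confirm the API version against the Tags REST reference before relying on it.

```python
# Minimal sketch: merge a cm-resource-parent tag onto a child resource so Cost Management
# groups it under its parent in the Resources view. Both resource IDs are hypothetical.
import requests
from azure.identity import DefaultAzureCredential

PARENT_ID = ("/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demo-rg"
             "/providers/Microsoft.DesktopVirtualization/hostPools/demo-pool")   # hypothetical parent
CHILD_ID = ("/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demo-rg"
            "/providers/Microsoft.Compute/virtualMachines/demo-vm-0")            # hypothetical child

# Acquire an ARM token for whatever identity is signed in (CLI login, managed identity, and so on).
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# Tags - Update At Scope; the "Merge" operation keeps existing tags and adds or updates this one.
url = f"https://management.azure.com{CHILD_ID}/providers/Microsoft.Resources/tags/default"
body = {"operation": "Merge", "properties": {"tags": {"cm-resource-parent": PARENT_ID}}}

resp = requests.patch(url, params={"api-version": "2021-04-01"}, json=body,
                      headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json()["properties"]["tags"])
```

As with the portal steps, the tag only affects cost data going forward, so allow up to 24 hours before the grouping appears in the Resources view.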
+
+**Grouping related resources is available by default in the Resources view.**
++ ## Publisher Type value changes In Cost Management, the `PublisherType` field indicates whether charges are for Microsoft, Marketplace, or AWS (if you have a [Cross Cloud connector](aws-integration-set-up-configure.md) configured) products.
cost-management-billing Overview Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/overview-cost-management.md
keywords:
Previously updated : 06/12/2023 Last updated : 08/07/2023
Once your resources and subscriptions are organized using the subscription hiera
How you organize and allocate costs plays a huge role in how people within your organization can manage and optimize costs. Be sure to plan ahead and revisit your allocation strategy yearly. - ## Monitor costs with alerts Cost Management and Billing offer many different types of emails and alerts to keep you informed and help you proactively manage your account and incurred costs. - [**Budget alerts**](tutorial-acm-create-budgets.md) notify recipients when cost exceeds a predefined cost or forecast amount. Budgets can be visualized in cost analysis and are available on every scope supported by Cost Management. Subscription and resource group budgets can also be configured to notify an action group to take automated actions to reduce or even stop further charges.-- [**Anomaly alerts**](../understand/analyze-unexpected-charges.md)notify recipients when an unexpected change in daily usage has been detected. It can be a spike or a dip. Anomaly detection is only available for subscriptions and can be viewed within the cost analysis preview. Anomaly alerts can be configured from the cost alerts page.
+- [**Anomaly alerts**](../understand/analyze-unexpected-charges.md) notify recipients when an unexpected change in daily usage has been detected. It can be a spike or a dip. Anomaly detection is only available for subscriptions and can be viewed within Cost analysis smart views. Anomaly alerts can be configured from the cost alerts page.
- [**Scheduled alerts**](save-share-views.md#subscribe-to-scheduled-alerts) notify recipients about the latest costs on a daily, weekly, or monthly schedule based on a saved cost view. Alert emails include a visual chart representation of the view and can optionally include a CSV file. Views are configured in cost analysis, but recipients don't require access to cost in order to view the email, chart, or linked CSV. - **EA commitment balance alerts** are automatically sent to any notification contacts configured on the EA billing account when the balance is 90% or 100% used. - **Invoice alerts** can be configured for MCA billing profiles and Microsoft Online Services Program (MOSP) subscriptions. For details, see [View and download your Azure invoice](../understand/download-azure-invoice.md).
cost-management-billing Quick Acm Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-acm-cost-analysis.md
Title: Quickstart - Start using Cost analysis
description: This quickstart helps you use cost analysis to explore and analyze your Azure organizational costs. Previously updated : 03/10/2023 Last updated : 08/10/2023
Cost analysis is your tool for interactive analytics and insights. It should be
- Amortized reservation usage. - Cost trends over time.
-Depending on how you access Cost analysis, you may see two options. If available, we recommend starting with **Cost analysis (preview)** since you can access all views from one central page.
- The first time you open Cost analysis, you start with either a list of available cost views or a customizable area chart. This section walks through the list of views. If Cost analysis shows an area chart by default, see [Analyze costs with customizable views](#analyze-costs-with-customizable-views). Cost analysis has two types of views: **smart views** that offer intelligent insights and more details by default and **customizable views** you can edit, save, and share to meet your needs. Smart views open in tabs in Cost analysis. To open a second view, select the **+** symbol to the right of the list of tabs. You can open up to five tabs at one time. Customizable views open outside of the tabs in the custom view editor.
If showing three months or less, the Average cost KPI compares the cost from the
We recommend checking your cost weekly to ensure each KPI remains within the expected range. If you recently deployed or changed resources, we recommend checking daily for the first week or two to monitor the cost changes. > [!NOTE]
-> If you want to monitor your forecasted cost, you can enable the [Forecast KPI preview feature](enable-preview-features-cost-management-labs.md#forecast-in-the-cost-analysis-preview) in Cost Management Labs, available from the **Try preview** command.
+> If you want to monitor your forecasted cost, you can enable the [Forecast KPI preview feature](enable-preview-features-cost-management-labs.md#forecast-in-the-resources-view) in Cost Management Labs, available from the **Try preview** command.
If you don't have a budget, select the **create** link in the **Budget** KPI and specify the amount you expect to stay under each month. To create a quarterly or yearly budget, select the **Configure advanced settings** link.
cost-management-billing Reservation Utilization Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/reservation-utilization-alerts.md
Title: Reservation utilization alerts - Preview
+ Title: Reservation utilization alerts
description: This article helps you set up and use reservation utilization alerts. Previously updated : 05/17/2023 Last updated : 08/10/2023
-# Reservation utilization alerts - Preview
+# Reservation utilization alerts
This article helps you set up and use reservation utilization alerts. The alerts are email notifications that you receive when reservations have low utilization. [Azure reservations](../reservations/save-compute-costs-reservations.md) can provide cost savings by committing to one-year or three-year plans. However, it's possible for reservations to go unutilized or underutilized, resulting in financial losses. If you have [Azure RBAC](../reservations/reservation-utilization.md#view-utilization-in-the-azure-portal-with-azure-rbac-access) permissions on the reservations or if you're a [billing administrator](../reservations/reservation-utilization.md#view-utilization-as-billing-administrator), you can [review](../reservations/reservation-utilization.md) the utilization percentage of your reservation purchases in the Azure portal. With reservation utilization alerts, you can promptly take remedial actions to ensure optimal utilization of your reservation purchases.
For more information, see [scopes and roles](understand-work-scopes.md).
## Manage an alert rule
->[!NOTE]
-> During the preview, enable the feature in [cost management labs](https://azure.microsoft.com/blog/azure-cost-management-updates-july-2019#labs). Select **Reservation utilization** alert. For more information, see [Explore preview features](enable-preview-features-cost-management-labs.md#explore-preview-features).
- To create a reservation utilization alert rule: 1. Sign into the Azure portal at <https://portal.azure.com>
cost-management-billing Save Share Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/save-share-views.md
description: This article explains how to save and share a customized view with others. Previously updated : 06/28/2023 Last updated : 08/07/2023
When you want to share information with others that don't have access to the sco
:::image type="content" source="./media/save-share-views/download.png" alt-text="Screen shot showing the Download page." lightbox="./media/save-share-views/download.png" :::
-When you download data, cost analysis includes summarized data as it's shown in the table. The cost by resource view includes all resource meters in addition to the resource details. If you want a download of only resources and not the nested meters, use the cost analysis preview. You can access the preview from the **Cost by resource** menu at the top of the page, where you can select the Resources, Resource groups, Subscriptions, Services, or Reservations view.
+When you download data, cost analysis includes summarized data as it's shown in the table. The cost by resource view includes all resource meters in addition to the resource details. If you want a download of only resources and not the nested meters, use the Resources smart view. You can access the Resources view from the **Cost by resource** menu at the top of the page, where you can select the Resources, Resource groups, Subscriptions, Services, or Reservations view.
If you need more advanced summaries or you're interested in raw data that hasn't been summarized, schedule an export to publish raw data to a storage account on a recurring basis.
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md
description: This article shows you how you can create and manage exported Cost Management data so that you can use it in external systems. Previously updated : 08/08/2023 Last updated : 08/14/2023
Each export creates a new file, so older exports aren't overwritten.
#### Create an export for multiple subscriptions
-If you have an Enterprise Agreement, then you can use a management group to aggregate subscription cost information in a single container. Then you can export cost management data for the management group. When you create an export in the Azure portal, select the **Actual Costs** option. When you create a management group export using the API, create a *usage export*. Currently, exports at the management group scope only support usage charges. Purchases including reservations and savings plans aren't present in your exports file.
+If you have an Enterprise Agreement, then you can use a management group to aggregate subscription cost information in a single container. Then you can export cost management data for the management group. When you create an export in the Azure portal, select the **Actual Costs** option. When you create a management group export using the API, create a *usage export*.
+
+Currently, exports at the management group scope only support usage charges. Purchases including reservations and savings plans aren't present in your exports file.
Exports for management groups of other subscription types aren't supported.
+Multiple currencies are not supported in management group exports.
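Management group exports can also be created through the Cost Management Exports REST API instead of the portal; as noted above, the definition at this scope must be a usage export. The sketch below is an assumption-laden example using the `azure-identity` and `requests` Python packages with illustrative names; the exact payload fields and supported API version should be confirmed against the Exports REST reference.

```python
# Rough sketch: create a daily *usage* export at management group scope with the
# Cost Management Exports API. IDs, names, and the api-version are illustrative only.
import requests
from azure.identity import DefaultAzureCredential

MANAGEMENT_GROUP = "contoso-mg"                       # hypothetical management group ID
STORAGE_ACCOUNT_ID = ("/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demo-rg"
                      "/providers/Microsoft.Storage/storageAccounts/demoexports")   # hypothetical

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
scope = f"providers/Microsoft.Management/managementGroups/{MANAGEMENT_GROUP}"
url = f"https://management.azure.com/{scope}/providers/Microsoft.CostManagement/exports/mg-usage-daily"

body = {
    "properties": {
        # Management group scope supports usage charges only, per the note above.
        "definition": {"type": "Usage", "timeframe": "MonthToDate"},
        "deliveryInfo": {"destination": {
            "resourceId": STORAGE_ACCOUNT_ID,
            "container": "exports",
            "rootFolderPath": "management-group",
        }},
        "format": "Csv",
        "schedule": {
            "status": "Active",
            "recurrence": "Daily",
            "recurrencePeriod": {"from": "2023-09-01T00:00:00Z", "to": "2024-09-01T00:00:00Z"},
        },
    }
}

resp = requests.put(url, params={"api-version": "2023-03-01"}, json=body,
                    headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json()["name"])
```

Each scheduled run then lands a CSV file in the configured container, the same as an export created in the portal.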
+ 1. If you haven't already created a management group, create one group and assign subscriptions to it. 1. In cost analysis, set the scope to your management group and select **Select this management group**. :::image type="content" source="./media/tutorial-export-acm-data/management-group-scope.png" alt-text="Example showing the Select this management group option" lightbox="./media/tutorial-export-acm-data/management-group-scope.png":::
Select an export to view the run history.
If you've created a daily export, you have two runs per day for the first five days of each month. One run executes and creates a file with the current month's cost data. It's the run that's available for you to see in the run history. A second run also executes to create a file with all the costs from the prior month. The second run isn't currently visible in the run history. Azure executes the second run to ensure that your latest file for the past month contains all charges exactly as seen on your invoice. It runs because there are cases where latent usage and charges are included in the invoice up to 72 hours after the calendar month has closed. To learn more about Cost Management usage data updates, see [Cost and usage data updates and retention](understand-cost-mgt-data.md#cost-and-usage-data-updates-and-retention).
+>[!NOTE]
+ > A daily export created between the 1st and the 5th of the current month doesn't generate data for the previous month because the export schedule starts from the date of creation.
+ ## Access exported data from other systems One of the purposes of exporting your Cost Management data is to access the data from external systems. You might use a dashboard system or other financial system. Such systems vary widely so showing an example wouldn't be practical. However, you can get started with accessing your data from your applications at [Introduction to Azure Storage](../../storage/common/storage-introduction.md).
cost-management-billing Capabilities Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-allocation.md
Cost allocation is usually an afterthought and requires some level of cleanup wh
- Enable [tag inheritance in Cost Management](../costs/enable-tag-inheritance.md) to copy subscription and resource group tags in cost data only. It doesn't change tags on your resources. - Use Azure Policy to [enforce your tagging strategy](../../azure-resource-manager/management/tag-policies.md), automate the application of tags at scale, and track compliance status. Use compliance as a KPI for your tagging strategy. - If you need to move costs between subscriptions, resource groups, or add or change tags, [configure allocation rules in Cost Management](../costs/allocate-costs.md). Cost allocation is covered in detail at [Managing shared costs](capabilities-shared-cost.md).
- - Consider [grouping related resources together with the "cm-resource-parent" tag](../costs/enable-preview-features-cost-management-labs.md#group-related-resources-in-the-cost-analysis-preview) to view costs together in Cost analysis.
+ - Consider [grouping related resources together with the "cm-resource-parent" tag](../costs/group-filter.md#group-related-resources-in-the-resources-view) to view costs together in Cost analysis.
- Distribute responsibility for any remaining change to scale out and drive efficiencies. - Make note of any unallocated costs or costs that should be split but couldn't be. You cover it as part of [Managing shared costs](capabilities-shared-cost.md).
cost-management-billing Link Partner Id Power Apps Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/link-partner-id-power-apps-accounts.md
Previously updated : 12/06/2022 Last updated : 08/10/2023 ms.devlang: azurecli
ms.devlang: azurecli
Microsoft partners who are Power Platform and Dynamics 365 Customer Insights service providers work with their customers to manage, configure, and support Power Platform and Customer Insights resources. To get credit for the services, you can associate your partner network ID with the Azure credential used for service delivery that's in your customers' production environments using the Partner Admin Link (PAL).
-PAL allows Microsoft to identify and recognize partners that have Power Platform and Customer Insights customers. Microsoft attributes usage to a partner's organization based on the account's permissions (user role) and scope (tenant, resource, and so on). The attribution is used for Specializations, such as the [Microsoft Low Code Advanced Specializations](https://partner.microsoft.com/membership/advanced-specialization#tab-content-2), and [Partner Incentives](https://partner.microsoft.com/asset/collection/microsoft-commerce-incentive-resources#/).
+PAL allows Microsoft to identify and recognize partners that have Power Platform and Customer Insights customers. Microsoft attributes usage to a partner's organization based on the account's permissions (user role) and scope (tenant, resource, and so on). The attribution is used for Specializations, including:
+
+- [Microsoft Low Code Application Development Specialization](https://partner.microsoft.com/partnership/specialization/low-code-application-development)
+- [Microsoft Intelligent Automation Specialization](https://partner.microsoft.com/partnership/specialization/intelligent-automation)
+- [Partner Incentives](https://partner.microsoft.com/asset/collection/microsoft-commerce-incentive-resources#/)
The following sections explain how to:
The following sections explain how to:
2. **Registration** - link your access account to your partner ID 3. **Attribution** - attribute your service account to the Power Platform & Dynamics Customer Insights resources using Solutions
-We recommend taking these actions in the sequence above.
+We recommend taking these actions in the preceding order.
-The attribution step is critical and typically happens automatically, as the partner user is the one creating, editing, and updating the resource (i.e. the Power App application, the Power Automate flow, etc.). To ensure success, we strongly recommend that you use Solutions where available to import your deliverables into the customers Production Environment via a Managed Solution. When you use Solutions, the account used to import the Solution becomes the owner of each deliverable inside the Solution. Linking the account to your partner ID ensures all deliverables inside the Solution are associated to your partner ID, automatically handling step #3 above.
+The attribution step is critical and typically happens automatically, as the partner user is the one creating, editing, and updating the resource (for example, the Power App application or the Power Automate flow). To ensure success, we strongly recommend that you use Solutions where available to import your deliverables into the customer's production environment via a Managed Solution. When you use Solutions, the account used to import the Solution becomes the owner of each deliverable inside the Solution. Linking the account to your partner ID ensures all deliverables inside the Solution are associated with your partner ID, automatically handling the preceding step #3.
> [!NOTE]
-> Solutions are not available for Power BI and Customer Insights. See detailed sections below.
+> Solutions are not available for Power BI and Customer Insights. See the following detailed sections.
## Initiation - get service account from your customer
For more information about using PowerShell or the Azure CLI, see sections under
To count the usage of a specific resource, the partner service account needs to be attributed to the *resource* for Power Platform or Dynamics Customer Insights.
-To ensure success, we strongly recommend that you use [Solutions](/power-apps/maker/data-platform/solutions-overview) where available to import your deliverables into the customers Production Environment via a Managed Solution. Use the Service account to install these Solutions into production environments. The last account with a PAL Association to import the solution will assume ownership of all objects inside the Solution and receive the usage credit.
+To ensure success, we strongly recommend that you use [Solutions](/power-apps/maker/data-platform/solutions-overview) where available to import your deliverables into the customer's production environment via a Managed Solution. Use the Service account to install these Solutions into production environments. The last account with a PAL Association to import the solution assumes ownership of all objects inside the Solution and receives the usage credit.
[Attributing the account to Power Platform & Customer Insights resources using Solutions](https://aka.ms/AttributetoResources)
-The resource and attribute user logic differ for every product and are detailed below.
+The resource and attribute user logic differ for every product.
| Product | Primary Metric | Resource | Attributed User Logic | |||||
The resource and attribute user logic differ for every product and are detailed
## Validation
-The operation of a PAL association is a Boolean operation. Once performed it can be verified visually in the Azure portal or with a PowerShell Command. Either option will show your organization name and Partner ID to represent the account and partner ID were correctly connected.
--
+The operation of a PAL association is a Boolean operation. Once performed, it can be verified visually in the Azure portal or with a PowerShell command. Either option shows your organization name and Partner ID, confirming that the account and partner ID were correctly connected.
## Alternate approaches
-The following sections are alternate approaches that you can use to leverage PAL for Power Platform and Customer Insights.
+The following sections are alternate approaches to use PAL for Power Platform and Customer Insights.
-### Associate PAL with user accounts
+### Associate PAL with user accounts
-The Attribution step can also be completed with **user accounts**. While we are including this as an option, there are some downsides to this approach. For partners with a large number of users, it will require management of user accounts when users are new to the team and/or resign from the team. If you choose to associate PAL in this way, you will need to manage the users via a spreadsheet.
+The Attribution step can also be completed with **user accounts**. Although it's an option, there are some downsides to the approach. For partners with a large number of users, it requires management of user accounts when users are new to the team and/or resign from the team. If you choose to associate PAL in this way, you need to manage the users via a spreadsheet.
-To Associate PAL with User Accounts, follow the same steps as with Service Accounts but do so for each user.
+To Associate PAL with User Accounts, follow the same steps as with Service Accounts but do so for each user.
Other points about products:
Other points about products:
### Tooling to update or change attributed users
-The following table shows the tooling compatibility to change the owner or co-owner, as described above, **user accounts or dedicated service accounts** after the application has been created.
+The following table shows the tooling compatibility to change the owner or co-owner, as described previously, **user accounts or dedicated service accounts** after the application has been created.
| Product | GUI | PowerShell | PP CLI | DevOps + Build Tools | | | | | | |
cost-management-billing View Amortized Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/view-amortized-costs.md
Previously updated : 06/30/2023 Last updated : 08/07/2023
To charge back or show back costs for a benefit, you need to know which resource
8. To see the cost more easily for individual resources, select **Table** in the chart list. Expand items as needed. Here's an example for November 2019 showing the amortized reservation costs for the eight resources that used the reservation. The highlighted cost is the unused portion of the reservation. :::image type="content" source="./media/view-amortized-costs/reservation-cost-resource-table.png" alt-text="Screenshot showing the amortized cost of all resources that used a reservation for a specific month." lightbox="./media/view-amortized-costs/reservation-cost-resource-table.png" :::
-Another easy way to view reservation amortized cost is to use the **Reservations** preview view. To easily navigate to it, in Cost analysis in the top menu under **Cost by resource**, select the **Reservations (preview)** view.
+Another easy way to view reservation amortized cost is to use the **Reservations** view. To easily navigate to it, in Cost analysis in the top menu select **Views**, and then select the **Reservations** smart view.
## Next steps
cost-management-billing Analyze Unexpected Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/analyze-unexpected-charges.md
Previously updated : 07/07/2023 Last updated : 08/09/2023
Whether you know if you have any existing cost anomalies or not, Cost analysis i
### View anomalies in Cost analysis
-Anomaly detection is available in Cost analysis (preview) when you select a subscription scope. You can view your anomaly status as part of **[Insights](https://azure.microsoft.com/blog/azure-cost-management-and-billing-updates-february-2021/#insights)**.
+Anomaly detection is available in Cost analysis smart views when you select a subscription scope. You can view your anomaly status as part of **[Insights](https://azure.microsoft.com/blog/azure-cost-management-and-billing-updates-february-2021/#insights)**.
-In the Azure portal, navigate to Cost Management from Azure Home. Select a subscription scope and then in the left menu, select **Cost analysis**. In the view list, select any view under **Preview views**. In the following example, the **Resources** preview view is selected. If you have a cost anomaly, you see an insight.
+In the Azure portal, navigate to Cost Management from Azure Home. Select a subscription scope and then in the left menu, select **Cost analysis**. In the view list, select any view under **Smart views**. In the following example, the **Resources** smart view is selected. If you have a cost anomaly, you see an insight.
:::image type="content" source="./media/analyze-unexpected-charges/insight-recommendation-01.png" alt-text="Example screenshot showing an insight." lightbox="./media/analyze-unexpected-charges/insight-recommendation-01.png" :::
If you don't have any anomalies, you see a **No anomalies detected** insight, co
:::image type="content" source="./media/analyze-unexpected-charges/insight-no-anomalies.png" alt-text="Example screenshot showing No anomalies detected message." lightbox="./media/analyze-unexpected-charges/insight-no-anomalies.png" :::
-Anomalies in Cost analysis identify the detection date and continue to display up to 60 days. If the anomaly is still active, it's updated daily. If the anomaly is no longer active, it's removed from the list after 60 days.
- ### Drill into anomaly details To drill into the underlying data for something that has changed, select the insight link. It opens a view in classic cost analysis where you can review your daily usage by resource group for the time range that was evaluated.
Cost anomalies are evaluated for subscriptions daily and compare the day's total
The anomaly detection model is a univariate time-series, unsupervised prediction and reconstruction-based model that uses 60 days of historical usage for training, then forecasts expected usage for the day. Anomaly detection forecasting uses a deep learning algorithm called [WaveNet](https://www.deepmind.com/blog/wavenet-a-generative-model-for-raw-audio). It's different than the Cost Management forecast. The total normalized usage is determined to be anomalous if it falls outside the expected range based on a predetermined confidence interval.
-Anomaly detection is available to every subscription monitored using the cost analysis preview. To enable anomaly detection for your subscriptions, open the cost analysis preview and select your subscription from the scope selector at the top of the page. You see a notification informing you that your subscription is onboarded and you start to see your anomaly detection status within 24 hours.
+Anomaly detection is available to every subscription monitored using Cost analysis smart views. To enable anomaly detection for your subscriptions, open a Cost analysis smart view and select your subscription from the scope selector at the top of the page. You see a notification informing you that your subscription is onboarded, and you start to see your anomaly detection status within 24 hours.
## Create an anomaly alert You can create an alert to automatically get notified when an anomaly is detected. Creating an anomaly alert requires the Cost Management Contributor or greater role or the `Microsoft.CostManagement/scheduledActions/write` permission for custom roles. For more information, see [Feature behavior for each role](../costs/understand-work-scopes.md#feature-behavior-for-each-role).
+>[!NOTE]
+> Anomaly alerts are sent based on the current access of the rule creator at the time that the email is sent. If your organization has a policy that prohibits permanently assigning higher privileges to users, you can use a service principal and create the alert directly using the [Scheduled Actions API](/rest/api/cost-management/scheduled-actions/create-or-update-by-scope#createorupdateinsightalertscheduledactionbyscope).
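For the service-principal path mentioned in the note, a scheduled action of kind `InsightAlert` can be created directly against the subscription scope. The following is only a rough sketch assuming the `azure-identity` and `requests` Python packages; the required body fields, the built-in anomaly view referenced in `viewId`, and the api-version are assumptions that should be verified against the linked Scheduled Actions API reference rather than taken from this example.

```python
# Rough sketch: create an anomaly (InsightAlert) scheduled action with a service principal.
# Field names, the built-in view ID, and the api-version are assumptions to verify against
# the Scheduled Actions REST reference linked in the note above.
import requests
from azure.identity import ClientSecretCredential

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"        # hypothetical subscription
credential = ClientSecretCredential(tenant_id="<tenant-id>",
                                    client_id="<app-id>",
                                    client_secret="<secret>")   # service principal, as in the note
token = credential.get_token("https://management.azure.com/.default").token

scope = f"subscriptions/{SUBSCRIPTION_ID}"
url = (f"https://management.azure.com/{scope}/providers/Microsoft.CostManagement"
       "/scheduledActions/dailyAnomalyAlert")

body = {
    "kind": "InsightAlert",
    "properties": {
        "displayName": "Daily anomaly check",
        "status": "Enabled",
        # Assumed built-in anomaly view; confirm the exact value in the API reference.
        "viewId": f"/{scope}/providers/Microsoft.CostManagement/views/ms:DailyAnomalyByResourceGroup",
        "notification": {"to": ["finops@example.com"], "subject": "Cost anomaly detected"},
        "schedule": {"frequency": "Daily",
                     "startDate": "2023-09-01T00:00:00Z",
                     "endDate": "2024-09-01T00:00:00Z"},
    },
}

resp = requests.put(url, params={"api-version": "2023-03-01"}, json=body,
                    headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json()["id"])
```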
+ An anomaly alert email includes a summary of changes in resource group count and cost. It also includes the top resource group changes for the day compared to the previous 60 days. And, it has a direct link to the Azure portal so that you can review the cost and investigate further. An anomaly alert email is sent only one time when it's detected.
-1. From Azure Home, select **Cost Management** under Tools.
+1. From Azure Home, select **Cost Management** under **Tools**.
1. Verify you've selected the correct subscription in the scope at the top of the page. 1. In the left menu, select **Cost alerts**.
-1. On the Cost alerts page, select **+ Add** > **Add anomaly alert**.
-1. On the Subscribe to emails page, enter required information and then select **Save**.
- :::image type="content" source="./media/analyze-unexpected-charges/subscribe-emails.png" alt-text="Screenshot showing the Subscribe to emails page where you enter notification information for an alert." lightbox="./media/analyze-unexpected-charges/subscribe-emails.png" :::
+1. On the toolbar, select **+ Add**.
+1. On the Create alert rule page, select **Anomaly** as the **Alert type**.
+1. Enter all the required information, then select **Create**.
+ :::image type="content" source="./media/analyze-unexpected-charges/subscribe-emails.png" alt-text="Screenshot showing the Create alert rule page where you enter notification information for an alert." lightbox="./media/analyze-unexpected-charges/subscribe-emails.png" :::
+ You can view and manage the anomaly alert rule by navigating to **Alert rules** in the left navigation menu.
Here's an example email generated for an anomaly alert.
data-factory Better Understand Different Integration Runtime Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/better-understand-different-integration-runtime-charges.md
As the compute is reserved, the 6 copy activities aren't rounded up independentl
:::image type="content" source="./media/integration-runtime-pricing/vnet-integration-runtime-example-2.png" alt-text="Screenshot of calculation formula for Azure integration runtime with managed virtual network example 2.":::
-**Example 3: If there are 6 HDInsight activities triggered by Foreach. The execution time of each is 9 minutes and 40 seconds. The parallel is configured as 50 in Foreach. TTL is 30 minutes.**
+**Example 3: If there are 6 HDInsight activities triggered by Foreach. The execution time of each is 9 minutes and 40 seconds. The parallel is configured as 50 in Foreach. Compute size is 1. TTL is 30 minutes.**
In this example, the execution time of each HDInsight activity is rounded up to 10 minutes. As the 6 HDInsight activities run in parallel and within the concurrency limitation (800), they're only charged once.
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ci-cd-github-troubleshoot-guide.md
Previously updated : 09/19/2022 Last updated : 08/10/2023 # Troubleshoot CI-CD, Azure DevOps, and GitHub issues in Azure Data Factory and Synapse Analytics
You should use the token issued from guest tenant. For example, you have to assi
#### Issue
-If we delete a trigger in Dev branch, which is already available in Test or Production branch with **same** configuration (like frequency and interval), then release pipeline deployment succeeds and corresponding trigger will be deleted in respective environments. But if you have **different** configuration (like frequency and interval) for trigger in Test/Production environments and if you delete the same trigger in Dev, then deployment fails with an error.
+If we delete a trigger in the Dev branch that's already available in the Test or Production branch with the **same** configuration (like frequency and interval), then release pipeline deployment succeeds and the corresponding trigger is deleted in the respective environments. But if you have a **different** configuration (like frequency and interval) for the trigger in Test/Production environments and you delete the same trigger in Dev, then deployment fails with an error.
#### Cause
CI/CD Pipeline fails with the following error:
#### Recommendation
-The error occurs because we often delete a trigger, which is parameterized, therefore, the parameters will not be available in the Azure Resource Manager (ARM) template (because the trigger does not exist anymore). Since the parameter is not in the ARM template anymore, we have to update the overridden parameters in the DevOps pipeline. Otherwise, each time the parameters in the ARM template change, they must update the overridden parameters in the DevOps pipeline (in the deployment task).
+The error occurs because we often delete a parameterized trigger, so the parameters won't be available in the Azure Resource Manager (ARM) template (because the trigger doesn't exist anymore). Since the parameter isn't in the ARM template anymore, we have to update the overridden parameters in the DevOps pipeline. Otherwise, each time the parameters in the ARM template change, you must update the overridden parameters in the DevOps pipeline (in the deployment task).
-### Updating property type is not supported
+### Updating property type isn't supported
#### Issue
Detach Git configuration and set it up again, and make sure NOT to check the "im
#### Issue
-You are unable to move a data factory from one Resource Group to another, failing with the following error:
+You're unable to move a data factory from one Resource Group to another, failing with the following error:
` { "code": "ResourceMoveProviderValidationFailed",
Unable to export and import ARM template. No error was on the portal, however, i
#### Cause
-You have created a customer role as the user and it did not have the necessary permission. When the UI is loaded, a series of exposure control values is checked. In this case, the user's access role does not have permission to access *queryFeaturesValue* API. To access this API, the global parameters feature is turned off. The ARM export code path is partly relying on the global parameters feature.
+You have created a custom role as the user and it didn't have the necessary permission. When the UI is loaded, a series of exposure control values is checked. In this case, the user's access role doesn't have permission to access the *queryFeaturesValue* API. To access this API, the global parameters feature is turned off. The ARM template export code path partly relies on the global parameters feature.
#### Resolution In order to resolve the issue, you need to add the following permission to your role: *Microsoft.DataFactory/factories/queryFeaturesValue/action*. This permission is included by default in the **Data Factory Contributor** role for Data Factory, and the **Contributor** role In Synapse Analytics.
-### Cannot automate publishing for CI/CD
+### Can't automate publishing for CI/CD
#### Cause
-Until recently, the it was only possible to publish a pipeline for deployments by clicking the UI in the Portal. Now, this process can be automated.
+Until recently, it was only possible to publish a pipeline for deployments by clicking the UI in the Portal. Now, this process can be automated.
#### Resolution
-CI/CD process has been enhanced. The **Automated** publish feature takes, validates, and exports all ARM template features from the UI. It makes the logic consumable via a publicly available npm package [@microsoft/azure-data-factory-utilities](https://www.npmjs.com/package/@microsoft/azure-data-factory-utilities). This method allows you to programmatically trigger these actions instead of having to go to the UI and click a button. This method gives your CI/CD pipelines a **true** continuous integration experience. Follow [CI/CD Publishing Improvements](./continuous-integration-delivery-improvements.md) for details.
+CI/CD process has been enhanced. The **Automated** publish feature takes, validates, and exports all ARM template features from the UI. It makes the logic consumable via a publicly available npm package [@microsoft/azure-data-factory-utilities](https://www.npmjs.com/package/@microsoft/azure-data-factory-utilities). This method allows you to programmatically trigger these actions instead of having to go to the UI and select a button. This method gives your CI/CD pipelines a **true** continuous integration experience. Follow [CI/CD Publishing Improvements](./continuous-integration-delivery-improvements.md) for details.
-### Cannot publish because of 4-MB ARM template limit
+### Can't publish because of 4-MB ARM template limit
#### Issue
For small to medium solutions, a single template is easier to understand and mai
#### Issue
-While publishing ADF resources, the azure pipeline triggers twice or more instead of once.
+While publishing resources, the Azure pipeline triggers twice or more instead of once.
#### Cause
-Azure DevOps has the 20 MB REST API limit. When the ARM template exceeds this size, ADF internally splits the template file into multiple files with linked templates to solve this issue. As a side effect, this split could result in customer's triggers being run more than once.
+Azure DevOps has the 20-MB REST API limit. When the ARM template exceeds this size, ADF internally splits the template file into multiple files with linked templates to solve this issue. As a side effect, this split could result in customer's triggers being run more than once.
#### Resolution Use ADF **Automated publish** (preferred) or **manual trigger** method to trigger once instead of twice or more.
-### Cannot connect to GIT Enterprise
+### Can't connect to GIT Enterprise
##### Issue
You can't connect to GIT Enterprise because of permission issues. You can see er
#### Resolution
-You grant Oauth access to the service at first. Then, you have to use correct URL to connect to GIT Enterprise. The configuration must be set to the customer organization(s). For example, the service will try *https://hostname/api/v3/search/repositories?q=user%3&lt;customer credential&gt;....* at first and fail. Then, it will try *https://hostname/api/v3/orgs/&lt;org&gt;/&lt;repo&gt;...*, and succeed.
+You grant OAuth access to the service at first. Then, you have to use the correct URL to connect to GIT Enterprise. The configuration must be set to the customer organization(s). For example, the service tries *https://hostname/api/v3/search/repositories?q=user%3&lt;customer credential&gt;....* at first and fails. Then, it tries *https://hostname/api/v3/orgs/&lt;org&gt;/&lt;repo&gt;...*, and succeeds.
-### Cannot recover from a deleted instance
+### Can't recover from a deleted instance
#### Issue An instance of the service, or the resource group containing it, was deleted and needs to be recovered. #### Cause
-It is possible to recover the instance only if source control was configured for it with DevOps or Git. This action will bring all the latest published resources, but **will not** restore any unpublished pipelines, datasets, or linked services. If there is no Source control, recovering a deleted instance from the Azure backend isn't possible because once the service receives the delete command, the instance is permanently deleted without any backup.
+It's possible to recover the instance only if source control was configured for it with DevOps or Git. This action brings all the latest published resources, but **will not** restore any unpublished pipelines, datasets, or linked services. If there's no Source control, recovering a deleted instance from the Azure backend isn't possible because once the service receives the delete command, the instance is permanently deleted without any backup.
#### Resolution
While npm packages can be consumed in various ways, one of the primary benefits
#### Resolution
-Following section is not valid because package.json folder is not valid.
+The following section isn't valid because the package.json folder isn't valid.
``` - task: Npm@1
It should have DataFactory included in customCommand like *'run build validate $
### Extra left "[" displayed in published JSON file #### Issue
-When publishing with DevOps, there is an extra "[" displayed. The service adds one more "[" in an ARMTemplate in DevOps automatically. You will see an expression like "[[" in JSON file.
+When publishing with DevOps, there's an extra "[" displayed. The service adds one more "[" in an ARM template in DevOps automatically. You'll see an expression like "[[" in the JSON file.
#### Cause
-Because [ is a reserved character for ARM, an extra [ is added automatically to escape "[".
+Because [ is a reserved character for ARM templates, an extra [ is added automatically to escape "[".
#### Resolution This is normal behavior during the publishing process for CI/CD.
You want to perform unit testing during development and deployment of your pipel
During development and deployment cycles, you may want to unit test your pipeline before you manually or automatically publish your pipeline. Test automation allows you to run more tests, in less time, with guaranteed repeatability. Automatically retesting all your pipelines before deployment gives you some protection against regression faults. Automated testing is a key component of CI/CD software development approaches: inclusion of automated tests in CI/CD deployment pipelines can significantly improve quality. In long run, tested pipeline artifacts are reused saving you cost and time. #### Resolution
-Because customers may have different unit testing requirements with different skillsets, usual practice is to follow following steps:
+Because customers may have different unit testing requirements with different skill sets, the usual practice is to follow these steps:
1. Setup Azure DevOps CI/CD project or develop .NET/PYTHON/REST type SDK driven test strategy. 2. For CI/CD, create a build artifact containing all scripts and deploy resources in release pipeline. For an SDK driven approach, develop Test units using PyTest in Python, Nunit in C# using .NET SDK and so on. 3. Run unit tests as part of release pipeline or independently with ADF Python/PowerShell/.NET/REST SDK. For example, you want to delete duplicates in a file and then store curated file as table in a database. To test the pipeline, you set up a CI/CD project using Azure DevOps.
-You set up a TEST pipeline stage where you deploy your developed pipeline. You configure TEST stage to run Python tests for making sure table data is what you expected. If you do not use CI/CD, you use **Nunit** to trigger deployed pipelines with tests you want. Once you are satisfied with the results, you can finally publish the pipeline to a production instance.
+You set up a TEST pipeline stage where you deploy your developed pipeline. You configure TEST stage to run Python tests for making sure table data is what you expected. If you don't use CI/CD, you use **Nunit** to trigger deployed pipelines with tests you want. Once you're satisfied with the results, you can finally publish the pipeline to a production instance.
### Pipeline runs temporarily fail after CI/CD deployment or authoring updates
After some amount of time, new pipeline runs begin to succeed without any user a
#### Cause
-There are several scenarios, which can trigger this behavior, all of which involve a new version of a dependent resource being called by the old version of the parent resource. For example, suppose an existing child pipeline called by ΓÇ£Execute pipelineΓÇ¥ is updated to have required parameters and the existing parent pipeline is updated to pass these parameters. If the deployment occurs during a parent pipeline execution, but before the **Execute Pipeline** activity, the old version of the pipeline will call the new version of the child pipeline, and the expected parameters will not be passed. This will cause the pipeline to fail with a *UserError*. This can also occur with other types of dependencies, such as if a breaking change is made to linked service during a pipeline run that references it.
+There are several scenarios that can trigger this behavior, all of which involve a new version of a dependent resource being called by the old version of the parent resource. For example, suppose an existing child pipeline called by "Execute pipeline" is updated to have required parameters and the existing parent pipeline is updated to pass these parameters. If the deployment occurs during a parent pipeline execution, but before the **Execute Pipeline** activity, the old version of the pipeline calls the new version of the child pipeline, and the expected parameters won't be passed. This causes the pipeline to fail with a *UserError*. This can also occur with other types of dependencies, such as if a breaking change is made to a linked service during a pipeline run that references it.
#### Resolution New runs of the parent pipeline will automatically begin succeeding, so typically no action is needed. However, to prevent these errors, customers should consider dependencies while authoring and planning deployments to avoid breaking changes.
-### Cannot parameterize integration run time in linked service
+### Can't parameterize integration run time in linked service
#### Issue Need to parameterize linked service integration run time
You have to select manually and set an integration runtime. You can use PowerShe
Changing Integration runtime name during CI/CD deployment. #### Cause
-Parameterizing an entity reference (Integration runtime in Linked service, Dataset in activity, Linked Service in dataset) isn't supported. Changing the runtime name during deployment will cause the depended resource (Resource referencing the Integration runtime) to become malformed with invalid reference.
+Parameterizing an entity reference (Integration runtime in Linked service, Dataset in activity, Linked Service in dataset) isn't supported. Changing the runtime name during deployment causes the depended resource (Resource referencing the Integration runtime) to become malformed with invalid reference.
#### Resolution Data Factory requires you to have the same name and type of integration runtime across all stages of CI/CD.
ARM template deployment fails with an error such as DataFactoryPropertyUpdateNot
The ARM template deployment is attempting to change the type of an existing integration runtime. This isn't allowed and will cause a deployment failure because data factory requires the same name and type of integration runtime across all stages of CI/CD. ##### Resolution
-If you want to share integration runtimes across all stages, consider using a ternary factory just to contain the shared integration runtimes. You can use this shared factory in all of your environments as a linked integration runtime type. For more information, refer to [Continuous integration and delivery - Azure Data Factory](./continuous-integration-delivery.md#best-practices-for-cicd)
+If you want to share integration runtimes across all stages, consider using a ternary factory just to contain the shared integration runtimes. You can use this shared factory in all of your environments as a linked integration runtime type. For more information, see [Continuous integration and delivery - Azure Data Factory](./continuous-integration-delivery.md#best-practices-for-cicd).
### GIT publish may fail because of PartialTempTemplates files #### Issue
-When you've 1000 s of old temporary ARM json files in PartialTemplates folder, publish may fail.
+When you have thousands of old temporary ARM template JSON files in the PartialTemplates folder, publish may fail.
#### Cause On publish, ADF fetches every file inside each folder in the collaboration branch. In the past, publishing generated two folders in the publish branch: PartialArmTemplates and LinkedTemplates. PartialArmTemplates files are no longer generated. However, because there can be many old files (thousands) in the PartialArmTemplates folder, this may result in many requests being made to GitHub on publish and the rate limit being hit.
On publish, ADF fetches every file inside each folder in the collaboration branc
#### Resolution Delete the PartialTemplates folder and republish. You can delete the temporary files in that folder as well.
-### Include global parameters in ARM template option does not work
+### Include global parameters in ARM template option doesn't work
#### Issue
-If you are using old default parameterization template, new way to include global paramaters from **Manage Hub** will not work.
+If you're using the old default parameterization template, the new way to include global parameters from the **Manage Hub** won't work.
#### Cause The default parameterization template should include all values from the global parameter list. #### Resolution
-* Use updated [default parameterization template.](./continuous-integration-delivery-resource-manager-custom-parameters.md#default-parameterization-template) as one time migration to new method of including global parameters. This template references to all values in global parameter list. You also have to update the deployment task in the **release pipeline** if you are already overriding the template parameters there.
-* Update the template parameter names in CI/CD pipeline if you are already overriding the template parameters (for global parameters).
+* Use the updated [default parameterization template](./continuous-integration-delivery-resource-manager-custom-parameters.md#default-parameterization-template) as a one-time migration to the new method of including global parameters. This template references all values in the global parameter list. You also have to update the deployment task in the **release pipeline** if you're already overriding the template parameters there.
+* Update the template parameter names in the CI/CD pipeline if you're already overriding the template parameters (for global parameters).
### Error code: InvalidTemplate
Default parameterization template should include all values from global paramete
Message says *Unable to parse expression.* The expression passed in the dynamic content of an activity isn't being processed correctly because of a syntax error. #### Cause
-Dynamic content is not written as per expression language requirements.
+The dynamic content isn't written according to the expression language requirements.
#### Resolution * For debug run, check expressions in pipeline within current git branch.
-* For Triggered run, check expressions in pipeline within *Live* mode .
+* For a triggered run, check expressions in the pipeline within *Live* mode.
## Next steps
data-factory Concepts Change Data Capture Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture-resource.md
The new Change Data Capture resource in ADF allows for full fidelity change data
For more information on known limitations and troubleshooting assistance, please reference [this troubleshooting guide](change-data-capture-troubleshoot.md).
+> [!NOTE]
+> We always use the last published configuration when starting a CDC. While a running CDC processes your data, you're billed for 4 vCores of General Purpose Data Flows.
## Next steps - [Learn how to set up a change data capture resource](how-to-change-data-capture-resource.md).
data-factory Concepts Data Flow Schema Drift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-schema-drift.md
Previously updated : 09/19/2022 Last updated : 08/10/2023 # Schema drift in mapping data flow
data-factory Connector Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-store.md
Previously updated : 09/01/2022 Last updated : 08/10/2023 # Copy data to or from Azure Data Lake Storage Gen1 using Azure Data Factory or Azure Synapse Analytics
Specifically, with this connector you can:
Use the following steps to create a linked service to Azure Data Lake Storage Gen1 in the Azure portal UI.
-1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
# [Azure Data Factory](#tab/data-factory)
The following properties are supported for Azure Data Lake Store Gen1 under `sto
| OPTION 2: name range<br/>- listBefore | Retrieve the folders/files whose name is before this value alphabetically (inclusive). It utilizes the service-side filter for ADLS Gen1, which provides better performance than a wildcard filter.<br>The service applies this filter to the path defined in dataset, and only one entity level is supported. See more examples in [Name range filter examples](#name-range-filter-examples). | No | | OPTION 3: wildcard<br>- wildcardFolderPath | The folder path with wildcard characters to filter source folders. <br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character); use `^` to escape if your actual folder name has wildcard or this escape char inside. <br>See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | No | | OPTION 3: wildcard<br>- wildcardFileName | The file name with wildcard characters under the given folderPath/wildcardFolderPath to filter source files. <br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character); use `^` to escape if your actual file name has wildcard or this escape char inside. See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | Yes |
-| OPTION 4: a list of files<br>- fileListPath | Indicates to copy a given file set. Point to a text file that includes a list of files you want to copy, one file per line, which is the relative path to the path configured in the dataset.<br/>When using this option, do not specify file name in dataset. See more examples in [File list examples](#file-list-examples). |No |
+| OPTION 4: a list of files<br>- fileListPath | Indicates to copy a given file set. Point to a text file that includes a list of files you want to copy, one file per line, which is the relative path to the path configured in the dataset.<br/>When using this option, don't specify file name in dataset. See more examples in [File list examples](#file-list-examples). |No |
| ***Additional settings:*** | | |
-| recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. Note that when recursive is set to true and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink. <br>Allowed values are **true** (default) and **false**.<br>This property doesn't apply when you configure `fileListPath`. |No |
-| deleteFilesAfterCompletion | Indicates whether the binary files will be deleted from source store after successfully moving to the destination store. The file deletion is per file, so when copy activity fails, you will see some files have already been copied to the destination and deleted from source, while others are still remaining on source store. <br/>This property is only valid in binary files copy scenario. The default value: false. |No |
-| modifiedDatetimeStart | Files filter based on the attribute: Last Modified. <br>The files will be selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to UTC time zone in the format of "2018-12-01T05:00:00Z". <br> The properties can be NULL, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has datetime value but `modifiedDatetimeEnd` is NULL, it means the files whose last modified attribute is greater than or equal with the datetime value will be selected. When `modifiedDatetimeEnd` has datetime value but `modifiedDatetimeStart` is NULL, it means the files whose last modified attribute is less than the datetime value will be selected.<br/>This property doesn't apply when you configure `fileListPath`. | No |
+| recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. When recursive is set to true and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink. <br>Allowed values are **true** (default) and **false**.<br>This property doesn't apply when you configure `fileListPath`. |No |
+| deleteFilesAfterCompletion | Indicates whether the binary files will be deleted from the source store after successfully moving to the destination store. The file deletion is per file, so when the copy activity fails, you'll see that some files have already been copied to the destination and deleted from the source, while others remain in the source store. <br/>This property is only valid in the binary files copy scenario. The default value: false. |No |
+| modifiedDatetimeStart | Files filter based on the attribute: Last Modified. <br>The files are selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to the UTC time zone in the format of "2018-12-01T05:00:00Z". <br> The properties can be NULL, which means no file attribute filter is applied to the dataset. When `modifiedDatetimeStart` has a datetime value but `modifiedDatetimeEnd` is NULL, the files whose last modified attribute is greater than or equal to the datetime value are selected. When `modifiedDatetimeEnd` has a datetime value but `modifiedDatetimeStart` is NULL, the files whose last modified attribute is less than the datetime value are selected.<br/>This property doesn't apply when you configure `fileListPath`. | No |
| modifiedDatetimeEnd | Same as above. | No | | enablePartitionDiscovery | For files that are partitioned, specify whether to parse the partitions from the file path and add them as additional source columns.<br/>Allowed values are **false** (default) and **true**. | No |
-| partitionRootPath | When partition discovery is enabled, specify the absolute root path in order to read partitioned folders as data columns.<br/><br/>If it is not specified, by default,<br/>- When you use file path in dataset or list of files on source, partition root path is the path configured in dataset.<br/>- When you use wildcard folder filter, partition root path is the sub-path before the first wildcard.<br/><br/>For example, assuming you configure the path in dataset as "root/folder/year=2020/month=08/day=27":<br/>- If you specify partition root path as "root/folder/year=2020", copy activity will generate two more columns `month` and `day` with value "08" and "27" respectively, in addition to the columns inside the files.<br/>- If partition root path is not specified, no extra column will be generated. | No |
+| partitionRootPath | When partition discovery is enabled, specify the absolute root path in order to read partitioned folders as data columns.<br/><br/>If it isn't specified, by default,<br/>- When you use file path in dataset or list of files on source, partition root path is the path configured in dataset.<br/>- When you use wildcard folder filter, partition root path is the subpath before the first wildcard.<br/><br/>For example, assuming you configure the path in dataset as "root/folder/year=2020/month=08/day=27":<br/>- If you specify partition root path as "root/folder/year=2020", copy activity generates two more columns `month` and `day` with value "08" and "27" respectively, in addition to the columns inside the files.<br/>- If partition root path isn't specified, no extra column is generated. | No |
| maxConcurrentConnections | The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No | **Example:**
The following properties are supported for Azure Data Lake Store Gen1 under `sto
| | | -- | | type | The type property under `storeSettings` must be set to **AzureDataLakeStoreWriteSettings**. | Yes | | copyBehavior | Defines the copy behavior when the source is files from a file-based data store.<br/><br/>Allowed values are:<br/><b>- PreserveHierarchy (default)</b>: Preserves the file hierarchy in the target folder. The relative path of the source file to the source folder is identical to the relative path of the target file to the target folder.<br/><b>- FlattenHierarchy</b>: All files from the source folder are in the first level of the target folder. The target files have autogenerated names. <br/><b>- MergeFiles</b>: Merges all files from the source folder to one file. If the file name is specified, the merged file name is the specified name. Otherwise, it's an autogenerated file name. | No |
-| expiryDateTime | Specifies the expiry time of the written files. The time is applied to the UTC time in the format of "2020-03-01T08:00:00Z". By default it is NULL, which means the written files are never expired. | No |
+| expiryDateTime | Specifies the expiry time of the written files. The time is applied to the UTC time in the format of "2020-03-01T08:00:00Z". By default it's NULL, which means the written files never expire. | No |
| maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No | **Example:**
This section describes the resulting behavior of name range filters.
| Sample source structure | Configuration | Result | |: |: |: |
-|root<br/>&nbsp;&nbsp;&nbsp;&nbsp;a<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;file.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;ax<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;file2.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;ax.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;b<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;file3.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;bx.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;c<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;file4.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;cx.csv| **In dataset:**<br>- Folder path: `root`<br><br>**In copy activity source:**<br>- List after: `a`<br>- List before: `b`| Then the following files will be copied:<br><br>root<br/>&nbsp;&nbsp;&nbsp;&nbsp;ax<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;file2.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;ax.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;b<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;file3.csv |
+|root<br/>&nbsp;&nbsp;&nbsp;&nbsp;a<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;file.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;ax<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;file2.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;ax.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;b<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;file3.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;bx.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;c<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;file4.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;cx.csv| **In dataset:**<br>- Folder path: `root`<br><br>**In copy activity source:**<br>- List after: `a`<br>- List before: `b`| Then the following files are copied:<br><br>root<br/>&nbsp;&nbsp;&nbsp;&nbsp;ax<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;file2.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;ax.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;b<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;file3.csv |
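The exclusive/inclusive behavior in the preceding row can be sketched in a few lines of Python. The entry names come from the sample structure above; treating **List after** as exclusive and **List before** as inclusive is an assumption that matches the result shown in the table.

```python
# Sketch of the listAfter/listBefore name-range filter from the example above.
entries = ["a", "ax", "ax.csv", "b", "bx.csv", "c", "cx.csv"]

list_after = "a"    # exclusive lower bound (assumption)
list_before = "b"   # inclusive upper bound, per the listBefore description

selected = [name for name in entries if list_after < name <= list_before]
print(selected)  # ['ax', 'ax.csv', 'b']
```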
### Folder and file filter examples
First, set a wildcard to include all paths that are the partitioned folders plus
:::image type="content" source="media/data-flow/part-file-2.png" alt-text="Screenshot of partition source file settings in mapping data flow source transformation.":::
-Use the Partition Root Path setting to define what the top level of the folder structure is. When you view the contents of your data via a data preview, you'll see that the service will add the resolved partitions found in each of your folder levels.
+Use the Partition Root Path setting to define what the top level of the folder structure is. When you view the contents of your data via a data preview, you see that the service adds the resolved partitions found in each of your folder levels.
:::image type="content" source="media/data-flow/partfile1.png" alt-text="Partition root path":::
Use the Partition Root Path setting to define what the top level of the folder s
**After completion:** Choose to do nothing with the source file after the data flow runs, delete the source file, or move the source file. The paths for the move are relative.
-To move source files to another location post-processing, first select "Move" for file operation. Then, set the "from" directory. If you're not using any wildcards for your path, then the "from" setting will be the same folder as your source folder.
+To move source files to another location post-processing, first select "Move" for file operation. Then, set the "from" directory. If you're not using any wildcards for your path, then the "from" setting is the same folder as your source folder.
-If you have a source path with wildcard, your syntax will look like this below:
+If you have a source path with a wildcard, your syntax looks like this:
`/data/sales/20??/**/*.csv`
In this case, all files that were sourced under /data/sales are moved to /backup
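As a loose Python analogue of the wildcard path shown above, the standard `glob` module matches the same `*`, `?`, and `**` patterns (ADF additionally supports `^` as an escape character, which `glob` doesn't). The folder layout is hypothetical.

```python
# Approximate the ADF wildcard path /data/sales/20??/**/*.csv with Python's glob.
# recursive=True lets ** span any number of subfolder levels.
import glob

for path in glob.glob("/data/sales/20??/**/*.csv", recursive=True):
    print(path)  # e.g. /data/sales/2004/01/sales.csv
```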
**Filter by last modified:** You can filter which files you process by specifying a date range of when they were last modified. All date-times are in UTC.
-**Enable change data capture:** If true, you will get new or changed files only from the last run. Initial load of full snapshot data will always be gotten in the first run, followed by capturing new or changed files only in next runs. For more details, see [Change data capture](#change-data-capture-preview).
+**Enable change data capture:** If true, you get only new or changed files since the last run. A full snapshot of the data is always loaded in the first run, followed by capturing only new or changed files in subsequent runs. For more information, see [Change data capture](#change-data-capture-preview).
:::image type="content" source="media/data-flow/enable-change-data-capture.png" alt-text="Screenshot showing Enable change data capture.":::
In the sink transformation, you can write to either a container or folder in Azu
**File name option:** Determines how the destination files are named in the destination folder. The file name options are: * **Default**: Allow Spark to name files based on PART defaults.
- * **Pattern**: Enter a pattern that enumerates your output files per partition. For example, **loans[n].csv** will create loans1.csv, loans2.csv, and so on.
+ * **Pattern**: Enter a pattern that enumerates your output files per partition. For example, **loans[n].csv** creates loans1.csv, loans2.csv, and so on.
* **Per partition**: Enter one file name per partition.
- * **As data in column**: Set the output file to the value of a column. The path is relative to the dataset container, not the destination folder. If you have a folder path in your dataset, it will be overridden.
- * **Output to a single file**: Combine the partitioned output files into a single named file. The path is relative to the dataset folder. Please be aware that the merge operation can possibly fail based upon node size. This option is not recommended for large datasets.
+ * **As data in column**: Set the output file to the value of a column. The path is relative to the dataset container, not the destination folder. If you have a folder path in your dataset, it is overridden.
+ * **Output to a single file**: Combine the partitioned output files into a single named file. The path is relative to the dataset folder. Be aware that the merge operation can possibly fail based upon node size. This option isn't recommended for large datasets.
**Quote all:** Determines whether to enclose all values in quotes
To learn details about the properties, check [Delete activity](delete-activity.m
Azure Data Factory can get new or changed files only from Azure Data Lake Storage Gen1 by enabling **Enable change data capture (Preview)** in the mapping data flow source transformation. With this connector option, you can read new or updated files only and apply transformations before loading transformed data into destination datasets of your choice.
-Make sure you keep the pipeline and activity name unchanged, so that the checkpoint can always be recorded from the last run to get changes from there. If you change your pipeline name or activity name, the checkpoint will be reset, and you will start from the beginning in the next run.
+Make sure you keep the pipeline and activity name unchanged, so that the checkpoint can always be recorded from the last run to get changes from there. If you change your pipeline name or activity name, the checkpoint will be reset, and you'll start from the beginning in the next run.
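One way to picture why renaming resets the checkpoint: the checkpoint is effectively looked up by pipeline and activity name, so a renamed pipeline or activity has no prior record. The following Python sketch only illustrates that lookup behavior; it isn't ADF's actual implementation.

```python
# Illustration only: a checkpoint store keyed by (pipeline name, activity name).
# Renaming either one produces a new key, so the next run starts from scratch.
checkpoints: dict[tuple[str, str], str] = {}

def last_checkpoint(pipeline: str, activity: str):
    return checkpoints.get((pipeline, activity))

def record_checkpoint(pipeline: str, activity: str, marker: str) -> None:
    checkpoints[(pipeline, activity)] = marker

record_checkpoint("IngestSales", "CopyNewFiles", "2023-08-10T00:00:00Z")
print(last_checkpoint("IngestSales", "CopyNewFiles"))    # 2023-08-10T00:00:00Z
print(last_checkpoint("IngestSalesV2", "CopyNewFiles"))  # None -> full reload next run
```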
-When you debug the pipeline, the **Enable change data capture (Preview)** works as well. Be aware that the checkpoint will be reset when you refresh your browser during the debug run. After you are satisfied with the result from debug run, you can publish and trigger the pipeline. It will always start from the beginning regardless of the previous checkpoint recorded by debug run.
+When you debug the pipeline, the **Enable change data capture (Preview)** option works as well. The checkpoint is reset when you refresh your browser during the debug run. After you're satisfied with the result from the debug run, you can publish and trigger the pipeline. It will always start from the beginning regardless of the previous checkpoint recorded by the debug run.
-In the monitoring section, you always have the chance to rerun a pipeline. When you are doing so, the changes are always gotten from the checkpoint record in your selected pipeline run.
+In the monitoring section, you always have the chance to rerun a pipeline. When you're doing so, the changes are always gotten from the checkpoint record in your selected pipeline run.
## Next steps
data-factory Connector Google Sheets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-sheets.md
Previously updated : 08/30/2022 Last updated : 08/10/2023 # Transform data in Google Sheets (Preview) using Azure Data Factory or Synapse Analytics
data-factory Connector Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hive.md
Previously updated : 08/30/2022 Last updated : 08/10/2023
data-factory Connector Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle.md
Previously updated : 09/15/2022 Last updated : 08/10/2023
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-rest.md
Previously updated : 09/14/2022 Last updated : 08/10/2023
Specifically, this generic REST connector supports:
Use the following steps to create a REST linked service in the Azure portal UI.
-1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
# [Azure Data Factory](#tab/data-factory)
Set the **authenticationType** property to **ManagedServiceIdentity**. In additi
| Property | Description | Required | |: |: |: |
-| aadResourceId | Specify the AAD resource you are requesting for authorization, for example, `https://management.core.windows.net`.| Yes |
+| aadResourceId | Specify the Microsoft Azure Active Directory resource you are requesting for authorization, for example, `https://management.core.windows.net`.| Yes |
**Example**
AlterRow1 sink(allowSchemaDrift: true,
## Pagination support
-When copying data from REST APIs, normally, the REST API limits its response payload size of a single request under a reasonable number; while to return large amount of data, it splits the result into multiple pages and requires callers to send consecutive requests to get next page of the result. Usually, the request for one page is dynamic and composed by the information returned from the response of previous page.
+When you copy data from REST APIs, the REST API normally limits the response payload size of a single request to a reasonable number. To return a large amount of data, it splits the result into multiple pages and requires callers to send consecutive requests to get the next page of the result. Usually, the request for one page is dynamic and composed from the information returned in the response of the previous page.
This generic REST connector supports the following pagination patterns:
The pagination rules should be set as the following screenshot:
:::image type="content" source="media/connector-rest/pagination-rule-example-8.png" alt-text="Screenshot showing how to set the pagination rule for Example 8.":::
-By default, the pagination will stop when body **.{@odata.nextLink}** is null or empty.
+By default, the pagination will stop when body **.{@odata.nextLink}** is null or empty.
But if the value of **@odata.nextLink** in the last response body is equal to the last request URL, it leads to an endless loop. To avoid this condition, define end condition rules.
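A minimal Python sketch of that pagination pattern, following **@odata.nextLink** until it's null or empty and stopping early if the link repeats the last request URL (the endless-loop case described above). The endpoint URL and response shape are placeholders.

```python
# Sketch: follow @odata.nextLink pagination with an end condition that guards
# against a nextLink equal to the last request URL (which would loop forever).
import requests

def read_all_pages(start_url: str) -> list:
    rows = []
    url = start_url
    while url:
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        body = response.json()
        rows.extend(body.get("value", []))
        next_link = body.get("@odata.nextLink")
        if not next_link or next_link == url:
            break  # nextLink is null/empty, or it repeats the last URL
        url = next_link
    return rows

# Example call (placeholder endpoint):
# records = read_all_pages("https://example.com/odata/People")
```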
But if the value of **@odata.nextLink** in the last response body is equal to th
#### Example 9: The response format is XML and the next request URL is from the response body when use pagination in mapping data flows
-This example states how to set the pagination rule in mapping data flows when the response format is XML and the next request URL is from the response body. As shown in the following screenshot, the first URL is *https://\<user\>.dfs.core.windows.net/bugfix/test/movie_1.xml*
+This example shows how to set the pagination rule in mapping data flows when the response format is XML and the next request URL is from the response body. As shown in the following screenshot, the first URL is *https://\<user\>.dfs.core.windows.net/bugfix/test/movie_1.xml*
:::image type="content" source="media/connector-rest/pagination-rule-example-9-situation.png" alt-text="Screenshot showing the response format is X M L and the next request U R L is from the response body.":::
data-factory Continuous Integration Delivery Manual Promotion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-manual-promotion.md
Previously updated : 09/20/2022 Last updated : 08/10/2023
data-factory Continuous Integration Delivery Resource Manager Custom Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-resource-manager-custom-parameters.md
Previously updated : 09/28/2022- Last updated : 08/11/2023 # Use custom parameters with the Resource Manager template
data-factory Control Flow Get Metadata Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-get-metadata-activity.md
Previously updated : 09/20/2022 Last updated : 08/10/2023
data-factory Copy Activity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-overview.md
Previously updated : 09/20/2022 Last updated : 08/10/2023
To copy data from a source to a sink, the service that runs the Copy activity pe
:::image type="content" source="media/copy-activity-overview/copy-activity-overview.png" alt-text="Copy activity overview"::: > [!NOTE]
-> In case if a self-hosted integration runtime is used in either source or sink data store within a copy activity, than both the source and sink must be accessible from the server hosting the integartion runtime for the copy activity to be successful.
+> If a self-hosted integration runtime is used in either a source or sink data store within a Copy activity, then both the source and sink must be accessible from the server hosting the integration runtime for the Copy activity to be successful.
## Supported data stores and formats
See [Schema and data type mapping](copy-activity-schema-and-type-mapping.md) for
In addition to copying data from source data store to sink, you can also configure to add additional data columns to copy along to sink. For example: -- When copy from file-based source, store the relative file path as an additional column to trace from which file the data comes from.-- Duplicate the specified source column as another column.
+- When you copy from a file-based source, store the relative file path as an additional column to trace from which file the data comes from.
+- Duplicate the specified source column as another column.
- Add a column with ADF expression, to attach ADF system variables like pipeline name/pipeline ID, or store other dynamic value from upstream activity's output. - Add a column with static value to meet your downstream consumption need.
To configure it programmatically, add the `additionalColumns` property in your c
] ``` >[!TIP]
->After configuring additional columns remember to map them to you destination sink, in the Mapping tab.
+>After configuring additional columns, remember to map them to your destination sink on the Mapping tab.
## Auto create sink tables
-When copying data into SQL database/Azure Synapse Analytics, if the destination table does not exist, copy activity supports automatically creating it based on the source data. It aims to help you quickly get started to load the data and evaluate SQL database/Azure Synapse Analytics. After the data ingestion, you can review and adjust the sink table schema according to your needs.
+When you copy data into SQL database/Azure Synapse Analytics, if the destination table doesn't exist, the copy activity supports automatically creating it based on the source data. It aims to help you quickly get started to load the data and evaluate SQL database/Azure Synapse Analytics. After the data ingestion, you can review and adjust the sink table schema according to your needs.
This feature is supported when copying data from any source into the following sink data stores. You can find the option on *ADF authoring UI* -> *Copy activity sink* -> *Table option* -> *Auto create table*, or via `tableOption` property in copy activity sink payload. - [Azure SQL Database](connector-azure-sql-database.md)-- [Azure SQL Database Managed Instance](connector-azure-sql-managed-instance.md)
+- [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md)
- [Azure Synapse Analytics](connector-azure-sql-data-warehouse.md) - [SQL Server](connector-sql-server.md)
By default, the Copy activity stops copying data and returns a failure when sour
When you move data from source to destination store, copy activity provides an option for you to do additional data consistency verification to ensure the data is not only successfully copied from source to destination store, but also verified to be consistent between source and destination store. Once inconsistent files have been found during the data movement, you can either abort the copy activity or continue to copy the rest by enabling fault tolerance setting to skip inconsistent files. You can get the skipped file names by enabling session log setting in copy activity. See [Data consistency verification in copy activity](copy-activity-data-consistency.md) for details. ## Session log
-You can log your copied file names, which can help you to further ensure the data is not only successfully copied from source to destination store, but also consistent between source and destination store by reviewing the copy activity session logs. See [Session log in copy activity](copy-activity-log.md) for details.
+You can log your copied file names, which can help you to further ensure the data is not only successfully copied from source to destination store, but also consistent between source and destination store by reviewing the copy activity session logs. See [Session log in copy activity](copy-activity-log.md) for details.
## Next steps See the following quickstarts, tutorials, and samples:
data-factory Create Shared Self Hosted Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-shared-self-hosted-integration-runtime-powershell.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Create a shared self-hosted integration runtime in Azure Data Factory
data-factory Data Factory Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-troubleshoot-guide.md
Previously updated : 09/13/2022 Last updated : 08/10/2023
For connector issues such as an encounter error using the copy activity, refer t
- **Message**: `Job is not fully initialized yet. Please retry later.` -- **Cause**: The job has not initialized.
+- **Cause**: The job hasn't initialized.
- **Recommendation**: Wait and try again later.
For connector issues such as an encounter error using the copy activity, refer t
:::image type="content" source="media/data-factory-troubleshoot-guide/databricks-pipeline.png" alt-text="Screenshot of the Databricks pipeline.":::
- You noticed this change on September 28, 2021 at around 9 AM IST when your pipeline relying on this output started failing. No change was made on the pipeline, and the Boolean output had been coming as expected before the failure.
+ You noticed this change on September 28, 2021 at around 9 AM IST when your pipeline relying on this output started failing. No change was made on the pipeline, and the Boolean output data arrived as expected prior to the failure.
:::image type="content" source="media/data-factory-troubleshoot-guide/old-and-new-output.png" alt-text="Screenshot of the difference in the output.":::
The following table applies to U-SQL.
- **Cause**: The properties of the activity such as `pipelineParameters` are invalid for the Azure Machine Learning (ML) pipeline. -- **Recommendation**: Check that the value of activity properties matches the expected payload of the published Azure ML pipeline specified in Linked Service.
+- **Recommendation**: Check that the value of activity properties matches the expected payload of the published Azure Machine Learning pipeline specified in Linked Service.
### Error code: 4124 - **Message**: `Request sent to Azure Machine Learning for operation '%operation;' failed with http status code '%statusCode;'. Error message from Azure Machine Learning: '%externalMessage;'.` -- **Cause**: The published Azure ML pipeline endpoint doesn't exist.
+- **Cause**: The published Azure Machine Learning pipeline endpoint doesn't exist.
- **Recommendation**: Verify that the published Azure Machine Learning pipeline endpoint specified in Linked Service exists in Azure Machine Learning.
The following table applies to U-SQL.
- **Message**: `Azure ML pipeline run failed with status: '%amlPipelineRunStatus;'. Azure ML pipeline run Id: '%amlPipelineRunId;'. Please check in Azure Machine Learning for more error logs.` -- **Cause**: The Azure ML pipeline run failed.
+- **Cause**: The Azure Machine Learning pipeline run failed.
- **Recommendation**: Check Azure Machine Learning for more error logs, then fix the ML pipeline.
The following table applies to U-SQL.
- **Message**: `There are not enough vcores available for your spark job, details: '%errorMessage;'` -- **Cause**: Insufficient vcores
+- **Cause**: Insufficient virtual cores
- **Recommendation**: Try reducing the number of vCores requested or increasing your vCore quota. For more information, see [Apache Spark core concepts](../synapse-analytics/spark/apache-spark-concepts.md).
The following table applies to Azure Batch.
- **Cause**: The service tried to create a batch on a Spark cluster using Livy API (livy/batch), but received an error. -- **Recommendation**: Follow the error message to fix the issue. If there isn't enough information to get it resolved, contact the HDI team and provide them the batch ID and job ID, which can be found in the activity run Output in the service Monitoring page. To troubleshoot further, collect the full log of the batch job.
+- **Recommendation**: Follow the error message to fix the issue. If there isn't enough information to get it resolved, contact the HDI team and provide them with the batch ID and job ID, which can be found in the activity run Output in the service Monitoring page. To troubleshoot further, collect the full log of the batch job.
For more information on how to collect the full log, see [Get the full log of a batch job](/rest/api/hdinsightspark/hdinsight-spark-batch-job#get-the-full-log-of-a-batch-job).
data-factory Format Delta https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-delta.md
Previously updated : 08/22/2022 Last updated : 08/10/2023
data-factory How To Change Data Capture Resource With Schema Evolution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-change-data-capture-resource-with-schema-evolution.md
Title: Capture changed data with schema evolution using change data capture resource
-description: This tutorial provides step-by-step instructions on how to capture changed data with schema evolution from Azure SQL DB to Delta sink using a change data capture resource.
+ Title: Capture changed data with schema evolution by using a change data capture resource
+description: Get step-by-step instructions on how to capture changed data with schema evolution from Azure SQL Database to a Delta sink by using a change data capture (CDC) resource.
Last updated 07/21/2023
-# How to capture changed data with schema evolution from Azure SQL DB to Delta sink using a Change Data Capture (CDC) resource
+# Capture changed data with schema evolution from Azure SQL Database to a Delta sink by using a change data capture resource
+ [!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-In this tutorial, you will use the Azure Data Factory user interface (UI) to create a new Change Data Capture (CDC) resource that picks up changed data from an Azure SQL Database source to Delta Lake stored in Azure Data Lake Storage (ADLS) Gen2 in real-time showcasing the support of schema evolution. The configuration pattern in this tutorial can be modified and expanded upon.
+In this article, you use the Azure Data Factory user interface to create a change data capture (CDC) resource. The resource picks up changed data from an Azure SQL Database source and adds it to Delta Lake stored in Azure Data Lake Storage Gen2, in real time. This activity showcases the support of schema evolution by using a CDC resource between source and sink.
+
+In this article, you learn how to:
+
+* Create a CDC resource.
+* Make dynamic schema changes to a source table.
+* Validate schema changes at the target Delta sink.
-In this tutorial, you follow these steps:
-* Create a Change Data Capture resource.
-* Make dynamic schema changes to source table.
-* Validate schema changes at target Delta sink.
+You can modify and expand the configuration pattern in this article.
## Prerequisites
-* **Azure subscription.** If you don't have an Azure subscription, create a free Azure account before you begin.
-* **Azure SQL Database.** You use Azure SQL DB as a source data store. If you donΓÇÖt have an Azure SQL DB, create one in the Azure portal first before continuing the tutorial.
-* **Azure storage account.** You use delta lake stored in ADLS Gen 2 storage as a target data store. If you don't have a storage account, see Create an Azure storage account for steps to create one.
+Before you begin the procedures in this article, make sure that you have these resources:
-## Create a change data capture artifact
+* **Azure subscription**. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free).
+* **SQL database**. You use Azure SQL Database as a source data store. If you don't have a SQL database, create one in the Azure portal.
+* **Storage account**. You use Delta Lake stored in Azure Data Lake Storage Gen2 as a target data store. If you don't have a storage account, see [Create a storage account](/azure/storage/common/storage-account-create) for the steps to create one.
+## Create a CDC artifact
-1. Navigate to the **Author** blade in your data factory. You see a new top-level artifact below **Pipelines** called **Change Data Capture (preview)**.
+1. Go to the **Author** pane in your data factory. Below **Pipelines**, a new top-level artifact called **Change Data Capture (preview)** appears.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-100.png" alt-text="Screenshot of new top level artifact shown under Factory resources panel." lightbox="media/adf-cdc/change-data-capture-resource-100.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-100.png" alt-text="Screenshot of a new top-level artifact for change data capture on the Factory Resources pane." lightbox="media/adf-cdc/change-data-capture-resource-100.png":::
-2. To create a new **Change Data Capture**, hover over **Change Data Capture (preview)** until you see 3 dots appear. Select on the **Change Data Capture (preview) Actions**.
+1. Hover over **Change Data Capture (preview)** until three dots appear. Then select **Change Data Capture (preview) Actions**.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-101.png" alt-text="Screenshot of Change Data Capture (preview) Actions after hovering on the new top-level artifact." lightbox="media/adf-cdc/change-data-capture-resource-101.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-101.png" alt-text="Screenshot of the button for change data capture actions appearing over the new top-level artifact." lightbox="media/adf-cdc/change-data-capture-resource-101.png":::
-3. Select **New CDC (preview)**. This opens a flyout to begin the guided process.
+1. Select **New CDC (preview)**. This step opens a flyout to begin the guided process.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-102.png" alt-text="Screenshot of a list of Change Data Capture actions." lightbox="media/adf-cdc/change-data-capture-resource-102.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-102.png" alt-text="Screenshot of a list of change data capture actions." lightbox="media/adf-cdc/change-data-capture-resource-102.png":::
-4. You are prompted to name your CDC resource. By default, the name is set to ΓÇ£adfcdcΓÇ¥ and continue to increment up by 1. You can replace this default name with your own.
+1. You're prompted to name your CDC resource. By default, the name is "adfcdc" with a number that increments by 1. You can replace this default name with a name that you choose.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-103.png" alt-text="Screenshot of the text box to update the name of the resource.":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-103.png" alt-text="Screenshot of the text box to update the name of a resource.":::
-5. Use the drop-down selection list to choose your data source. For this tutorial, we use **Azure SQL Database**.
+1. Use the dropdown list to choose your data source. For this article, select **Azure SQL Database**.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-104.png" alt-text="Screenshot of the guided process flyout with source options in a dropdown list.":::
+
+1. You're prompted to select a linked service. Create a new linked service or select an existing one.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-105.png" alt-text="Screenshot of the box to choose or create a linked service.":::
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-104.png" alt-text="Screenshot of the guided process flyout with source options in a drop-down selection menu.":::
+1. After you select a linked service, you're prompted to select source tables. Use the checkboxes to select the source tables, and then select the **Incremental column** value by using the dropdown list.
-6. You will then be prompted to select a linked service. Create a new linked service or select an existing one.
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-106.png" alt-text="Screenshot that shows selection of a source table and an incremental column.":::
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-105.png" alt-text="Screenshot of the selection box to choose or create a new linked service.":::
+ The pane lists only tables that have supported incremental column data types.
-7. Once the linked service is selected, you will be prompted for selection of the source table. Use the checkbox to select the source table(s) then select the **Incremental column** using the drop-down selection.
+ > [!NOTE]
+ > To enable CDC with schema evolution in an Azure SQL Database source, choose tables based on watermark columns rather than tables that are native SQL CDC enabled.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-106.png" alt-text="Screenshot of the selection box to choose source table(s) and selection of incremental column.":::
+1. After you select the source tables, select **Continue** to set your data target.
-> [!NOTE]
-> Only table(s) with supported incremental column data types are listed here.
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-107.png" alt-text="Screenshot of the Continue button in the guided process to select a data target.":::
-> [!NOTE]
-> To enable Change Data Capture (CDC) with schema evolution in SQL Azure Database source, we should choose watermark column-based tables rather than native SQL CDC enabled tables.
+1. Select a **Target type** value by using the dropdown list. For this article, select **Delta**.
-8. Once youΓÇÖve selected the source table(s), select **Continue** to set your data target.
-
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-107.png" alt-text="Screenshot of the continue button in the guided process to proceed to select data targets.":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-108.png" alt-text="Screenshot of a dropdown menu of all data target types.":::
-9. Then, select a **Target type** using the drop-down selection. For this tutorial, we select **Delta**.
+1. You're prompted to select a linked service. Create a new linked service or select an existing one.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-108.png" alt-text="Screenshot of a drop-down selection menu of all data target types.":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-109.png" alt-text="Screenshot of the box to choose or create a linked service to your data target.":::
-10. You are prompted to select a linked service. Create a new linked service or select an existing one.
-
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-109.png" alt-text="Screenshot of the selection box to choose or create a new linked service to your data target.":::
+1. Select your target data folder. You can use either:
-11. Use the **Browse** button to select your target data folder.
-
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-110.png" alt-text="Screenshot of a folder icon to browse for a folder path.":::
+ * The **Browse** button under **Target base path**, which helps you automatically populate the browse path for all the new tables selected for a source.
+ * The **Browse** button outside to individually select the folder path.
-> [!NOTE]
-> You can either use **Browse** button under Target base path which helps you to auto-populate the browse path for all the new table(s) selected for source (or) use **Browse** button outside to individually select the folder path.
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-110.png" alt-text="Screenshot of a folder icon to browse for a folder path.":::
-12. Once youΓÇÖve selected a folder path, select **Continue** button.
+1. After you select a folder path, select the **Continue** button.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-111.png" alt-text="Screenshot of the continue button in the guided process to proceed to next step.":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-111.png" alt-text="Screenshot of the Continue button in the guided process to proceed to the next step.":::
-13. You automatically land in a new change data capture tab, where you can configure your new resource.
+1. A new tab for capturing change data appears. This tab is the CDC studio, where you can configure your new resource.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-112.png" alt-text="Screenshot of the change data capture studio." lightbox="media/adf-cdc/change-data-capture-resource-112.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-112.png" alt-text="Screenshot of the change data capture studio." lightbox="media/adf-cdc/change-data-capture-resource-112.png":::
-14. A new mapping will automatically be created for you. You can update the **Source** and **Target** selections for your mapping by using the drop-down selection lists.
+ A new mapping is automatically created for you. You can update the **Source Table** and **Target Table** selections for your mapping by using the dropdown lists.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-113.png" alt-text="Screenshot of the source to target mapping in the change data capture studio." lightbox="media/adf-cdc/change-data-capture-resource-113.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-113.png" alt-text="Screenshot of the source-to-target mapping in the change data capture studio." lightbox="media/adf-cdc/change-data-capture-resource-113.png":::
-15. Once youΓÇÖve selected your tables, you should see that their columns are auto mapped by default with the **Auto map** toggle on. Auto map automatically maps the columns by name in the sink, picks up new column changes when source schema evolves and flows this to the supported sink types.
+1. After you select your tables, their columns are mapped by default with the **Auto map** toggle turned on. **Auto map** automatically maps the columns by name in the sink, picks up new column changes when the source schema evolves, and flows this information to the supported sink types.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-114.png" alt-text="Screenshot of default Auto map toggle set to on." lightbox="media/adf-cdc/change-data-capture-resource-114.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-114.png" alt-text="Screenshot of the toggle for automatic mapping turned on." lightbox="media/adf-cdc/change-data-capture-resource-114.png":::
-> [!NOTE]
-> Schema evolution works with Auto map toggle set to on only. If you want to know how to edit column mappings or include transformations, please refer [Capture changed data with a change data capture resource](how-to-change-data-capture-resource.md)
+ > [!NOTE]
+ > Schema evolution works only when the **Auto map** toggle is turned on. To learn how to edit column mappings or include transformations, see [Capture changed data with a change data capture resource](how-to-change-data-capture-resource.md).
-16. You can click the **Keys** link and select the Keys column to be used for tracking the delete operations.
+1. Select the **Keys** link, and then select the **Keys** column to be used for tracking the delete operations.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-115.png" alt-text="Screenshot of Keys link to enable Keys column selection." lightbox="media/adf-cdc/change-data-capture-resource-115.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-115.png" alt-text="Screenshot of the link to enable Keys column selection." lightbox="media/adf-cdc/change-data-capture-resource-115.png":::
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-116.png" alt-text="Screenshot of selecting a Keys column for the selected source.":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-116.png" alt-text="Screenshot of selecting a Keys column for the selected source.":::
-17. Once your mappings are complete, set your CDC latency using the **Set Latency** button.
+1. After your mappings are complete, set your CDC latency by using the **Set Latency** button.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-117.png" alt-text="Screenshot of the set frequency button at the top of the canvas." lightbox="media/adf-cdc/change-data-capture-resource-117.png":::
-
-18. Select the latency of your CDC and select **Apply** to make the changes. By default, it is set to **15 minutes**. For this tutorial, we select the **Real-time** latency. Real-time latency will continuously keep picking up changes in your source data in a less than 1-minute interval.
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-117.png" alt-text="Screenshot of the Set Latency button at the top of the canvas." lightbox="media/adf-cdc/change-data-capture-resource-117.png":::
- For other latencies, say if you select 15 minutes, every 15 minutes, your change data capture will process your source data and pick up any changed data since the last processed time.
+1. Select the latency of your CDC, and then select **Apply** to make the changes.
+ By default, latency is set to **15 minutes**. The example in this article uses the **Real-time** option for latency. Real-time latency continuously picks up changes in your source data in intervals of less than 1 minute.
-19. Once everything has been finalized, select the **Publish All** to publish your changes.
+ For other latencies (for example, if you select 15 minutes), your change data capture will process your source data and pick up any changed data since the last processed time.
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-118.png" alt-text="Screenshot of the options for setting latency.":::
-> [!NOTE]
-> If you do not publish your changes, you will not be able to start your CDC resource. The start button will be greyed out.
+1. After you finish configuring your CDC, select **Publish all** to publish your changes.
-20. Select **Start** to start running your **Change Data Capture**.
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-119.png" alt-text="Screenshot of the publish button at the top of the canvas." lightbox="media/adf-cdc/change-data-capture-resource-119.png":::
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-120.png" alt-text="Screenshot of the start button at the top of the canvas." lightbox="media/adf-cdc/change-data-capture-resource-120.png":::
+ > [!NOTE]
+ > If you don't publish your changes, you won't be able to start your CDC resource. The **Start** button in the next step will be unavailable.
-21. Using monitoring page, you can see how many changes (insert/update/delete) were read and written and other diagnostic information.
+1. Select **Start** to start running your change data capture.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-120.png" alt-text="Screenshot of the Start button at the top of the canvas." lightbox="media/adf-cdc/change-data-capture-resource-120.png":::
+
+Now that your change data capture is running, you can:
+
+* Use the monitoring page to see how many changes (insert, update, or delete) were read and written, along with other diagnostic information.
:::image type="content" source="media/adf-cdc/change-data-capture-resource-121.png" alt-text="Screenshot of the monitoring page of a selected change data capture." lightbox="media/adf-cdc/change-data-capture-resource-121.png":::
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-122.png" alt-text="Screenshot of the monitoring page of a selected change data capture with detailed view." lightbox="media/adf-cdc/change-data-capture-resource-122.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-122.png" alt-text="Screenshot of the monitoring page of a selected change data capture with a detailed view." lightbox="media/adf-cdc/change-data-capture-resource-122.png":::
-22. You can validate that the change data has landed onto the Delta Lake stored in Azure Data Lake Storage (ADLS) Gen2 in delta format
+* Validate that the change data arrived in Delta Lake stored in Azure Data Lake Storage Gen2, in Delta format.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-123.png" alt-text="Screenshot of the target delta folder." lightbox="media/adf-cdc/change-data-capture-resource-123.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-123.png" alt-text="Screenshot of a target Delta folder." lightbox="media/adf-cdc/change-data-capture-resource-123.png":::
-23. You can validate schema of the change data that has landed.
+* Validate the schema of the change data that arrived. (A query sketch that checks both the data and the schema follows this list.)
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-124.png" alt-text="Screenshot of actual delta file." lightbox="media/adf-cdc/change-data-capture-resource-124.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-124.png" alt-text="Screenshot of a Delta file." lightbox="media/adf-cdc/change-data-capture-resource-124.png":::
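If you prefer to check the Delta output with a query instead of browsing the storage account, the following sketch shows one way to inspect both the data and its schema. It assumes you have an Azure Synapse Analytics serverless SQL pool with access to the storage account; the account, container, and folder names are placeholders, and a serverless SQL pool isn't otherwise required for this walkthrough.

```sql
-- Hypothetical query from a Synapse serverless SQL pool.
-- Replace the placeholder URL with the Delta folder that the CDC resource writes to.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://<storage-account>.dfs.core.windows.net/<container>/<delta-folder>/',
    FORMAT = 'DELTA'
) AS changed_rows;
```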
-## Make dynamic schema changes at source
+## Make dynamic schema-level changes to the source tables
-1. Now you can proceed to make schema level changes to the source tables. For this tutorial, we will use the Alter table T-SQL to add a new column "PersonalEmail" to the source table.
+1. Add a new **PersonalEmail** column to the source table by using an `ALTER TABLE` T-SQL statement, as shown in the following example.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-125.png" alt-text="Screenshot of Alter command in Azure Data Studio.":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-125.png" alt-text="Screenshot of the ALTER command in Azure Data Studio.":::
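   The following T-SQL is a minimal sketch of such a statement. The table name and data type are assumptions for illustration; only the **PersonalEmail** column name comes from this walkthrough.

   ```sql
   -- Hypothetical source table name; adjust the table name and data type to match your schema.
   ALTER TABLE dbo.Persons
   ADD PersonalEmail NVARCHAR(255) NULL;
   ```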
-2. You can validate that the new column "PersonalEmail" has been added to the existing table.
+1. Validate that the new **PersonalEmail** column appears in the existing table. A query sketch follows the screenshot.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-126.png" alt-text="Screenshot of the new table design.":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-126.png" alt-text="Screenshot of a new table design with a column added for personal email.":::
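   The following query is a minimal sketch for confirming the change without the table designer. It assumes the same hypothetical table name; `INFORMATION_SCHEMA.COLUMNS` is standard in Azure SQL Database and SQL Server.

   ```sql
   -- List the columns of the (hypothetical) source table to confirm that PersonalEmail now appears.
   SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE
   FROM INFORMATION_SCHEMA.COLUMNS
   WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'Persons'
   ORDER BY ORDINAL_POSITION;
   ```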
-## Validate schema changes at target Delta
+## Validate schema changes at the Delta sink
-1. Validate change data with schema changes have landed at the Delta sink. For this tutorial, you can see the new column "PersonalEmail" has been added to the sink.
+Verify that the new **PersonalEmail** column appears in the Delta sink. Its presence confirms that change data with schema changes arrived at the target.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-128.png" alt-text="Screenshot of actual Delta file with schema change." lightbox="media/adf-cdc/change-data-capture-resource-128.png":::
## Next steps
-- [Learn more about the change data capture resource](concepts-change-data-capture-resource.md)
-
+
+* [Learn more about the CDC resource](concepts-change-data-capture-resource.md)
data-factory How To Change Data Capture Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-change-data-capture-resource.md
Title: Capture changed data with a change data capture resource
-description: This tutorial provides step-by-step instructions on how to capture changed data from ADLS Gen2 to Azure SQL DB using a Change data capture resource.
+ Title: Capture changed data by using a change data capture resource
+description: Get step-by-step instructions on how to capture changed data from Azure Data Lake Storage Gen2 to Azure SQL Database by using a change data capture (CDC) resource.
Last updated 06/06/2023
-# How to capture changed data from ADLS Gen2 to Azure SQL DB using a Change Data Capture (CDC) resource
+# Capture changed data from Azure Data Lake Storage Gen2 to Azure SQL Database by using a change data capture resource
+ [!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-In this tutorial, you use the Azure Data Factory user interface (UI) to create a new Change Data Capture (CDC) resource that picks up changed data from an Azure Data Lake Storage (ADLS) Gen2 source to an Azure SQL Database in real-time. The configuration pattern in this tutorial can be modified and expanded upon.
+In this article, you use the Azure Data Factory user interface to create a change data capture (CDC) resource. The resource picks up changed data from an Azure Data Lake Storage Gen2 source and adds it to Azure SQL Database in real time.
+
+In this article, you learn how to:
+
+* Create a CDC resource.
+* Monitor CDC activity.
-In this tutorial, you follow these steps:
-* Create a Change Data Capture resource.
-* Monitor Change Data Capture activity.
+You can modify and expand the configuration pattern in this article.
## Prerequisites
-* **Azure subscription.** If you don't have an Azure subscription, create a free Azure account before you begin.
-* **Azure storage account.** You use ADLS storage as a source data store. If you don't have a storage account, see Create an Azure storage account for steps to create one.
-* **Azure SQL Database.** You use Azure SQL DB as a target data store. If you don't have an Azure SQL DB, create one in the Azure portal first before continuing the tutorial.
+Before you begin the procedures in this article, make sure that you have these resources:
+* **Azure subscription**. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free).
+* **Storage account**. You use Azure Data Lake Storage Gen2 as a source data store. If you don't have a storage account, see [Create a storage account](/azure/storage/common/storage-account-create) for the steps to create one.
+* **SQL database**. You use Azure SQL Database as a target data store. If you don't have a SQL database, create one in the Azure portal.
-## Create a change data capture artifact
+## Create a CDC artifact
-1. Navigate to the **Author** blade in your data factory. You see a new top-level artifact below **Pipelines** called **Change Data Capture (preview)**.
+1. Go to the **Author** pane in your data factory. Below **Pipelines**, a new top-level artifact called **Change Data Capture (preview)** appears.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-61.png" alt-text="Screenshot of new top level artifact shown under Factory resources panel." lightbox="media/adf-cdc/change-data-capture-resource-61.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-61.png" alt-text="Screenshot of a new top-level artifact for change data capture on the Factory Resources pane." lightbox="media/adf-cdc/change-data-capture-resource-61.png":::
-2. To create a new **Change Data Capture**, hover over **Change Data Capture (preview)** until you see 3 dots appear. Select on the **Change Data Capture (preview) Actions**.
+1. Hover over **Change Data Capture (preview)** until three dots appear. Then select **Change Data Capture (preview) Actions**.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-62.png" alt-text="Screenshot of Change Data Capture (preview) Actions after hovering on the new top-level artifact." lightbox="media/adf-cdc/change-data-capture-resource-62.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-62.png" alt-text="Screenshot of the button for change data capture actions appearing over the new top-level artifact." lightbox="media/adf-cdc/change-data-capture-resource-62.png":::
-3. Select **New CDC (preview)**. This opens a flyout to begin the guided process.
+1. Select **New CDC (preview)**. This step opens a flyout to begin the guided process.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-63.png" alt-text="Screenshot of a list of Change Data Capture actions." lightbox="media/adf-cdc/change-data-capture-resource-63.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-63.png" alt-text="Screenshot of a list of change data capture actions." lightbox="media/adf-cdc/change-data-capture-resource-63.png":::
-4. You are prompted to name your CDC resource. By default, the name is set to "adfcdc" and continue to increment up by 1. You can replace this default name with your own.
+1. You're prompted to name your CDC resource. By default, the name is "adfcdc" with a number that increments by 1. You can replace this default name with a name that you choose.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-64.png" alt-text="Screenshot of the text box to update the name of the resource.":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-64.png" alt-text="Screenshot of the text box to update the name of a resource.":::
-5. Use the drop-down selection list to choose your data source. For this tutorial, we use **DelimitedText**.
+1. Use the dropdown list to choose your data source. For this article, select **DelimitedText**.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-65.png" alt-text="Screenshot of the guided process flyout with source options in a drop-down selection menu.":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-65.png" alt-text="Screenshot of the guided process flyout with source options in a dropdown list.":::
-6. You will then be prompted to select a linked service. Create a new linked service or select an existing one.
+1. You're prompted to select a linked service. Create a new linked service or select an existing one.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-93.png" alt-text="Screenshot of the selection box to choose or create a new linked service.":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-93.png" alt-text="Screenshot of the box to choose or create a linked service.":::
-7. Use the **Source settings** to optionally set advanced source configurations which includes selection of column or row delimiter and many more source settings.
+1. Use the **Source settings** area to optionally set advanced source configurations, including column and row delimiters.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-94.png" alt-text="Screenshot of Advanced Source settings to set delimiters":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-94.png" alt-text="Screenshot of advanced source settings to set delimiters.":::
-> [!NOTE]
-> You can choose to manually edit these source settings, but if you don't, they will be set to the defaults.
+ If you don't manually edit these source settings, they're set to the defaults.
-8. Use the **Browse** button to select your source data folder.
+1. Use the **Browse** button to select your source data folder.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-95.png" alt-text="Screenshot of a folder icon to browse for a folder path.":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-95.png" alt-text="Screenshot of a folder icon to browse for a folder path.":::
-9. Once you've selected a folder path, select **Continue** to set your data target.
+1. After you select a folder path, select **Continue** to set your data target.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-96.png" alt-text="Screenshot of the continue button in the guided process to proceed to select data targets.":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-96.png" alt-text="Screenshot of the Continue button in the guided process to select data targets.":::
-> [!NOTE]
-> You can choose to add multiple source folders with the **+** button. The other sources must also use the same linked service that you've already selected.
+ You can choose to add multiple source folders by using the plus (**+**) button. The other sources must also use the same linked service that you already selected.
-10. Then, select a **Target type** using the drop-down selection. For this tutorial, we select **Azure SQL Database**.
+1. Select a **Target type** value by using the dropdown list. For this article, select **Azure SQL Database**.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-69.png" alt-text="Screenshot of a drop-down selection menu of all data target types.":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-69.png" alt-text="Screenshot of a dropdown menu of all data target types.":::
-11. You are prompted to select a linked service. Create a new linked service or select an existing one.
-
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-70.png" alt-text="Screenshot of the selection box to choose or create a new linked service to your data target.":::
-
-12. Create new **Target table(s)** or select an existing **Target table(s)**. Under **Existing entities** use the checkbox to select an existing Target table(s) or Under **New entities** select **Edit new tables** to create new Target table(s). The **Preview** button allows you to view your table data.
+1. You're prompted to select a linked service. Create a new linked service or select an existing one.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-71.png" alt-text="Screenshot of the existing entities to choose tables for your target.":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-70.png" alt-text="Screenshot of the box to choose or create a linked service to your data target.":::
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-72.png" alt-text="Screenshot of the new entities tab to create new tables for your target.":::
-
-> [!NOTE]
-> If there are existing table(s) at the Target with matching name(s), they will be selected by default under **Existing entities**. If not, new tables with matching name(s) are created under **New entities**. Additionally, you can edit new tables with **Edit new tables** button.
+1. For **Target tables**, you can create a new target table or select an existing one:
+
+ * To create a target table, select the **New entities** tab, and then select **Edit new tables**.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-72.png" alt-text="Screenshot of the tab to create new tables for your target.":::
+
+ * To select an existing table, select the **Existing entities** tab, and then use the checkbox to choose a table. Use the **Preview** button to view your table data.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-71.png" alt-text="Screenshot of the tab to choose tables for your target.":::
+
+ If existing tables at the target have matching names, they're selected by default under **Existing entities**. If not, new tables with matching names are created under **New entities**. Additionally, you can edit new tables by using the **Edit new tables** button.
+
+1. You can use the checkboxes to choose multiple target tables from your SQL database. After you finish choosing target tables, select **Continue**.
-13. Select **Continue** when you have finalized your selection(s).
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-73.png" alt-text="Screenshot of the Continue button in the guided process to proceed to the next step.":::
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-73.png" alt-text="Screenshot of the continue button in the guided process to proceed to the next step.":::
+1. A new tab for capturing change data appears. This tab is the CDC studio, where you can configure your new resource.
-> [!NOTE]
-> You can choose multiple target tables from your Azure SQL DB. Use the check boxes to select all targets.
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-74.png" alt-text="Screenshot of the change data capture studio." lightbox="media/adf-cdc/change-data-capture-resource-74.png":::
-14. You automatically land in a new change data capture tab, where you can configure your new resource.
+ A new mapping is automatically created for you. You can update the **Source Table** and **Target Table** selections for your mapping by using the dropdown lists.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-74.png" alt-text="Screenshot of the change data capture studio." lightbox="media/adf-cdc/change-data-capture-resource-74.png":::
-
-15. A new mapping will automatically be created for you. You can update the **Source** and **Target** selections for your mapping by using the drop-down selection lists.
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-75.png" alt-text="Screenshot of the source-to-target mapping in the change data capture studio." lightbox="media/adf-cdc/change-data-capture-resource-75.png":::
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-75.png" alt-text="Screenshot of the source to target mapping in the change data capture studio." lightbox="media/adf-cdc/change-data-capture-resource-75.png":::
+1. After you select your tables, their columns are mapped by default with the **Auto map** toggle turned on. **Auto map** automatically maps the columns by name in the sink, picks up new column changes when the source schema evolves, and flows this information to the supported sink types.
-16. Once you've selected your tables, you should see that their columns are auto mapped by default with the **Auto map** toggle on. Auto map automatically maps the columns by name in the sink, picks up new column changes when source schema evolves and flows this to the supported sink types. If you would want to retain Auto map and not change any column mappings, proceed to **Step 19** directly.
+ If you want to use **Auto map** and not change any column mappings, go directly to step 18.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-76.png" alt-text="Screenshot of default Auto map toggle set to on." lightbox="media/adf-cdc/change-data-capture-resource-76.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-76.png" alt-text="Screenshot of the toggle for automatic mapping turned on." lightbox="media/adf-cdc/change-data-capture-resource-76.png":::
-17. If you would want to enable the column mapping(s), select the mapping(s) and switch the Auto map toggle off, and then select the Column mappings button to view the column mappings.
+ If you want to enable the column mappings, select the mappings and turn off the **Auto map** toggle. Then, select the **Column mappings** button to view the mappings.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-77.png" alt-text="Screenshot of mapping selection, Auto map toggle set to off and column mapping button." lightbox="media/adf-cdc/change-data-capture-resource-77.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-77.png" alt-text="Screenshot of mapping selection, the toggle for automatic mapping turned off, and the button for column mappings." lightbox="media/adf-cdc/change-data-capture-resource-77.png":::
-> [!NOTE]
-> You can switch back to the default Auto mapping anytime by switching the **Auto map** toggle on.
+ You can switch back to automatic mapping anytime by turning on the **Auto map** toggle.
-18. Here you can view your column mappings. Use the drop-down lists to edit your column mappings for Mapping method, Source column, and Target column.
+1. View your column mappings. Use the dropdown lists to edit your column mappings for **Mapping method**, **Source column**, and **Target column**.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-78.png" alt-text="Screenshot of the column mapping page to allow users to editing column mappings." lightbox="media/adf-cdc/change-data-capture-resource-78.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-78.png" alt-text="Screenshot of the page for editing column mappings." lightbox="media/adf-cdc/change-data-capture-resource-78.png":::
-You can add more column mappings using the **New mapping** button. Use the drop-down lists to select the **Mapping method**, **Source column**, and **Target** column. Also, if you want to track the delete operation for supported sink types, you can select the **Keys** column. You can select **Data Preview - Refresh** button to visualize how the data looks at the target.
+ From this page, you can:
+
+ * Add more column mappings by using the **New mapping** button. Use the dropdown lists to make selections for **Mapping method**, **Source column**, and **Target column**.
+ * Select the **Keys** column if you want to track the delete operation for supported sink types.
+ * Select the **Refresh** button under **Data preview** to visualize how the data looks at the target.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-79.png" alt-text="Screenshot of the Add new mapping icon to add new column mappings, drop down with mapping methods, select Keys column and Data preview refresh button for allowing users to visualize data at target." lightbox="media/adf-cdc/change-data-capture-resource-79.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-79.png" alt-text="Screenshot of the button for adding column mappings, the dropdown list for mapping methods, the Keys column, and the Refresh button." lightbox="media/adf-cdc/change-data-capture-resource-79.png":::
-19. When your mapping is complete, select the back arrow to return to the main CDC canvas.
+1. When your mapping is complete, select the arrow button to return to the main CDC canvas.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-80.png" alt-text="Screenshot of back button to go back to table mapping page." lightbox="media/adf-cdc/change-data-capture-resource-80.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-80.png" alt-text="Screenshot of the button to go back to the table mapping page." lightbox="media/adf-cdc/change-data-capture-resource-80.png":::
-20. You can add more source to target mappings in one CDC artifact. Use the Edit button to add more data sources and targets. Then, select **New mapping** and use the drop-down lists to set a new source and target mapping. Also Auto map can be set on or off for each of these mappings independently.
+1. You can add more source-to-target mappings in one CDC artifact. Use the **Edit** button to add more data sources and targets. Then, select **New mapping** and use the dropdown lists to set a new source and target. You can turn **Auto map** on or off for each of these mappings independently.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-81.png" alt-text="Screenshot of the edit button to add new sources and new mapping button to set a new source to target mapping." lightbox="media/adf-cdc/change-data-capture-resource-81.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-81.png" alt-text="Screenshot of the button to add new sources and the button to set a new source-to-target mapping." lightbox="media/adf-cdc/change-data-capture-resource-81.png":::
-21. Once your mappings are complete, set your CDC latency using the **Set Latency** button.
+1. After your mappings are complete, set your CDC latency by using the **Set Latency** button.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-82.png" alt-text="Screenshot of the set frequency button at the top of the canvas." lightbox="media/adf-cdc/change-data-capture-resource-82.png":::
-
-22. Select the latency of your CDC and select **Apply** to make the changes. By default, it is set to **15 minutes**. For this tutorial, we select the **Real-time** latency. Real-time latency will continuously keep picking up changes in your source data in a less than 1-minute interval.
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-82.png" alt-text="Screenshot of the Set Latency button at the top of the canvas." lightbox="media/adf-cdc/change-data-capture-resource-82.png":::
+
+1. Select the latency of your CDC, and then select **Apply** to make the changes.
- For other latencies, say if you select 15 minutes, every 15 minutes, your change data capture will process your source data and pick up any changed data since the last processed time.
+   By default, latency is set to **15 minutes**. The example in this article uses the **Real-time** option for latency. Real-time latency continuously picks up changes in your source data in intervals of less than 1 minute.
+   For other latencies (for example, 15 minutes), your change data capture processes your source data at that interval and picks up any changed data since the last processed time.
-> [!NOTE]
-> Support for **streaming data integration** (EventHub & Kafka data sources) is coming soon. When available the latency will be set to Real-time by default.
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-83.png" alt-text="Screenshot of the options for setting latency.":::
-23. Once everything has been finalized, select the **Publish All** to publish your changes.
+ > [!NOTE]
+ > If support is extended to streaming data integration (Azure Event Hubs and Kafka data sources), the latency will be set to **Real-time** by default.
+1. After you finish configuring your CDC, select **Publish all** to publish your changes.
-> [!NOTE]
-> If you do not publish your changes, you will not be able to start your CDC resource. The start button will be greyed out.
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-84.png" alt-text="Screenshot of the publish button at the top of the canvas." lightbox="media/adf-cdc/change-data-capture-resource-84.png":::
-24. Select **Start** to start running your **Change Data Capture**.
+ > [!NOTE]
+ > If you don't publish your changes, you won't be able to start your CDC resource. The **Start** button in the next step will be unavailable.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-85.png" alt-text="Screenshot of the start button at the top of the canvas." lightbox="media/adf-cdc/change-data-capture-resource-85.png":::
-
+1. Select **Start** to start running your change data capture.
-## Monitor your Change data capture
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-85.png" alt-text="Screenshot of the Start button at the top of the canvas." lightbox="media/adf-cdc/change-data-capture-resource-85.png":::
-1. To monitor your change data capture, navigate to the **Monitor** blade or select the monitoring icon from the CDC designer.
+## Monitor your change data capture
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-86.png" alt-text="Screenshot of the monitoring blade.":::
-
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-87.png" alt-text="Screenshot of the monitoring button at the top of the CDC canvas." lightbox="media/adf-cdc/change-data-capture-resource-87.png":::
+1. Open the **Monitor** pane by using either of these methods:
-2. Select **Change Data Capture (preview)** to view your CDC resources.
+ * Select **Monitor** in the Azure portal.
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-88.png" alt-text="Screenshot of the Change Data Capture monitoring section.":::
-
-3. Here you can see the **Source**, **Target**, **Status**, and **Last processed** time of your change data capture.
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-86.png" alt-text="Screenshot of the Monitor button in the Azure portal.":::
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-89.png" alt-text="Screenshot of an overview of the change data capture monitoring page." lightbox="media/adf-cdc/change-data-capture-resource-89.png":::
+ * Select the monitoring icon from the CDC designer.
-4. Select the name of your CDC to see more details. You can see how many changes (insert/update/delete) were read and written and other diagnostic information.
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-87.png" alt-text="Screenshot of the monitoring icon at the top of the CDC canvas." lightbox="media/adf-cdc/change-data-capture-resource-87.png":::
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-90.png" alt-text="Screenshot of the detailed monitoring of a selected change data capture." lightbox="media/adf-cdc/change-data-capture-resource-90.png":::
+1. Select **Change Data Capture (preview)** to view your CDC resources.
-> [!NOTE]
-> If you have multiple mappings set up in your Change data capture, each mapping will show as a different color. Click on the bar to see specific details for each mapping or use the Diagnostics at the bottom of the screen.
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-88.png" alt-text="Screenshot of the Change Data Capture button.":::
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-91.png" alt-text="Screenshot of the detailed monitoring page of a change data capture with multiple sources to target mappings." lightbox="media/adf-cdc/change-data-capture-resource-91.png":::
-
- :::image type="content" source="media/adf-cdc/change-data-capture-resource-92.png" alt-text="Screenshot of a detailed breakdown of each mapping in the change data capture artifact." lightbox="media/adf-cdc/change-data-capture-resource-92.png":::
+ The **Change Data Capture** pane shows the **Source**, **Target**, **Status**, and **Last processed** information for your change data capture.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-89.png" alt-text="Screenshot of an overview of the change data capture monitoring page." lightbox="media/adf-cdc/change-data-capture-resource-89.png":::
+
+1. Select the name of your CDC to see more details. You can see how many changes (insert, update, or delete) were read and written, along with other diagnostic information.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-90.png" alt-text="Screenshot of the detailed monitoring of a selected change data capture." lightbox="media/adf-cdc/change-data-capture-resource-90.png":::
+
+ If you set up multiple mappings in your change data capture, each mapping appears as a different color. Select the bar to see specific details for each mapping, or use the diagnostics information at the bottom of the pane.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-91.png" alt-text="Screenshot of the detailed monitoring information for a change data capture with multiple source-to-target mappings." lightbox="media/adf-cdc/change-data-capture-resource-91.png":::
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-92.png" alt-text="Screenshot of a detailed breakdown of each mapping in a change data capture artifact." lightbox="media/adf-cdc/change-data-capture-resource-92.png":::
## Next steps
-- [Learn more about the change data capture resource](concepts-change-data-capture-resource.md)
+
+* [Learn more about the CDC resource](concepts-change-data-capture-resource.md)
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/managed-virtual-network-private-endpoint.md
Similar to the copy, you have the ability to tailor the compute size and TTL dur
:::image type="content" source="./media/managed-vnet/time-to-live-configuration.png" alt-text="Screenshot that shows the TTL configuration.":::
+You can use the following table as a reference to determine the optimal number of nodes for running pipeline activities and external activities. A brief sizing example follows the table.
+
+| Activity Type | Capacity |
+| -------- | -------- |
+| Pipeline activity | Approximately 50 per node <br> The Script activity, and the Lookup activity with SQL Always Encrypted, tend to consume more resources than other pipeline activities; for these, plan on around 10 per node |
+| External activity | Approximately 800 per node |
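As an illustrative estimate only: a workload that peaks at about 120 concurrent pipeline activities and 1,000 concurrent external activities would need roughly 120 / 50 ≈ 3 nodes for the pipeline activities and 1,000 / 800 ≈ 2 nodes for the external activities, rounding up in both cases.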
+ ### Comparison of different TTL
The column **Using private endpoint** is always shown as blank even if you creat
:::image type="content" source="./media/managed-vnet/akv-pe.png" alt-text="Screenshot that shows a private endpoint for Key Vault.":::
-### Fully Qualified Domain Name ( FQDN ) of Azure HDInsight
+### Fully Qualified Domain Name (FQDN) of Azure HDInsight
If you created a custom private link service, the FQDN should end with **azurehdinsight.net** and shouldn't include the leading *privatelink* in the domain name when you create a private endpoint. If you do use *privatelink* in the domain name, make sure it's valid and that you can resolve it.
data-factory Pricing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-concepts.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Understanding Azure Data Factory pricing through examples
data-factory Pricing Examples Copy Transform Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-copy-transform-azure-databricks.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Pricing example: Copy data and transform with Azure Databricks hourly
data-factory Pricing Examples Copy Transform Dynamic Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-copy-transform-dynamic-parameters.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Copy data and transform with dynamic parameters hourly
data-factory Pricing Examples Data Integration Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-data-integration-managed-vnet.md
Last updated 06/12/2023
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-In this scenario, you want to delete original files on Azure Blob Storage and copy data from Azure SQL Database to Azure Blob Storage on an hourly schedule for 8 hours per day. We'll calculate the price for 30 days. You'll do this execution twice on different pipelines for each run. The execution time of these two pipelines is overlapping.
+In this scenario, you want to delete original files on Azure Blob Storage and copy data from Azure SQL Database to Azure Blob Storage on an hourly schedule for 8 hours per day. We calculate the price for 30 days. You do this execution twice on different pipelines for each run. The execution time of these two pipelines is overlapping.
-The prices used in this example below are hypothetical and aren't intended to imply exact actual pricing. Read/write and monitoring costs aren't shown since they're typically negligible and won't impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
+The prices used in this example are hypothetical and aren't intended to imply exact actual pricing. Read/write and monitoring costs aren't shown since they're typically negligible and won't impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
Refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
To accomplish the scenario, you need to create two pipelines with the following
| **Operations** | **Types and Units** | | | | | Run Pipeline | 6 Activity runs **per execution** (2 for trigger runs, 4 for activity runs) = 1440, rounded up since the calculator only allows increments of 1000.|
-| Execute Delete Activity: pipeline execution time **per execution** = 7 min. If the Delete Activity execution in the first pipeline is from 10:00 AM UTC to 10:05 AM UTC and the Delete Activity execution in second pipeline is from 10:02 AM UTC to 10:07 AM UTC. | Total (7 min + 60 min) / 60 min * 30 monthly executions = 33.5 pipeline activity execution hours in Managed VNET. Pipeline activity supports up to 50 concurrent executions in Managed VNET. There's a 60 minutes Time To Live (TTL) for pipeline activity. |
+| Execute Delete Activity: pipeline execution time **per execution** = 7 min. If the Delete Activity execution in the first pipeline is from 10:00 AM UTC to 10:05 AM UTC and the Delete Activity execution in second pipeline is from 10:02 AM UTC to 10:07 AM UTC. | Total (7 min + 60 min) / 60 min * 30 monthly executions = 33.5 pipeline activity execution hours in Managed VNET. Pipeline activity supports up to 50 concurrent executions per node in Managed VNET. There's a 60 minutes Time To Live (TTL) for pipeline activity. |
| Copy Data Assumption: DIU execution time **per execution** = 10 min if the Copy execution in first pipeline is from 10:06 AM UTC to 10:15 AM UTC and the Copy Activity execution in second pipeline is from 10:08 AM UTC to 10:17 AM UTC.| [(10 min + 2 min (queue time charges up to 2 minutes)) / 60 min * 4 Azure Managed VNET Integration Runtime (default DIU setting = 4)] * 2 = 1.6 daily data movement activity execution hours in Managed VNET. For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) | ## Pricing calculator example
To accomplish the scenario, you need to create two pipelines with the following
- [Pricing example: Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md) - [Pricing example: Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md) - [Pricing example: Transform data in blob store with mapping data flows](pricing-examples-transform-mapping-data-flows.md)-- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
+- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
data-factory Pricing Examples Get Delta Data From Sap Ecc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-get-delta-data-from-sap-ecc.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows
data-factory Pricing Examples Mapping Data Flow Debug Workday https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-mapping-data-flow-debug-workday.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Pricing example: Using mapping data flow debug for a normal workday
data-factory Pricing Examples S3 To Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-s3-to-blob.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Pricing example: Copy data from AWS S3 to Azure Blob storage hourly
data-factory Pricing Examples Ssis On Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-ssis-on-azure-ssis-integration-runtime.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Pricing example: Run SSIS packages on Azure-SSIS integration runtime
data-factory Pricing Examples Transform Mapping Data Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-transform-mapping-data-flows.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Pricing example: Transform data in blob store with mapping data flows
data-factory Quickstart Create Data Factory Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-bicep.md
Previously updated : 08/19/2022 Last updated : 08/10/2023 # Quickstart: Create an Azure Data Factory using Bicep
data-factory Quickstart Learn Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-learn-modules.md
Previously updated : 09/14/2022 Last updated : 08/10/2023
data-factory Solution Template Bulk Copy With Control Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-bulk-copy-with-control-table.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Bulk copy from a database with a control table
data-factory Solution Template Copy Files Multiple Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-copy-files-multiple-containers.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Copy multiple folders with Azure Data Factory
data-factory Solution Template Copy New Files Last Modified Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-copy-new-files-last-modified-date.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Copy new and changed files by LastModifiedDate with Azure Data Factory
data-factory Solution Template Databricks Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-databricks-notebook.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Transformation with Azure Databricks
data-factory Solution Template Delta Copy With Control Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-delta-copy-with-control-table.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Delta copy from a database with a control table
data-factory Solution Template Extract Data From Pdf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-extract-data-from-pdf.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Extract data from PDF
data-factory Solution Template Move Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-move-files.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Move files with Azure Data Factory
data-factory Solution Template Synapse Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-synapse-notebook.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Call Synapse pipeline with a notebook activity
data-factory Ssis Azure Connect With Windows Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-azure-connect-with-windows-auth.md
Title: Access data stores and file shares with Windows authentication description: Learn how to configure SSIS catalog in Azure SQL Database and Azure-SSIS Integration Runtime in Azure Data Factory to run packages that access data stores and file shares with Windows authentication. Previously updated : 09/22/2022 Last updated : 08/10/2023
data-factory Ssis Integration Runtime Diagnose Connectivity Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-integration-runtime-diagnose-connectivity-faq.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Use the diagnose connectivity feature in the SSIS integration runtime
data-factory Ssis Integration Runtime Management Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-integration-runtime-management-troubleshoot.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Troubleshoot SSIS Integration Runtime management
data-factory Ssis Integration Runtime Ssis Activity Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-integration-runtime-ssis-activity-faq.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Troubleshoot package execution in the SSIS integration runtime
data-factory Store Credentials In Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/store-credentials-in-key-vault.md
Previously updated : 09/22/2022 Last updated : 08/10/2023
data-factory Supported File Formats And Compression Codecs Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/supported-file-formats-and-compression-codecs-legacy.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Supported file formats and compression codecs in Azure Data Factory and Synapse Analytics (legacy)
data-factory Supported File Formats And Compression Codecs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/supported-file-formats-and-compression-codecs.md
Previously updated : 09/22/2022 Last updated : 08/10/2023
data-factory Transform Data Databricks Jar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-databricks-jar.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Transform data by running a Jar activity in Azure Databricks
data-factory Transform Data Databricks Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-databricks-notebook.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Transform data by running a Databricks notebook
data-factory Transform Data Databricks Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-databricks-python.md
description: Learn how to process or transform data by running a Databricks Pyth
Previously updated : 09/22/2022 Last updated : 08/10/2023
data-factory Transform Data Machine Learning Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-machine-learning-service.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Execute Azure Machine Learning pipelines in Azure Data Factory and Synapse Analytics
data-factory Transform Data Using Custom Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-custom-activity.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Use custom activities in an Azure Data Factory or Azure Synapse Analytics pipeline
data-factory Transform Data Using Data Lake Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-data-lake-analytics.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Process data by running U-SQL scripts on Azure Data Lake Analytics with Azure Data Factory and Synapse Analytics
data-factory Transform Data Using Databricks Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-databricks-notebook.md
Previously updated : 04/04/2023 Last updated : 08/14/2023 # Run a Databricks notebook with the Databricks Notebook Activity in Azure Data Factory
In this section, you author a Databricks linked service. This linked service con
1. For **Databrick Workspace URL**, the information should be auto-populated.
- 1. For **Authentication type**, if you select **Access Token**, generate it from Azure Databricks workplace. You can find the steps [here](https://docs.databricks.com/api/latest/authentication.html#generate-token). For **Managed service identity** and **User Assigned Managed Identity**, grant **Contributor role** to both identities in Azure Databricks resource's *Access control* menu.
+   1. For **Authentication type**, if you select **Access Token**, generate it from the Azure Databricks workspace. You can find the steps [here](https://docs.databricks.com/administration-guide/access-control/tokens.html). For **Managed service identity** and **User Assigned Managed Identity**, grant the **Contributor** role to both identities in the Azure Databricks resource's *Access control* menu.
1. For **Cluster version**, select the version you want to use.
data-factory Transform Data Using Hadoop Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-hadoop-hive.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Transform data using Hadoop Hive activity in Azure Data Factory or Synapse Analytics
data-factory Transform Data Using Hadoop Map Reduce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-hadoop-map-reduce.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Transform data using Hadoop MapReduce activity in Azure Data Factory or Synapse Analytics
data-factory Transform Data Using Hadoop Pig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-hadoop-pig.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Transform data using Hadoop Pig activity in Azure Data Factory or Synapse Analytics
data-factory Transform Data Using Hadoop Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-hadoop-streaming.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Transform data using Hadoop Streaming activity in Azure Data Factory or Synapse Analytics
data-factory Transform Data Using Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-machine-learning.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Create a predictive pipeline using Machine Learning Studio (classic) with Azure Data Factory or Synapse Analytics
data-factory Transform Data Using Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-spark.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Transform data using Spark activity in Azure Data Factory and Synapse Analytics
data-factory Transform Data Using Stored Procedure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-stored-procedure.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Transform data by using the SQL Server Stored Procedure activity in Azure Data Factory or Synapse Analytics
data-factory Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data.md
Previously updated : 09/22/2022 Last updated : 08/10/2023 # Transform data in Azure Data Factory and Azure Synapse Analytics
data-factory Tutorial Bulk Copy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-bulk-copy-portal.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Copy multiple tables in bulk by using Azure Data Factory in the Azure portal
data-factory Tutorial Bulk Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-bulk-copy.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Copy multiple tables in bulk by using Azure Data Factory using PowerShell
data-factory Tutorial Control Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-control-flow.md
Previously updated : 09/28/2022 Last updated : 08/11/2023 # Branching and chaining activities in a Data Factory pipeline
data-factory Tutorial Copy Data Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-dot-net.md
Previously updated : 09/26/2022 Last updated : 08/10/2023
data-factory Tutorial Copy Data Portal Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-portal-private.md
Previously updated : 09/26/2022 Last updated : 08/10/2023
data-factory Tutorial Copy Data Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-portal.md
Previously updated : 09/26/2022 Last updated : 08/10/2023
data-factory Tutorial Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-tool.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Copy data from Azure Blob storage to a SQL Database by using the Copy Data tool
data-factory Tutorial Data Flow Adventure Works Retail Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow-adventure-works-retail-template.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # AdventureWorks template documentation
data-factory Tutorial Data Flow Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow-delta-lake.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Transform data in delta lake using mapping data flows
data-factory Tutorial Data Flow Dynamic Columns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow-dynamic-columns.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Dynamically set column names in data flows
data-factory Tutorial Data Flow Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow-private.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Transform data securely by using mapping data flow
data-factory Tutorial Data Flow Write To Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow-write-to-lake.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Best practices for writing to files to data lake with data flows
data-factory Tutorial Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Transform data using mapping data flows
data-factory Tutorial Deploy Ssis Packages Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-packages-azure-powershell.md
ms.devlang: powershell Previously updated : 09/26/2022 Last updated : 08/10/2023
data-factory Tutorial Deploy Ssis Packages Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-packages-azure.md
Previously updated : 09/26/2022 Last updated : 08/10/2023
data-factory Tutorial Deploy Ssis Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-virtual-network.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Configure Azure-SSIS integration runtime to join a virtual network
data-factory Tutorial Enable Remote Access Intranet Tls Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-enable-remote-access-intranet-tls-ssl-certificate.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Enable remote access from intranet with TLS/SSL certificate (Advanced)
data-factory Tutorial Hybrid Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-hybrid-copy-data-tool.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Copy data from a SQL Server database to Azure Blob storage by using the Copy Data tool
data-factory Tutorial Hybrid Copy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-hybrid-copy-portal.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Copy data from a SQL Server database to Azure Blob storage
data-factory Tutorial Hybrid Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-hybrid-copy-powershell.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Tutorial: Copy data from a SQL Server database to Azure Blob storage
data-factory Tutorial Incremental Copy Lastmodified Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-lastmodified-copy-data-tool.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Incrementally copy new and changed files based on LastModifiedDate by using the Copy Data tool
data-factory Tutorial Incremental Copy Multiple Tables Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-multiple-tables-portal.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Incrementally load data from multiple tables in SQL Server to a database in Azure SQL Database using the Azure portal
data-factory Tutorial Incremental Copy Multiple Tables Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-multiple-tables-powershell.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Incrementally load data from multiple tables in SQL Server to Azure SQL Database using PowerShell
data-factory Tutorial Incremental Copy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-overview.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Incrementally load data from a source data store to a destination data store
data-factory Tutorial Incremental Copy Partitioned File Name Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-partitioned-file-name-copy-data-tool.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Incrementally copy new files based on time partitioned file name by using the Copy Data tool
data-factory Tutorial Incremental Copy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-portal.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Incrementally load data from Azure SQL Database to Azure Blob storage using the Azure portal
data-factory Tutorial Incremental Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-powershell.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Incrementally load data from Azure SQL Database to Azure Blob storage using PowerShell
data-factory Tutorial Managed Virtual Network Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-migrate.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Tutorial: How to move existing Azure integration runtime to an Azure integration runtime in a managed virtual network
data-factory Tutorial Managed Virtual Network On Premise Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-on-premise-sql-server.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Tutorial: How to access on-premises SQL Server from Data Factory Managed VNet using Private Endpoint
data-factory Tutorial Managed Virtual Network Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-sql-managed-instance.md
Previously updated : 08/02/2023 Last updated : 08/11/2023 # Tutorial: How to access SQL Managed Instance from Data Factory Managed VNET using Private Endpoint
data-factory Tutorial Operationalize Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-operationalize-pipelines.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Deliver service level agreement for data pipelines
data-factory Tutorial Push Lineage To Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-push-lineage-to-purview.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Push Data Factory lineage data to Microsoft Purview
data-factory Tutorial Transform Data Hive Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-hive-virtual-network.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Transform data in Azure Virtual Network using Hive activity in Azure Data Factory
data-factory Tutorial Transform Data Spark Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-spark-powershell.md
Previously updated : 09/26/2022 Last updated : 08/10/2023
data-factory Update Machine Learning Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/update-machine-learning-models.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Update Machine Learning Studio (classic) models by using Update Resource activity
data-factory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new-archive.md
Previously updated : 09/27/2022 Last updated : 08/11/2023 # What's new archive for Azure Data Factory
This archive page retains updates from older months.
Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly update videos.
+## December 2022
+
+### Data flow
+
+SQL change data capture (CDC) incremental extract - supports numeric columns in mapping dataflow [Learn more](connector-azure-sql-database.md?tabs=data-factory#source-transformation)
+
+### Data movement
+
+Express virtual network injection for SSIS in Azure Data Factory is generally available [Learn more](https://techcommunity.microsoft.com/t5/sql-server-integration-services/general-availability-of-express-virtual-network-injection-for/ba-p/3699993)
+
+### Region expansion
+
+Continued region expansion - Azure Data Factory is now available in China North 3 [Learn more](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=data-factory)
+ ## November 2022 ### Data flow
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
This page is updated monthly, so revisit it regularly. For older months' update
Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly update videos.
+## July 2023
+
+### Change Data Capture
+
+Top-level CDC resource now supports schema evolution. [Learn more](how-to-change-data-capture-resource-with-schema-evolution.md)
+
+### Data flow
+
+Merge schema option in delta sink now supports schema evolution in Mapping Data Flows. [Learn more](format-delta.md#delta-sink-optimization-options)
+
+### Data movement
+
+- Comment Out Part of Pipeline with Deactivation. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/comment-out-part-of-pipeline/ba-p/3868069)
+- Pipeline return value is now generally available. [Learn more](tutorial-pipeline-return-value.md)
+
+### Developer productivity
+
+Documentation search now included in the Azure Data Factory search toolbar. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/documentation-search-now-embedded-in-azure-data-factory/ba-p/3873890)
+ ## June 2023 ### Continuous integration and continuous deployment
Express virtual network injection for SSIS now generally available [Learn more](
- Monitor filter updates for faster searches - Directly launch Pipeline Template Gallery through Azure portal [Learn more]()
-## December 2022
-
-### Data flow
-
-SQL change data capture (CDC) incremental extract - supports numeric columns in mapping dataflow [Learn more](connector-azure-sql-database.md?tabs=data-factory#source-transformation)
-
-### Data movement
-
-Express virtual network injection for SSIS in Azure Data Factory is generally available [Learn more](https://techcommunity.microsoft.com/t5/sql-server-integration-services/general-availability-of-express-virtual-network-injection-for/ba-p/3699993)
-
-### Region expansion
-
-Continued region expansion - Azure Data Factory is now available in China North 3 [Learn more](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=data-factory)
-- ## More information - [What's new archive](whats-new-archive.md)
data-factory Whitepapers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whitepapers.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Azure Data Factory whitepapers
data-factory Wrangling Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/wrangling-functions.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Transformation functions in Power Query for data wrangling
data-factory Wrangling Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/wrangling-overview.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # What is data wrangling?
data-factory Wrangling Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/wrangling-tutorial.md
Previously updated : 09/26/2022 Last updated : 08/10/2023 # Prepare data with data wrangling
data-manager-for-agri Concepts Isv Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-isv-solutions.md
# What is our Solution Framework?
-In this article, you learn how Azure Data Manager for Agriculture provides a framework for customer to use solutions built by ISV Partners.
+In this article, you learn how Azure Data Manager for Agriculture provides a framework for customers to use solutions built by Bayer and other ISV partners.
[!INCLUDE [public-preview-notice.md](includes/public-preview-notice.md)]
The solution framework is built on top of Data Manager for Agriculture that prov
* Carbon Emission Model: An ISV partner can estimate the amount of carbon emitted from the field based on imagery and sensor data for a particular farm. * Crop Identification: Use imagery data to identify crops growing in an area of interest.
-The above list has only a few examples but an ISV partner can come with their own specific scenario and build a solution.
+The above list has only a few examples, but an ISV partner can come up with their own specific scenario and build a solution.
+
+## Bayer AgPowered Services
+
+Additionally, Bayer has built the following solutions in partnership with Microsoft, which can be installed on top of a customer's Azure Data Manager for Agriculture (ADMA) instance.
+* Growing Degree Days
+* Crop Water Usage Maps
+* Biomass Variability
+
+To install these solutions, refer to [this article](./how-to-set-up-isv-solution.md).
## Next steps
data-manager-for-agri How To Set Up Isv Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-isv-solution.md
Last updated 02/14/2023
-# How do I use an ISV solution?
+# Working with an ISV solution
-Once you've installed an ISV solution from Azure portal, use this document to understand how to make API calls from your application. Integration (request/response) is done through APIs asynchronously.
+Use the following guidelines to install and use an ISV solution.
+
+## Install an ISV solution
+
+1. Once you've installed an instance of Azure Data Manager for Agriculture from the Azure portal, navigate to the **Settings** > **Solutions** tab on the left-hand side of your instance.
+2. You can view the list of available solutions. Select the solution of your choice and select **Add** next to it.
+3. You're redirected to the Azure Marketplace page for the solution.
+4. Select **Contact Me**, and the ISV partner will contact you to help with the next steps of the installation.
+5. To edit an installed solution, select the edit icon next to the solution on the **Solutions** page. You're redirected to Azure Marketplace, where you can contact the ISV partner by selecting **Contact Me**.
+6. To delete an installed solution, select the delete icon next to the solution on the **Solutions** page. You're redirected to Azure Marketplace, where you can contact the ISV partner by selecting **Contact Me**.
+
+## Use an ISV solution
+
+Once you've installed an ISV solution, use the following steps to understand how to make API calls from your application; an illustrative sketch of this flow appears after the steps. Integration (request/response) is done through APIs asynchronously.
A high-level view of how you can create a new request and get responses from the ISV partner's solution: :::image type="content" source="./media/3p-solutions-new.png" alt-text="Screenshot showing access flow for ISV API.":::
-* Step 1: You make an API call for a PUT request with the required parameters (for example Job ID, Farm details)
- * The Data Manager API receives this request and authenticates it. If the request is invalid, you'll get an error code back.
-* Step 2: If the request is valid, the Data Manager creates a PUT request to ISV Partners solution API.
-* Step 3: The ISV solution then makes a GET request to the weather service in data manager that is required for processing.
-* Step 4: The ISV solution completes the processing the request and submits a response back to the Data Manager.
+1. You make an API call for a PUT request with the required parameters (for example, Job ID and Farm details).
+ * The Data Manager API receives this request and authenticates it. If the request is invalid, you get an error code back.
+2. If the request is valid, the Data Manager creates a PUT request to ISV Partners solution API.
+3. The ISV solution then makes a GET request to the weather service in data manager that is required for processing.
+4. The ISV solution completes processing the request and submits a response back to the Data Manager.
* If there's any error when this request is submitted, then you may have to verify the configuration and parameters. If you're unable to resolve the issue then contact us at madma@microsoft.com
-* Step 5: Now you make a call to Data Manager using the Job ID to get the final response.
+5. Now you make a call to Data Manager using the Job ID to get the final response.
 * If the request processing is completed by the ISV Solution, you get the insight response back. * If the request processing is still in progress, you'll get the message "Processing in progress"
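
The following Python sketch illustrates the submit-and-poll pattern described in the steps above. It isn't taken from the Data Manager API reference: the host name, endpoint paths, header, and payload fields are hypothetical placeholders, so replace them with the values documented for your instance and the ISV partner's solution.

```python
import time
import requests

BASE_URL = "https://<your-adma-instance>.example.net"   # hypothetical host for your instance
HEADERS = {"Authorization": "Bearer <access-token>"}     # token acquired by your application

job_id = "my-solution-job-001"

# Submit the request to the ISV solution with a PUT call (parameters are illustrative only).
submit = requests.put(
    f"{BASE_URL}/solutions/<solution-id>/jobs/{job_id}",
    headers=HEADERS,
    json={"farmerId": "contoso-farmer", "fieldId": "field-42"},
)
submit.raise_for_status()

# Poll with the same Job ID until the ISV solution finishes processing.
while True:
    result = requests.get(
        f"{BASE_URL}/solutions/<solution-id>/jobs/{job_id}", headers=HEADERS
    )
    result.raise_for_status()
    body = result.json()
    if body.get("status") != "Processing in progress":
        print(body)       # final insight response
        break
    time.sleep(30)        # request is still being processed, wait and retry
```

The polling interval and the "Processing in progress" status string mirror the wording above; a specific ISV solution may report progress differently.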
defender-for-cloud Agentless Container Registry Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-container-registry-vulnerability-assessment.md
The triggers for an image scan are:
- **Continuous rescan triggering** – Continuous rescan is required to ensure images that have been previously scanned for vulnerabilities are rescanned to update their vulnerability reports in case a new vulnerability is published. - **Re-scan** is performed once a day for: - images pushed in the last 90 days.
- - images currently running on the Kubernetes clusters monitored by Defender for Cloud (either via [agentless discovery and visibility for Kubernetes](how-to-enable-agentless-containers.md) or the [Defender for Containers agent](tutorial-enable-containers-azure.md#deploy-the-defender-profile-in-azure)).
+ - images currently running on the Kubernetes clusters monitored by Defender for Cloud (either via [agentless discovery and visibility for Kubernetes](how-to-enable-agentless-containers.md) or the [Defender agent](tutorial-enable-containers-azure.md#deploy-the-defender-agent-in-azure)).
## How does image scanning work?
A detailed description of the scan process follows:
- Once a day, or when an image is pushed to a registry: - All newly discovered images are pulled, and an inventory is created for each image. Image inventory is kept to avoid further image pulls, unless required by new scanner capabilities.
- - Using the inventory, vulnerability reports are generated for new images, and updated for images previously scanned which were either pushed in the last 90 days to a registry, or are currently running. To determine if an image is currently running, Defender for Cloud uses both [agentless discovery and visibility within Kubernetes components](/azure/defender-for-cloud/concept-agentless-containers) and [inventory collected via the Defender agents running on AKS nodes](defender-for-containers-enable.md#deploy-the-defender-profile)
+ - Using the inventory, vulnerability reports are generated for new images, and updated for images previously scanned which were either pushed in the last 90 days to a registry, or are currently running. To determine if an image is currently running, Defender for Cloud uses both [agentless discovery and visibility within Kubernetes components](/azure/defender-for-cloud/concept-agentless-containers) and [inventory collected via the Defender agent running on AKS nodes](defender-for-containers-enable.md#deploy-the-defender-agent)
- Vulnerability reports for container images are provided as a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/PhoenixContainerRegistryRecommendationDetailsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5).-- For customers using either [agentless discovery and visibility within Kubernetes components](concept-agentless-containers.md) or [inventory collected via the Defender agents running on AKS nodes](defender-for-containers-enable.md#deploy-the-defender-profile), Defender for Cloud also creates a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5) for remediating vulnerabilities for vulnerable images running on an AKS cluster.
+- For customers using either [agentless discovery and visibility within Kubernetes components](concept-agentless-containers.md) or [inventory collected via the Defender agent running on AKS nodes](defender-for-containers-enable.md#deploy-the-defender-agent), Defender for Cloud also creates a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5) for remediating vulnerabilities for vulnerable images running on an AKS cluster.
> [!NOTE] > For Defender for Container Registries (deprecated), images are scanned once on push, and rescanned only once a week.
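
If you want to pull the results of the container registry recommendation programmatically, one option is Azure Resource Graph. The sketch below is illustrative rather than an official sample: it assumes the `azure-identity` and `azure-mgmt-resourcegraph` packages, uses the assessment key from the recommendation link above, and leaves the subscription ID as a placeholder.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

subscription_id = "<subscription-id>"  # placeholder
client = ResourceGraphClient(DefaultAzureCredential())

# Assessment key of the container registry vulnerability recommendation (from the link above).
query = """
securityresources
| where type == 'microsoft.security/assessments'
| where name == 'c0b7cfc6-3172-465a-b378-53c7ff2cc0d5'
| project resourceId = id, status = tostring(properties.status.code)
"""

response = client.resources(QueryRequest(subscriptions=[subscription_id], query=query))
for row in response.data:
    print(row)  # each row holds the assessed resource ID and its Healthy/Unhealthy status
```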
defender-for-cloud Alert Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alert-validation.md
You can simulate alerts for both of the control plane, and workload alerts with
**Prerequisites** - Ensure the Defender for Containers plan is enabled.-- **Arc only** - Ensure the Defender extension is installed.
+- **Arc only** - Ensure the [Defender agent](defender-for-cloud-glossary.md#defender-agent) is installed.
- **EKS or GKE only** - Ensure the default audit log collection autoprovisioning options are enabled. **To simulate a Kubernetes control plane security alert**:
You can simulate alerts for both of the control plane, and workload alerts with
**Prerequisites** - Ensure the Defender for Containers plan is enabled.-- Ensure the Defender profile\extension is installed.
+- Ensure the [Defender agent](defender-for-cloud-glossary.md#defender-agent) is installed.
**To simulate a Kubernetes workload security alert**:
defender-for-cloud Concept Agentless Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-containers.md
Title: Agentless container posture for Microsoft Defender for Cloud
-description: Learn how agentless container posture offers discovery, visibility, and vulnerability assessment for Containers without installing an agent on your machines.
+description: Learn how agentless container posture offers discovery, visibility, and vulnerability assessment for containers without installing an agent on your machines.
Last updated 07/03/2023
Learn more about [CSPM](concept-cloud-security-posture-management.md).
For support and prerequisites for agentless containers posture, see [Support and prerequisites for agentless containers posture](support-agentless-containers-posture.md).
-Agentless container Posture provides the following capabilities:
+Agentless container posture provides the following capabilities:
- [Agentless discovery and visibility](#agentless-discovery-and-visibility-within-kubernetes-components) within Kubernetes components. - [Container registry vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) provides vulnerability assessment for all container images, with near real-time scan of new images and daily refresh of results for maximum visibility to current and emerging vulnerabilities, enriched with exploitability insights, and added to Defender CSPM security graph for contextual risk assessment and calculation of attack paths. - Using Kubernetes [attack path analysis](concept-attack-path.md) to visualize risks and threats to Kubernetes environments. - Using [cloud security explorer](how-to-manage-cloud-security-explorer.md) for risk hunting by querying various risk scenarios, including viewing security insights, such as internet exposure, and other predefined security scenarios. For more information, search for `Kubernetes` in the [list of Insights](attack-path-reference.md#insights).
-All of these capabilities are available as part of the [Defender Cloud Security Posture Management](concept-cloud-security-posture-management.md) plan.
+All of these capabilities are available as part of the [Defender CSPM](concept-cloud-security-posture-management.md) plan.
## Agentless discovery and visibility within Kubernetes components
The discovery process is based on snapshots taken at intervals:
When you enable the agentless discovery for Kubernetes extension, the following process occurs: -- **Create**: MDC (Microsoft Defender for Cloud) creates an identity in customer environments called CloudPosture/securityOperator/DefenderCSPMSecurityOperator.
+- **Create**: Defender for Cloud creates an identity in customer environments called CloudPosture/securityOperator/DefenderCSPMSecurityOperator.
+- **Assign**: Defender for Cloud assigns a built-in role called **Kubernetes Agentless Operator** to that identity on subscription scope. The role contains the following permissions:
-- **Assign**: MDC assigns 1 built-in role called **Kubernetes Agentless Operator** to that identity on subscription scope.
+ - AKS read (Microsoft.ContainerService/managedClusters/read)
+ - AKS Trusted Access with the following permissions:
+ - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/write
+ - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/read
+ - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/delete
- The role contains the following permissions:
- - AKS read (Microsoft.ContainerService/managedClusters/read)
- - AKS Trusted Access with the following permissions:
- - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/write
- - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/read
- - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/delete
+ Learn more about [AKS Trusted Access](/azure/aks/trusted-access-feature).
- Learn more about [AKS Trusted Access](/azure/aks/trusted-access-feature).
- **Discover**: Using the system assigned identity, Defender for Cloud performs a discovery of the AKS clusters in your environment using API calls to the API server of AKS.--- **Bind**: Upon discovery of an AKS cluster, MDC performs an AKS bind operation between the created identity and the Kubernetes role ΓÇ£Microsoft.Security/pricings/microsoft-defender-operatorΓÇ¥. The role is visible via API and gives MDC data plane read permission inside the cluster.-
+- **Bind**: Upon discovery of an AKS cluster, Defender for Cloud performs an AKS bind operation between the created identity and the Kubernetes role "Microsoft.Security/pricings/microsoft-defender-operator". The role is visible via API and gives Defender for Cloud data plane read permission inside the cluster. (An illustrative sketch for checking the assigned role follows this list.)
+
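
As a rough way to confirm what the **Kubernetes Agentless Operator** role grants in your own subscription, the following Python sketch lists its actions with the Azure SDK. It assumes the `azure-identity` and `azure-mgmt-authorization` packages and a signed-in identity allowed to read role definitions; the subscription ID is a placeholder.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"  # placeholder
scope = f"/subscriptions/{subscription_id}"

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Find the built-in role that Defender for Cloud assigns for agentless discovery
# and print its actions (expected to match the permissions listed above).
for role in client.role_definitions.list(scope, filter="roleName eq 'Kubernetes Agentless Operator'"):
    print(role.role_name)
    for permission in role.permissions:
        for action in permission.actions:
            print("  ", action)
```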
### What's the refresh interval?
-Agentless information in Defender CSPM is updated through a snapshot mechanism. It can take up to **24 hours** to see results in Cloud Security Explorer and Attack Path.
+Agentless information in Defender CSPM is updated through a snapshot mechanism. It can take up to **24 hours** to see results in attack paths and the cloud security explorer.
## Next steps
defender-for-cloud Defender For Cloud Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-glossary.md
This glossary provides a brief description of important terms and concepts for the Microsoft Defender for Cloud platform. Select the **Learn more** links to go to related terms in the glossary. This glossary can help you to learn and use the product tools quickly and effectively.
-<a name="glossary-a"></a>
- ## A ### **AAC**+ Adaptive application controls are an intelligent and automated solution for defining allowlists of known-safe applications for your machines. See [Adaptive Application Controls](adaptive-application-controls.md).+ ### **AAD**+ Azure Active Directory (Azure AD) is a cloud-based identity and access management service. See [What is Azure Active Directory?](../active-directory/fundamentals/active-directory-whatis.md).
-### **ACR Tasks**
+
+### **ACR Tasks**
+ A suite of features within Azure container registry. See [Frequently asked questions - Azure Container Registry](../container-registry/container-registry-faq.yml).+ ### **Adaptive network hardening**+ Adaptive network hardening provides recommendations to further harden the [network security groups (NSG)](../virtual-network/network-security-groups-overview.md) rules. See [What is Adaptive Network Hardening?](../defender-for-cloud/adaptive-network-hardening.md#what-is-adaptive-network-hardening).+ ### **ADO**+ Azure DevOps provides developer services for allowing teams to plan work, collaborate on code development, and build and deploy applications. See [What is Azure DevOps?](/azure/devops/user-guide/what-is-azure-devops)+ ### **AKS**+ Azure Kubernetes Service, Microsoft's managed service for developing, deploying, and managing containerized applications. See [Kubernetes concepts](/azure-stack/aks-hci/kubernetes-concepts).+ ### **Alerts**
-Alerts defend your workloads in real-time so you can react immediately and prevent security events from developing. See [Security alerts and incidents](alerts-overview.md).
-### **ANH**
+
+Alerts defend your workloads in real-time so you can react immediately and prevent security events from developing. See [Security alerts and incidents](alerts-overview.md).
+
+### **ANH**
+ Adaptive network hardening. Learn how to [improve your network security posture with adaptive network hardening](adaptive-network-hardening.md).
-### **APT**
+
+### **APT**
+ Advanced Persistent Threats See the [video: Understanding APTs](/events/teched-2012/sia303).+ ### **Arc-enabled Kubernetes**+ Azure Arc-enabled Kubernetes allows you to attach and configure Kubernetes clusters running anywhere. You can connect your clusters running on other public cloud providers or clusters running on your on-premises data center. See [What is Azure Arc-enabled Logic Apps? (Preview)](../logic-apps/azure-arc-enabled-logic-apps-overview.md).+ ### **ARG**+ Azure Resource Graph-an Azure service designed to extend Azure Resource Management by providing resource exploration with the ability to query at scale across a given set of subscriptions so that you can effectively govern your environment. See [Azure Resource Graph Overview](../governance/resource-graph/overview.md).+ ### **ARM**+ Azure Resource Manager-the deployment and management service for Azure. See [Azure Resource Manager overview](../azure-resource-manager/management/overview.md).+ ### **ASB**+ Azure Security Benchmark provides recommendations on how you can secure your cloud solutions on Azure. See [Azure Security Benchmark](/security/benchmark/azure/baselines/security-center-security-baseline).+ ### **Attack Path Analysis**+ A graph-based algorithm that scans the cloud security graph, exposes attack paths and suggests recommendations as to how best remediate issues that will break the attack path and prevent successful breach. See [What is attack path analysis?](concept-attack-path.md#what-is-attack-path-analysis).+ ### **Auto-provisioning**
-To make sure that your server resources are secure, Microsoft Defender for Cloud uses agents installed on your servers to send information about your servers to Microsoft Defender for Cloud for analysis. You can use auto provisioning to deploy the Azure Monitor Agent on your servers. Learn how to [configure auto provision](../iot-dps/quick-setup-auto-provision.md).
+
+To make sure that your server resources are secure, Microsoft Defender for Cloud uses agents installed on your servers to send information about your servers to Microsoft Defender for Cloud for analysis. You can use auto provisioning to deploy the Azure Monitor Agent on your servers. Learn how to [configure auto provision](../iot-dps/quick-setup-auto-provision.md).
+
+### Azure Policy for Kubernetes
+
+A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
## B ### **Bicep**+ Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. It provides concise syntax, reliable type safety, and support for code reuse. See [Bicep tutorial](../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md).+ ### **Blob storage**+ Azure Blob Storage is the high scale object storage service for Azure and a key building block for data storage in Azure. See [what is Azure blob storage?](../storage/blobs/storage-blobs-introduction.md). ## C
-### **Cacls**
+### **Cacls**
+ Change access control list, Microsoft Windows native command-line utility often used for modifying the security permission on folders and files. See [Access control lists](/windows/win32/secauthz/access-control-lists).
-### **CIS Benchmark**
+
+### **CIS Benchmark**
+ (Kubernetes) Center for Internet Security benchmark. See [CIS](../aks/cis-kubernetes.md).
-### **Cloud security graph**
+
+### **Cloud security graph**
+ The cloud security graph is a graph-based context engine that exists within Defender for Cloud. The cloud security graph collects data from your multicloud environment and other data sources. See [What is the cloud security graph?](concept-attack-path.md#what-is-cloud-security-graph).+ ### **CORS**+ Cross origin resource sharing, an HTTP feature that enables a web application running under one domain to access resources in another domain. See [CORS](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services).+ ### **CNAPP**+ Cloud Native Application Protection Platform. See [Build cloud native applications in Azure](https://azure.microsoft.com/solutions/cloud-native-apps/).+ ### **CNCF**+ Cloud Native Computing Foundation. Learn how to [build CNCF projects by using Azure Kubernetes service](/azure/architecture/example-scenario/apps/build-cncf-incubated-graduated-projects-aks).+ ### **CSPM**
-Cloud Security Posture Management. See [Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md).
-### **CWPP**
+
+Cloud Security Posture Management. See [Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md).
+
+### **CWPP**
+ Cloud Workload Protection Platform. See [CWPP](./overview-page.md). ## D ### Data Aware Security Posture
-Data-aware security posture automatically discovers datastores containing sensitive data, and helps reduce risk of data breaches. Learn about [data-aware security posture](concept-data-security-posture.md).
-### **DDOS Attack**
+
+Data-aware security posture automatically discovers datastores containing sensitive data, and helps reduce risk of data breaches. Learn about [data-aware security posture](concept-data-security-posture.md).
+
+### Defender agent
+
+The DaemonSet that is deployed on each node collects signals from hosts using eBPF technology and provides runtime protection. The agent is registered with a Log Analytics workspace and used as a data pipeline; however, the audit log data isn't stored in the Log Analytics workspace. It's deployed as the AKS Security profile in AKS clusters and as an Arc extension in Arc-enabled Kubernetes clusters. For more information, see [Architecture for each Kubernetes environment](defender-for-containers-architecture.md#architecture-for-each-kubernetes-environment).
+
+### **DDOS Attack**
+ Distributed denial-of-service, a type of attack where an attacker sends more requests to an application than the application is capable of handling. See [DDOS FAQs](../ddos-protection/ddos-faq.yml). ## E ### **EASM**+ External Attack Surface Management. See [EASM Overview](how-to-manage-attack-path.md#external-attack-surface-management-easm).+ ### **EDR**+ Endpoint Detection and Response. See [Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).+ ### **EKS**+ Amazon Elastic Kubernetes Service, Amazon's managed service for running Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. See[EKS](https://aws.amazon.com/eks/).+ ### **eBPF**+ Extended Berkley Packet Filter [What is eBPF?](https://ebpf.io/) ## F ### **FIM**+ File Integrity Monitoring. Learn about ([file Integrity Monitoring in Microsoft Defender for Cloud](file-integrity-monitoring-overview.md).
-### **FTP**
+
+### **FTP**
+ File Transfer Protocol. Learn how to [Deploy content using FTP](../app-service/deploy-ftp.md). ## G ### **GCP**+ Google Cloud Platform. Learn how to [onboard a GPC Project](../active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md).+ ### **GKE**+ Google Kubernetes Engine, Google's managed environment for deploying, managing, and scaling applications using GCP infrastructure.|[Deploy a Kubernetes workload using GPU sharing on your Azure Stack Edge Pro](../databox-online/azure-stack-edge-gpu-deploy-kubernetes-gpu-sharing.md).+ ### **Governance**+ A set of rules and policies adopted by companies that run services in the cloud. The goal of cloud governance is to enhance data security, manage risk, and enable the smooth operation of cloud systems.[Governance Overview](governance-rules.md). ## I
-### **IaaS**
+### **IaaS**
+ Infrastructure as a service, a type of cloud computing service that offers essential compute, storage, and networking resources on demand, on a pay-as-you-go basis. [What is IaaS?](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-iaas/).
-### **IAM**
+
+### **IAM**
+ Identity and Access management. [Introduction to IAM](https://www.microsoft.com/security/business/security-101/what-is-identity-access-management-iam). ## J
-### **JIT**
+### **JIT**
+ Just-in-Time VM access. [Understanding just-in-time (JIT) VM access](just-in-time-access-overview.md). ## K ### **Kill Chain**+ The series of steps that describe the progression of a cyberattack from reconnaissance to data exfiltration. Defender for Cloud's supported kill chain intents are based on the MITRE ATT&CK matrix. [MITRE Attack Matrix](https://attack.mitre.org/matrices/enterprise/).+ ### **KQL**+ Kusto Query Language - a tool to explore your data and discover patterns, identify anomalies and outliers, create statistical modeling, and more. [KQL Overview](/azure/data-explorer/kusto/query/). ## L ### **LSA**+ Local Security Authority. Learn about [secure and use policies on virtual machines in Azure](../virtual-machines/security-policy.md). ## M ### **MCSB**+ Microsoft Cloud Security Benchmark. See [MCSB in Defender for Cloud](concept-regulatory-compliance.md#microsoft-cloud-security-benchmark-in-defender-for-cloud).+ ### **MDC**+ Microsoft Defender for Cloud is a Cloud Security Posture Management (CSPM) and Cloud Workload Protection Platform (CWPP) for all of your Azure, on-premises, and multicloud (Amazon AWS and Google GCP) resources. [What is Microsoft Defender for Cloud?](defender-for-cloud-introduction.md).+ ### **MDE**+ Microsoft Defender for Endpoint is an enterprise endpoint security platform designed to help enterprise networks prevent, detect, investigate, and respond to advanced threats. [Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).+ ### **MDVM**+ Microsoft Defender Vulnerability Management. Learn how to [enable vulnerability scanning with Microsoft Defender Vulnerability Management](deploy-vulnerability-assessment-defender-vulnerability-management.md). ### **MFA**+ Multi-factor authentication, a process in which users are prompted during the sign-in process for an extra form of identification, such as a code on their cellphone or a fingerprint scan.[How it works: Azure Multi Factor Authentication](../active-directory/authentication/concept-mfa-howitworks.md).+ ### **MITRE ATT&CK**+ A globally accessible knowledge base of adversary tactics and techniques based on real-world observations. [MITRE ATT&CK](https://attack.mitre.org/).+ ### **MMA**+ Microsoft Monitoring Agent, also known as Log Analytics Agent|[Log Analytics Agent Overview](../azure-monitor/agents/log-analytics-agent.md). ## N ### **NGAV**+ Next Generation Anti-Virus
-### **NIST**
+
+### **NIST**
+ National Institute of Standards and Technology. See [National Institute of Standards and Technology](https://www.nist.gov/).+ ### **NSG**+ Network Security Group. Learn about [network security groups (NSGs)](../virtual-network/network-security-groups-overview.md). ## P ### **PaaS**+ Platform as a service (PaaS) is a complete development and deployment environment in the cloud, with resources that enable you to deliver everything from simple cloud-based apps to sophisticated, cloud-enabled enterprise applications. [What is PaaS?](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-paas/). ## R ### **RaMP**+ Rapid Modernization Plan, guidance based on initiatives, giving you a set of deployment paths to more quickly implement key layers of protection. Learn about [Zero Trust Rapid Modernization Plan](../security/fundamentals/zero-trust.md).+ ### **RBAC**+ Azure role-based access control (Azure RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to. [RBAC Overview](../role-based-access-control/overview.md).
-### **RDP**
+
+### **RDP**
+ Remote Desktop Protocol (RDP) is a sophisticated technology that uses various techniques to perfect the server's remote graphics' delivery to the client device. [RDP Bandwidth Requirements](../virtual-desktop/rdp-bandwidth.md).+ ### **Recommendations**
-Recommendations secure your workloads with step-by-step actions that protect your workloads from known security risks. [What are security policies, initiatives, and recommendations?](security-policy-concept.md).
+
+Recommendations secure your workloads with step-by-step actions that protect your workloads from known security risks. [What are security policies, initiatives, and recommendations?](security-policy-concept.md).
+ ### **Regulatory Compliance**+ Regulatory compliance refers to the discipline and process of ensuring that a company follows the laws enforced by governing bodies in their geography or rules required. [Regulatory Compliance Overview](/azure/cloud-adoption-framework/govern/policy-compliance/regulatory-compliance). ## S ### **SAS**+ Shared access signature that provides secure delegated access to resources in your storage account.[Storage SAS Overview](/azure/storage/common/storage-sas-overview).+ ### **SaaS**+ Software as a service (SaaS) allows users to connect to and use cloud-based apps over the Internet. Common examples are email, calendaring, and office tools (such as Microsoft Office 365). SaaS provides a complete software solution that you purchase on a pay-as-you-go basis from a cloud service provider.[What is SaaS?](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-saas/).+ ### **Secure Score**+ Defender for Cloud continually assesses your cross-cloud resources for security issues. It then aggregates all the findings into a single score that represents your current security situation: the higher the score, the lower the identified risk level. Learn more about [security posture for Microsoft Defender for Cloud](secure-score-security-controls.md).+ ### **Security Alerts**+ Security alerts are the notifications generated by Defender for Cloud and Defender for Cloud plans when threats are identified in your cloud, hybrid, or on-premises environment.[What are security alerts?](../defender-for-cloud/alerts-overview.md#what-are-security-alerts)
-### **Security Initiative**
+
+### **Security Initiative**
+ A collection of Azure Policy Definitions, or rules, that are grouped together towards a specific goal or purpose. [What are security policies, initiatives, and recommendations?](security-policy-concept.md)+ ### **Security Policy**+ An Azure rule about specific security conditions that you want controlled.[Understanding Security Policies](security-policy-concept.md).+ ### **SIEM**+ Security Information and Event Management. [What is SIEM?](https://www.microsoft.com/security/business/security-101/what-is-siem?rtc=1)+ ### **SOAR**+ Security Orchestration Automated Response, a collection of software tools designed to collect data about security threats from multiple sources and respond to low-level security events without human assistance. Learn more about [SOAR](../sentinel/automation.md). ## T ### **TVM**+ Threat and Vulnerability Management, a built-in module in Microsoft Defender for Endpoint that can discover vulnerabilities and misconfigurations in near real time and prioritize vulnerabilities based on the threat landscape and detections in your organization.[Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-defender-vulnerability-management.md). ## W ### **WAF**+ Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities. Learn more about [WAF](../web-application-firewall/overview.md). ## Z ### **Zero-Trust**+ A new security model that assumes breach and verifies each request as though it originated from an uncontrolled network. Learn more about [Zero-Trust Security](../security/fundamentals/zero-trust.md).
-## Next Steps
+## Next steps
[Microsoft Defender for Cloud-overview](overview-page.md)
defender-for-cloud Defender For Containers Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md
To learn more about implementation details such as supported operating systems,
### Architecture diagram of Defender for Cloud and AKS clusters<a name="jit-asc"></a>
-When Defender for Cloud protects a cluster hosted in Azure Kubernetes Service, the collection of audit log data is agentless and frictionless.
+When Defender for Cloud protects a cluster hosted in Azure Kubernetes Service, the collection of audit log data is agentless and frictionless. These are the required components:
-The **Defender profile** deployed to each node provides the runtime protections and collects signals from nodes using [eBPF technology](https://ebpf.io/).
+- **Defender agent**: The DaemonSet that is deployed on each node collects signals from hosts using [eBPF technology](https://ebpf.io/) and provides runtime protection. The agent is registered with a Log Analytics workspace and used as a data pipeline; however, the audit log data isn't stored in the Log Analytics workspace. In AKS clusters, the Defender agent is deployed as the AKS Security profile.
-The **Azure Policy add-on for Kubernetes** collects cluster and workload configuration for admission control policies as explained in [Protect your Kubernetes workloads](kubernetes-workload-protections.md).
+- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes). The Azure Policy for Kubernetes pod is deployed as an AKS add-on.
:::image type="content" source="./media/defender-for-containers/architecture-aks-cluster.png" alt-text="Diagram of high-level architecture of the interaction between Microsoft Defender for Containers, Azure Kubernetes Service, and Azure Policy." lightbox="./media/defender-for-containers/architecture-aks-cluster.png":::
-### Defender profile component details
+### Defender agent component details
| Pod Name | Namespace | Kind | Short Description | Capabilities | Resource limits | Egress Required | |--|--|--|--|--|--|--|
-| microsoft-defender-collector-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment. | SYS_ADMIN, <br>SYS_RESOURCE, <br>SYS_PTRACE | memory: 296Mi<br> <br> cpu: 360m | No |
+| microsoft-defender-collector-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment. | SYS_ADMIN, <br>SYS_RESOURCE, <br>SYS_PTRACE | memory: 296Mi<br> <br> cpu: 360m | No |
| microsoft-defender-collector-misc-* | kube-system | [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment that aren't bounded to a specific node. | N/A | memory: 64Mi <br> <br>cpu: 60m | No |
-| microsoft-defender-publisher-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | Publish the collected data to Microsoft Defender for Containers backend service where the data will be processed for and analyzed. | N/A | memory: 200Mi  <br> <br> cpu: 60m | Https 443 <br> <br> Learn more about the [outbound access prerequisites](../aks/outbound-rules-control-egress.md#microsoft-defender-for-containers) |
+| microsoft-defender-publisher-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | Publish the collected data to Microsoft Defender for Containers backend service where the data will be processed for and analyzed. | N/A | memory: 200Mi <br> <br> cpu: 60m | Https 443 <br> <br> Learn more about the [outbound access prerequisites](../aks/outbound-rules-control-egress.md#microsoft-defender-for-containers) |
\* Resource limits aren't configurable; Learn more about [Kubernetes resources limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes)
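
As a quick sanity check that the components in the table above are running on a cluster, the following minimal Python sketch lists the matching pods in the `kube-system` namespace. It assumes the official `kubernetes` Python client and a kubeconfig context pointing at the cluster; the pod name prefixes are taken from the table.

```python
from kubernetes import client, config

config.load_kube_config()  # uses your current kubeconfig context
v1 = client.CoreV1Api()

# Pod name prefixes from the Defender agent component table above.
defender_prefixes = (
    "microsoft-defender-collector-ds",
    "microsoft-defender-collector-misc",
    "microsoft-defender-publisher-ds",
)

for pod in v1.list_namespaced_pod(namespace="kube-system").items:
    if pod.metadata.name.startswith(defender_prefixes):
        print(f"{pod.metadata.name}: {pod.status.phase}")
```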
For all clusters hosted outside of Azure, [Azure Arc-enabled Kubernetes](../azur
When a non-Azure container is connected to Azure with Arc, the [Arc extension](../azure-arc/kubernetes/extensions.md) collects Kubernetes audit logs data from all control plane nodes in the cluster. The extension sends the log data to the Microsoft Defender for Cloud backend in the cloud for further analysis. The extension is registered with a Log Analytics workspace used as a data pipeline, but the audit log data isn't stored in the Log Analytics workspace.
-Workload configuration information is collected by an Azure Policy add-on. As explained in [this Azure Policy for Kubernetes page](../governance/policy/concepts/policy-for-kubernetes.md), the add-on extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/). Kubernetes admission controllers are plugins that enforce how your clusters are used. The add-on registers as a web hook to Kubernetes admission control and makes it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner.
+Workload configuration information is collected by Azure Policy for Kubernetes. As explained in [this Azure Policy for Kubernetes page](../governance/policy/concepts/policy-for-kubernetes.md), the policy extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/). Kubernetes admission controllers are plugins that enforce how your clusters are used. The add-on registers as a web hook to Kubernetes admission control and makes it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner.
> [!NOTE] > Defender for Containers support for Arc-enabled Kubernetes clusters is a preview feature.
These components are required in order to receive the full protection offered by
- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your EKS clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md). -- **The Defender extension** ΓÇô The [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) that collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The extension is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace.
+- **Defender agent**: The DaemonSet that is deployed on each node collects signals from hosts using [eBPF technology](https://ebpf.io/) and provides runtime protection. The agent is registered with a Log Analytics workspace and used as a data pipeline; however, the audit log data isn't stored in the Log Analytics workspace. In Arc-enabled Kubernetes clusters, the Defender agent is deployed as an Arc extension.
-- **The Azure Policy extension** - The workload's configuration information is collected by the Azure Policy add-on. The Azure Policy add-on extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/). The extension registers as a web hook to Kubernetes admission control and makes it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. For more information, see [Understand Azure Policy for Kubernetes clusters](../governance/policy/concepts/policy-for-kubernetes.md).
+- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
> [!NOTE] > Defender for Containers support for AWS EKS clusters is a preview feature.
These components are required in order to receive the full protection offered by
- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your GKE clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md). -- **The Defender extension** ΓÇô The [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) that collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The extension is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace.
+- **Defender agent**: The DaemonSet that is deployed on each node collects signals from hosts using [eBPF technology](https://ebpf.io/) and provides runtime protection. The agent is registered with a Log Analytics workspace and used as a data pipeline; however, the audit log data isn't stored in the Log Analytics workspace. In Arc-enabled Kubernetes clusters, the Defender agent is deployed as an Arc extension.
-- **The Azure Policy extension** - The workload's configuration information is collected by the Azure Policy add-on. The Azure Policy add-on extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/). The extension registers as a web hook to Kubernetes admission control and makes it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. For more information, see [Understand Azure Policy for Kubernetes clusters](../governance/policy/concepts/policy-for-kubernetes.md).
+- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
> [!NOTE] > Defender for Containers support for GCP GKE clusters is a preview feature.
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
A full list of supported alerts is available in the [reference table of all Defe
:::image type="content" source="media/defender-for-kubernetes-azure-arc/sample-kubernetes-security-alert.png" alt-text="Sample alert from Microsoft Defender for Kubernetes." lightbox="media/defender-for-kubernetes-azure-arc/sample-kubernetes-security-alert.png"::: ::: zone pivot="defender-for-container-arc,defender-for-container-eks,defender-for-container-gke" ::: zone-end ::: zone pivot="defender-for-container-aks"
A full list of supported alerts is available in the [reference table of all Defe
::: zone-end ::: zone pivot="defender-for-container-aks" ::: zone-end ## Learn more
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Learn more about:
Defender for Containers provides real-time threat protection for [supported containerized environments](support-matrix-defender-for-containers.md) and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers.
-Threat protection at the cluster level is provided by the Defender agent and analysis of the Kubernetes audit logs. This means that security alerts are only triggered for actions and deployments that occur after you've enabled Defender for Containers on your subscription.
+Threat protection at the cluster level is provided by the [Defender agent](defender-for-cloud-glossary.md#defender-agent) and analysis of the Kubernetes audit logs. This means that security alerts are only triggered for actions and deployments that occur after you've enabled Defender for Containers on your subscription.
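As a quick check, a hedged Azure CLI sketch for confirming that the Defender agent (security profile) is enabled on an AKS cluster; the `securityProfile.defender` property path is an assumption about the current AKS resource schema:

```azurecli
# Show the Defender (security profile) settings for an AKS cluster.
az aks show \
  --resource-group <resource-group> \
  --name <cluster-name> \
  --query securityProfile.defender
```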
Examples of security events that Microsoft Defender for Containers monitors include:
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
Container vulnerability assessment powered by Qualys has the following capabilit
- **Continuous rescan triggering** – Continuous rescan is required to ensure images that have been previously scanned for vulnerabilities are rescanned to update their vulnerability reports in case a new vulnerability is published. - **Rescan** is performed once every 7 days for: - images pulled in the last 30 days
- - images currently running on the Kubernetes clusters monitored by the Defender for Containers agent
+ - images currently running on the Kubernetes clusters monitored by the Defender agent
## Prerequisites
defender-for-cloud Defender For Databases Enable Cosmos Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-enable-cosmos-protections.md
Previously updated : 11/28/2022 Last updated : 08/09/2023 # Enable Microsoft Defender for Azure Cosmos DB
Last updated 11/28/2022
## Enable database protection at the subscription level
-The subscription level enablement, enables Microsoft Defender for Cloud protection for all database types in your subscription (recommended).
+The subscription level enablement enables Microsoft Defender for Cloud protection for all database types in your subscription (recommended).
+
+You can enable Microsoft Defender for Cloud protection on your subscription in order to protect all database types, for example, Azure Cosmos DB, Azure SQL Database, Azure SQL servers on machines, and OSS RDBs. You can also select specific resource types to protect when you configure your plan.
-You can enable Microsoft Defender for Cloud protection on your subscription in order to protect all database types, for example, Azure Cosmos DB, Azure SQL Database, Azure SQL servers on machines, and OSS RDBs. You can also select specific resource types to protect when you configure your plan.
-
When you enable Microsoft Defender for Cloud's enhanced security features on your subscription, Microsoft Defender for Azure Cosmos DB is automatically enabled for all of your Azure Cosmos DB accounts. **To enable database protection at the subscription level**: 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
-1. Select the relevant subscription.
+1. Select the relevant subscription.
1. Locate Databases and toggle the switch to **On**.
When you enable Microsoft Defender for Cloud's enhanced security features on you
1. Select **Save**.
-**To select specific resource types to protect when you configure your plan**:
+**To select specific resource types to protect when you configure your plan**:
1. Follow steps 1 - 4 above.
When you enable Microsoft Defender for Cloud's enhanced security features on you
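For automation, a minimal Azure CLI sketch of enabling the Defender plan for Azure Cosmos DB at the subscription level; the plan name `CosmosDbs` is an assumption and the names of the other database plans may differ:

```azurecli
# Enable the Microsoft Defender plan for Azure Cosmos DB on the current subscription.
az security pricing create --name CosmosDbs --tier Standard

# Confirm the plan setting.
az security pricing show --name CosmosDbs
```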
## Enable Microsoft Defender for Azure Cosmos DB at the resource level
-You can enable Microsoft Defender for Cloud on a specific Azure Cosmos DB account through the Azure portal, PowerShell, or the Azure CLI.
+You can enable Microsoft Defender for Cloud on a specific Azure Cosmos DB account through the Azure portal, PowerShell, Azure CLI, ARM template, or Azure Policy.
**To enable Microsoft Defender for Cloud for a specific Azure Cosmos DB account**:
You can enable Microsoft Defender for Cloud on a specific Azure Cosmos DB accoun
Enable-AzSecurityAdvancedThreatProtection -ResourceId "/subscriptions/<Your subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.DocumentDb/databaseAccounts/myCosmosDBAccount/" ```
-1. Verify the Microsoft Defender for Azure Cosmos DB setting for your storage account through the PowerShell call [Get-AzSecurityAdvancedThreatProtection](/powershell/module/az.security/get-azsecurityadvancedthreatprotection) command.
+1. Verify the Microsoft Defender for Azure Cosmos DB setting for your Azure Cosmos DB account with the PowerShell [Get-AzSecurityAdvancedThreatProtection](/powershell/module/az.security/get-azsecurityadvancedthreatprotection) command.
```powershell Get-AzSecurityAdvancedThreatProtection -ResourceId "/subscriptions/<Your subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.DocumentDb/databaseAccounts/myCosmosDBAccount/"
You can enable Microsoft Defender for Cloud on a specific Azure Cosmos DB accoun
Use an Azure Resource Manager template to deploy an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled. For more information, see [Create an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.documentdb/microsoft-defender-cosmosdb-create-account).
+### [Azure CLI](#tab/azure-cli)
+
+To enable Microsoft Defender for Azure Cosmos DB on a single account via Azure CLI, call the [az security atp cosmosdb update](/cli/azure/security/atp/cosmosdb) command. Remember to replace values in angle brackets with your own values:
+
+```azurecli
+az security atp cosmosdb update \
+ --resource-group <resource-group> \
+ --cosmosdb-account <cosmosdb-account> \
+ --is-enabled true
+```
+
+To check the Microsoft Defender for Azure Cosmos DB setting for a single account via Azure CLI, call the [az security atp cosmosdb show](/cli/azure/security/atp/cosmosdb) command. Remember to replace values in angle brackets with your own values:
+
+```azurecli
+az security atp cosmosdb show \
+ --resource-group <resource-group> \
+ --cosmosdb-account <cosmosdb-account>
+```
+
+### [Azure Policy](#tab/azure-policy)
+
+Use Azure Policy to enable Microsoft Defender for Azure Cosmos DB across the Azure Cosmos DB accounts under a specific subscription or resource group.
+
+1. Launch the Azure Policy > Definitions page.
+1. Search for the **Configure Microsoft Defender for Azure Cosmos DB to be enabled** policy, then select the policy to view the policy definition page.
+
+ :::image type="content" source="media/defender-for-databases-enable-cosmos-protections/select-policy.png" alt-text="Screenshot of selecting the policy.":::
+
+1. Select the **Assign** button for the built-in policy.
+
+ :::image type="content" source="media/defender-for-databases-enable-cosmos-protections/select-assign-button.png" alt-text="Screenshot of selecting the assign button.":::
+
+1. Specify an Azure subscription.
+
+ :::image type="content" source="media/defender-for-databases-enable-cosmos-protections/choose-subscription.png" alt-text="Screenshot of choosing Azure subscription.":::
+
+1. Select **Review + create** to review the policy assignment and complete it.
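A hedged Azure CLI sketch of the same assignment; the policy is looked up by its display name, which is assumed here to exactly match the built-in definition:

```azurecli
# Find the built-in policy definition by display name (assumed to match exactly).
definitionName=$(az policy definition list \
  --query "[?displayName=='Configure Microsoft Defender for Azure Cosmos DB to be enabled'].name" \
  --output tsv)

# Assign the policy at subscription scope.
az policy assignment create \
  --name "enable-defender-for-cosmos-db" \
  --policy "$definitionName" \
  --scope "/subscriptions/<subscription-id>"
```

Note that deploy-type (DeployIfNotExists) policies typically also need a managed identity and location on the assignment before remediation can run; check the definition's effect before assigning.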
+ ## Simulate security alerts from Microsoft Defender for Azure Cosmos DB
-A full list of [supported alerts](alerts-reference.md) is available in the reference table of all Defender for Cloud security alerts.
+A full list of [supported alerts](alerts-reference.md#alerts-azurecosmos) is available in the reference table of all Defender for Cloud security alerts.
-You can use sample Microsoft Defender for Azure Cosmos DB alerts to evaluate their value, and capabilities. Sample alerts will also validate any configurations you've made for your security alerts (such as SIEM integrations, workflow automation, and email notifications).
+You can use sample Microsoft Defender for Azure Cosmos DB alerts to evaluate their value and capabilities. Sample alerts will also validate any configurations you've made for your security alerts (such as SIEM integrations, workflow automation, and email notifications).
-**To create sample alerts from Microsoft Defender for Azure Cosmos DB**:
+**To create sample alerts from Microsoft Defender for Azure Cosmos DB**:
1. Sign in to the [Azure portal](https://portal.azure.com/) as a Subscription Contributor user.
-1. Navigate to the security alerts page.
+1. Navigate to the security alerts page.
-1. Select **Sample alerts**.
+1. Select **Sample alerts**.
-1. Select the subscription.
+1. Select the subscription.
-1. Select the relevant Microsoft Defender plan(s).
+1. Select the relevant Microsoft Defender plan(s).
1. Select **Create sample alerts**. :::image type="content" source="media/quickstart-enable-defender-for-cosmos/sample-alerts.png" alt-text="Screenshot showing the order needed to create an alert.":::
-After a few minutes, the alerts will appear in the security alerts page. Alerts will also appear anywhere that you've configured to receive your Microsoft Defender for Cloud security alerts. For example, connected SIEMs, and email notifications.
+After a few minutes, the alerts will appear in the security alerts page. Alerts will also appear anywhere that you've configured to receive your Microsoft Defender for Cloud security alerts. For example, connected SIEMs, and email notifications.
-## Next Steps
+## Next steps
In this article, you learned how to enable Microsoft Defender for Azure Cosmos DB, and how to simulate security alerts.
defender-for-cloud Enable Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-vulnerability-assessment.md
A notification message pops up in the top right corner that will verify that the
## How to enable runtime coverage - For Defender for CSPM, use agentless discovery for Kubernetes. For more information, see [Onboard agentless container posture in Defender CSPM](how-to-enable-agentless-containers.md).-- For Defender for Containers, use the Defender for Containers agent. For more information, see [Deploy the Defender profile in Azure](tutorial-enable-containers-azure.md#deploy-the-defender-profile-in-azure).
+- For Defender for Containers, use the Defender agent. For more information, see [Deploy the Defender agent in Azure](tutorial-enable-containers-azure.md#deploy-the-defender-agent-in-azure).
- For Defender for Container Registries, there is no runtime coverage. ## Next steps
defender-for-cloud Episode Eight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-eight.md
Last updated 04/27/2023
**Episode description**: In this episode of Defender for Cloud in the Field, Dolev Zemer joins Yuri Diogenes to talk about how Defender for IoT works. Dolev explains the difference between OT Security and IT Security and how Defender for IoT fulfills this gap. Dolev also demonstrates how Defender for IoT discovers devices to monitor and how it fits in the Microsoft Security portfolio.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=05fdecf5-f6a1-4162-b95d-1e34478d1d60" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=05fdecf5-f6a1-4162-b95d-1e34478d1d60]
- [1:20](/shows/mdc-in-the-field/defender-for-iot#time=01m20s) - Overview of the Defender for IoT solution
defender-for-cloud Episode Eighteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-eighteen.md
Last updated 04/27/2023
# Defender for Azure Cosmos DB | Defender for Cloud in the Field **Episode description**: In this episode of Defender for Cloud in the Field, Haim Bendanan joins Yuri Diogenes to talk about Defender for Azure Cosmos DB. Haim explains the rationale behind the use of this plan to protect Azure Cosmos DB databases, the different threat detections that are available with this plan, and the security recommendations that were added. Haim also demonstrates how Defender for Azure Cosmos DB detects a SQL injection attack.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=94238ff5-930e-48be-ad27-a2fff73e473f" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+> [!VIDEO https://aka.ms/docs/player?id=94238ff5-930e-48be-ad27-a2fff73e473f]
- [00:00](/shows/mdc-in-the-field/defender-cosmos-db#time=00m00s) - Intro
defender-for-cloud Episode Eleven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-eleven.md
Last updated 04/27/2023
**Episode description**: In this episode of Defender for Cloud in the Field, Yossi Weizman joins Yuri Diogenes to talk about the evolution of the threat matrix for Containers and how attacks against Kubernetes have evolved. Yossi also demonstrates new detections that are available for different attacks and how Defender for Containers can help to quickly identify malicious activities in containers.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=646c2b9a-3f15-4705-af23-7802bd9549c5" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=646c2b9a-3f15-4705-af23-7802bd9549c5]
- [01:15](/shows/mdc-in-the-field/threat-landscape-containers#time=01m15s) - The evolution of attacks against Kubernetes
Learn how to [detect identity attacks in Kubernetes](https://techcommunity.micro
## Next steps > [!div class="nextstepaction"]
-> [Enhanced workload protection features in Defender for Servers](episode-twelve.md)
+> [Enhanced workload protection features in Defender for Servers](episode-twelve.md)
defender-for-cloud Episode Fifteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-fifteen.md
Last updated 04/27/2023
# Remediate security recommendations with governance
-**Episode description**: In this episode of Defender for Cloud in the Field, Amit Biton joins Yuri Diogenes to talk about the new governance feature in Defender for Cloud. Amit explains the rationale behind this feature. Amit explains why it's important to have governance in place in order to drive security posture improvement and how this feature can help with that. Amit demonstrates how to create governance rules, how to monitor and take action to improve the secure score.
+**Episode description**: In this episode of Defender for Cloud in the Field, Amit Biton joins Yuri Diogenes to talk about the new governance feature in Defender for Cloud. Amit explains the rationale behind this feature, why it's important to have governance in place to drive security posture improvement, and how this feature can help with that. Amit demonstrates how to create governance rules, and how to monitor and take action to improve the secure score.
-
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=ceb3ef0e-257a-466a-9e90-dcfb08f54f8e" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=ceb3ef0e-257a-466a-9e90-dcfb08f54f8e]
- [01:14](/shows/mdc-in-the-field/remediate-security-with-governance#time=01m14s) - What is the Governance feature?
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [Defender for Servers integration with Microsoft Defender for Endpoint](episode-sixteen.md)
+> [Defender for Servers integration with Microsoft Defender for Endpoint](episode-sixteen.md)
defender-for-cloud Episode Five https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-five.md
Last updated 04/27/2023
**Episode description**: In this episode of Defender for Cloud in the field, Aviv Mor joins Yuri Diogenes to talk about Microsoft Defender for Servers updates, including the new integration with Microsoft Defender Vulnerability Management (formerly TVM). Aviv explains how this new integration with Defender Vulnerability Management works and the advantages of this integration. Aviv covers the easy onboarding experience, software inventory, the integration with MDE for Linux, and the Defender for Servers support for the new multicloud connector for AWS.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=f62e1199-d0a8-4801-9793-5318fde27497" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=f62e1199-d0a8-4801-9793-5318fde27497]
- [1:22](/shows/mdc-in-the-field/defender-for-servers#time=01m22s) - Overview of the announcements for Microsoft Defender for Servers
Learn how to [Investigate weaknesses with Microsoft Defender Vulnerability Manag
## Next steps > [!div class="nextstepaction"]
-> [Lessons Learned from the Field](episode-six.md)
+> [Lessons Learned from the Field](episode-six.md)
defender-for-cloud Episode Four https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-four.md
Last updated 04/27/2023
**Episode description**: In this episode of Defender for Cloud in the field, Lior Arviv joins Yuri Diogenes to talk about the cloud security posture management improvements in Microsoft Defender for Cloud. Lior explains the MITRE ATT&CK Framework integration with recommendations, the overall improvements of recommendations and the other fields added in the API. Lior also demonstrates the different ways to access the MITRE ATT&CK integration via filters and recommendations.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=845108fd-e57d-40e0-808a-1239e78a7390" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=845108fd-e57d-40e0-808a-1239e78a7390]
- [1:24](/shows/mdc-in-the-field/defender-for-containers#time=01m24s) - Security recommendation refresh time changes
Learn how to [Review your security recommendations](review-security-recommendati
## Next steps > [!div class="nextstepaction"]
-> [Microsoft Defender for Servers](episode-five.md)
+> [Microsoft Defender for Servers](episode-five.md)
defender-for-cloud Episode Fourteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-fourteen.md
Last updated 04/27/2023
**Episode description**: In this episode of Defender for Cloud in the Field, Ortal Parpara joins Yuri Diogenes to talk about the options to deploy Defender for Servers in AWS and GCP. Ortal talks about the new capability that allows you to select a different Defender for Server plan per connector, demonstrates how to customize the deployment and how this feature helps to deploy Azure Arc.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=2426d341-bdb6-4795-bc08-179cfe7b99ba" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=2426d341-bdb6-4795-bc08-179cfe7b99ba]
- [00:00](/shows/mdc-in-the-field/defenders-for-servers-deploy-aws-gcp#time=00m00s) - Introduction
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [Remediate Security Recommendations with Governance](episode-fifteen.md)
+> [Remediate Security Recommendations with Governance](episode-fifteen.md)
defender-for-cloud Episode Nine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-nine.md
Last updated 04/27/2023
# Microsoft Defender for Containers in a Multicloud Environment **Episode description**: In this episode of Defender for Cloud in the field, Maya Herskovic joins Yuri Diogenes to talk about Microsoft Defender for Containers implementation in AWS and GCP.
-
-Maya explains about the new workload protection capabilities related to Containers when they're deployed in a multicloud environment. Maya also demonstrates the onboarding experience in GCP and how to visualize security recommendations across AWS, GCP, and Azure in a single dashboard.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=f9470496-abe3-4344-8160-d6a6b65c077f" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+Maya explains the new workload protection capabilities related to Containers when they're deployed in a multicloud environment. Maya also demonstrates the onboarding experience in GCP and how to visualize security recommendations across AWS, GCP, and Azure in a single dashboard.
+
+> [!VIDEO https://aka.ms/docs/player?id=f9470496-abe3-4344-8160-d6a6b65c077f]
- [01:12](/shows/mdc-in-the-field/containers-multi-cloud#time=01m12s) - Container protection in a multicloud environment
Maya explains about the new workload protection capabilities related to Containe
- [10:25](/shows/mdc-in-the-field/containers-multi-cloud#time=10m25s) - Demonstration ## Recommended resources
-
+ Learn how to [Enable Microsoft Defender for Containers](defender-for-containers-enable.md). - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity) -- Follow us on social media:
+- Follow us on social media:
[LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F) [Twitter](https://twitter.com/msftsecurity)
Learn how to [Enable Microsoft Defender for Containers](defender-for-containers-
## Next steps > [!div class="nextstepaction"]
-> [Protecting Containers in GCP with Defender for Containers](episode-ten.md)
+> [Protecting Containers in GCP with Defender for Containers](episode-ten.md)
defender-for-cloud Episode Nineteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-nineteen.md
Last updated 04/27/2023
# Defender for DevOps | Defender for Cloud in the Field **Episode description**: In this episode of Defender for Cloud in the Field, Sukhandeep Singh joins Yuri Diogenes to talk about Defender for DevOps. Sukhandeep explains how Defender for DevOps uses a central console to provide security teams DevOps insights across multi-pipeline environments, such as GitHub and Azure DevOps. Sukhandeep also covers the security recommendations created by Defender for DevOps and demonstrates how to configure a GitHub connector using Defender for Cloud dashboard.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=f1e5ec4f-1e65-400d-915b-4db6cf550014" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+> [!VIDEO https://aka.ms/docs/player?id=f1e5ec4f-1e65-400d-915b-4db6cf550014]
- [01:16](/shows/mdc-in-the-field/defender-for-devops#time=01m16s) - What is Defender for DevOps?
defender-for-cloud Episode One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-one.md
Last updated 04/27/2023
**Episode description**: In this episode of Defender for Cloud in the field, Or Serok joins Yuri Diogenes to share the new AWS connector in Microsoft Defender for Cloud, which was released at Ignite 2021. Or explains the use case scenarios for the new connector and how the new connector works. She demonstrates the onboarding process to connect AWS with Microsoft Defender for Cloud and talks about the centralized management of all security recommendations.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=26cbaec8-0f3f-4bb1-9918-1bf7d912db57" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=26cbaec8-0f3f-4bb1-9918-1bf7d912db57]
- [00:00](/shows/mdc-in-the-field/aws-connector) - Introduction
Learn more about the new [AWS connector](quickstart-onboard-aws.md)
## Next steps > [!div class="nextstepaction"]
-> [Integrate Azure Purview with Microsoft Defender for Cloud](episode-two.md)
+> [Integrate Azure Purview with Microsoft Defender for Cloud](episode-two.md)
defender-for-cloud Episode Seven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-seven.md
Last updated 04/27/2023
**Episode description**: In this episode of Defender for Cloud in the field, Or Serok joins Yuri Diogenes to share the new GCP Connector in Microsoft Defender for Cloud. Or explains the use case scenarios for the new connector and how the new connector works. She demonstrates the onboarding process to connect GCP with Microsoft Defender for Cloud and talks about custom assessment and the CSPM experience for multicloud
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=80ba04f0-1551-48f3-94a2-d2e82e7073c9" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=80ba04f0-1551-48f3-94a2-d2e82e7073c9]
- [1:23](/shows/mdc-in-the-field/gcp-connector#time=01m23s) - Overview of the new GCP connector
Learn more how to [Connect your GCP projects to Microsoft Defender for Cloud](qu
## Next steps > [!div class="nextstepaction"]
-> [Microsoft Defender for IoT](episode-eight.md)
+> [Microsoft Defender for IoT](episode-eight.md)
defender-for-cloud Episode Seventeen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-seventeen.md
Last updated 04/27/2023
# Defender for Cloud integration with Microsoft Entra | Defender for Cloud in the Field **Episode description**: In this episode of Defender for Cloud in the Field, Bar Brownshtein joins Yuri Diogenes to talk about the new Defender for Cloud integration with Microsoft Entra. Bar explains the rationale behind this integration, the importance of having everything in a single dashboard and how this integration works. Bar also covers the recommendations that are generated by this integration and demonstrate the experience in the dashboard.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=96a0ecdb-b1c3-423f-9ff1-47fcc5d6ab1b" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+> [!VIDEO https://aka.ms/docs/player?id=96a0ecdb-b1c3-423f-9ff1-47fcc5d6ab1b]
- [00:00](/shows/mdc-in-the-field/integrate-entra#time=00m0s) - Defender for Cloud integration with Microsoft Entra
Learn more about [Entra Permission Management](other-threat-protections.md#entra
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-eighteen.md)
+> [New AWS Connector in Microsoft Defender for Cloud](episode-eighteen.md)
defender-for-cloud Episode Six https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-six.md
Last updated 04/27/2023
Carlos also covers how Microsoft Defender for Cloud is used to fill the gap between cloud security posture management and cloud workload protection, and demonstrates some features related to this scenario.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=3811455b-cc20-4ee0-b1bf-9d4df5ee4eaf" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=3811455b-cc20-4ee0-b1bf-9d4df5ee4eaf]
- [1:30](/shows/mdc-in-the-field/lessons-from-the-field#time=01m30s) - Why Microsoft Defender for Cloud is a unique solution when compared with other competitors?
defender-for-cloud Episode Sixteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-sixteen.md
Last updated 04/27/2023
**Episode description**: In this episode of Defender for Cloud in the Field, Erel Hansav joins Yuri Diogenes to talk about the latest updates regarding the Defender for Servers integration with Microsoft Defender for Endpoint. Erel explains the architecture of this integration for the different versions of Windows Servers, how this integration takes place in the backend, the deployment options for Windows and Linux and the deployment at scale using Azure Policy. -
-<iframe src="https://aka.ms/docs/player?id=aaf5dbcd-9a29-40c2-b355-8c832b27baa5" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=aaf5dbcd-9a29-40c2-b355-8c832b27baa5]
- [00:00](/shows/mdc-in-the-field/servers-med-integration#time=00m00s) - Introduction
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [Defender for Cloud integration with Microsoft Entra | Defender for Cloud in the Field](episode-seventeen.md)
+> [Defender for Cloud integration with Microsoft Entra | Defender for Cloud in the Field](episode-seventeen.md)
defender-for-cloud Episode Ten https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-ten.md
Last updated 04/27/2023
# Protecting containers in GCP with Defender for Containers **Episode description**: In this episode of Defender for Cloud in the field, Nadav Wolfin joins Yuri Diogenes to talk about how to use Defender for Containers to protect Containers that are located at Google Cloud (GCP).
-
-Nadav gives insights about workload protection for GKE and how to obtain visibility of this type of workload across Azure and AWS. Nadav also demonstrates the overall onboarding experience and provides an overview of the architecture of this solution.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=078af1f2-1f12-4030-bd3f-3e7616150562" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+Nadav gives insights about workload protection for GKE and how to obtain visibility of this type of workload across Azure and AWS. Nadav also demonstrates the overall onboarding experience and provides an overview of the architecture of this solution.
+
+> [!VIDEO https://aka.ms/docs/player?id=078af1f2-1f12-4030-bd3f-3e7616150562]
- [00:55](/shows/mdc-in-the-field/gcp-containers#time=00m55s) - Architecture solution for Defender for Containers and support for GKE
Learn how to [Enable Microsoft Defender for Containers](defender-for-containers-
## Next steps > [!div class="nextstepaction"]
-> [Threat landscape for Containers](episode-eleven.md)
+> [Threat landscape for Containers](episode-eleven.md)
defender-for-cloud Episode Thirteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirteen.md
Last updated 04/27/2023
# Defender for Storage **Episode description**: In this episode of Defender for Cloud in the Field, Eitan Shteinberg joins Yuri Diogenes to talk about the threat landscape for Azure Storage and how Defender for Storage can help detect and mitigate these threats.
-
- Eitan talks about different use case scenarios, best practices to deploy Defender for Storage and he also demonstrates how to investigate an alert generated by Defender for Storage.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=79f69cee-ae56-4ce3-9443-0f45e5c3ccf4" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+ Eitan talks about different use case scenarios, best practices to deploy Defender for Storage and he also demonstrates how to investigate an alert generated by Defender for Storage.
+
+> [!VIDEO https://aka.ms/docs/player?id=79f69cee-ae56-4ce3-9443-0f45e5c3ccf4]
- [01:00](/shows/mdc-in-the-field/defender-for-storage#time=01m00s) - Current threats for Cloud Storage workloads
Last updated 04/27/2023
- [32:15](/shows/mdc-in-the-field/defender-for-storage#time=32m15s) - What's coming next ## Recommended resources
-
+ [Introduction to Microsoft Defender for Storage](defender-for-storage-introduction.md). - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [Defender for Servers deployment in AWS and GCP](episode-fourteen.md)
+> [Defender for Servers deployment in AWS and GCP](episode-fourteen.md)
defender-for-cloud Episode Thirty Five https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-five.md
Last updated 08/08/2023
**Episode description**: In this episode of Defender for Cloud in the Field, Daniel Davrayev joins Yuri Diogenes to talk about the security alert correlation capability in Defender for Cloud. Daniel talks about the importance of having a built-in capability to correlate alerts in Defender for Cloud, and how this saves time for SOC analysts to investigate alerts and respond to potential threats. Daniel also explains how data correlation works and demonstrates how this correlation appears in the Defender for Cloud dashboard as a security incident.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=6573561d-70a6-4b4c-ad16-9efe747c9a61" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=6573561d-70a6-4b4c-ad16-9efe747c9a61]
- [00:00](/shows/mdc-in-the-field/security-alert-correlation#time=00m00s) - Intro - [02:15](/shows/mdc-in-the-field/security-alert-correlation#time=02m15s) - How Defender for Cloud handles alert prioritization
Last updated 08/08/2023
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Episode Thirty Four https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-four.md
Last updated 06/21/2023
**Episode description**: In this episode of Defender for Cloud in the Field, Ariel Brukman joins Yuri Diogenes to talk about the DevOps Threat Matrix. Ariel talks about the process of creating a new threat matrix for a very complex domain such as DevOps, what was found during the research process, and how the research evolved to create this threat matrix. Ariel also talks about how to use the threat matrix to improve your DevOps defenses, and he gives examples of some common attacks against DevOps environments.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=20631aa4-501c-4fa6-bd9c-eadab45887af" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=20631aa4-501c-4fa6-bd9c-eadab45887af]
- [02:49](/shows/mdc-in-the-field/devops-threat-matrix#time=02m49s) - The research leading to DevOps Matrix publication - [05:35](/shows/mdc-in-the-field/devops-threat-matrix#time=05m35s) - Threats in the execution phase
Last updated 06/21/2023
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Episode Thirty One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-one.md
Title: Understanding data aware security posture capability | Defender for Cloud in the field
+ Title: Understanding data aware security posture capability | Defender for Cloud in the field
description: Learn about data aware security posture capabilities in Defender CSPM
Last updated 05/16/2023
# Understanding data aware security posture capability **Episode description**: In this episode of Defender for Cloud in the Field, Tzach Kaufmann joins Yuri Diogenes to talk about data aware security posture capability as part of Defender CSPM. Tzach explains the importance of having data aware security posture capability to help security admins with risk prioritization. Tzach also demonstrates the step-by-step process to onboard this capability and demonstrates how to obtain the insights using Attack Path.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=dd11ab78-d945-4727-a4e4-cf19eb1922f2" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+> [!VIDEO https://aka.ms/docs/player?id=dd11ab78-d945-4727-a4e4-cf19eb1922f2]
- [00:00](/shows/mdc-in-the-field/data-aware-security-posture#time=00m00s) - Intro - [02:00](/shows/mdc-in-the-field/data-aware-security-posture#time=02m00s) - What is Data Aware Security Posture?
Last updated 05/16/2023
- [05:00](/shows/mdc-in-the-field/data-aware-security-posture#time=05m00s) - Sensitive labels discovery process - [07:05](/shows/mdc-in-the-field/data-aware-security-posture#time=07m05s) - What's the difference between Data Aware Security Posture and Microsoft Purview? - [11:35](/shows/mdc-in-the-field/data-aware-security-posture#time=11m35s) - Demonstration
-
+ ## Recommended resources
- - Learn more about [Data Aware Security Posture](concept-data-security-posture.md)
+ - Learn more about [Data Aware Security Posture](concept-data-security-posture.md)
- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS) - Learn more about [Microsoft Security](https://msft.it/6002T9HQY) - Follow us on social media:
- - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
- [Twitter](https://twitter.com/msftsecurity) - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
Last updated 05/16/2023
## Next steps > [!div class="nextstepaction"]
-> [API Security with Defender for APIs](episode-thirty-two.md)
+> [API Security with Defender for APIs](episode-thirty-two.md)
defender-for-cloud Episode Thirty Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-three.md
Last updated 06/13/2023
**Episode description**: In this episode of Defender for Cloud in the Field, Shani Freund Menscher joins Yuri Diogenes to talk about a new capability in Defender CSPM called Agentless Container Posture Management. Shani explains how Agentless Container Posture Management works, how to onboard, and how to leverage this feature to obtain more insights into the container's security. Shani also demonstrates how to visualize this information using Attack Path and Cloud Security Explorer.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=abceb157-b850-42f0-8b83-92cbef16c893" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=abceb157-b850-42f0-8b83-92cbef16c893]
- [01:48](/shows/mdc-in-the-field/agentless-container-posture-management#time=01m48s) - Overview of Defender CSPM - [03:06](/shows/mdc-in-the-field/agentless-container-posture-management#time=03m06s) - What container capabilities are included in Defender CSPM
Last updated 06/13/2023
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Episode Thirty Two https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-two.md
Last updated 06/08/2023
# API Security with Defender for APIs **Episode description**: In this episode of Defender for Cloud in the Field, Preetham Naik joins Yuri Diogenes to talk about API security with Defender for APIs. Preetham explains the importance of API security and why the threats in this area are growing. Preetham introduces the new Defender for APIs plan released in public preview and gives an overview of all its capabilities. Preetham also demonstrates the step-by-step process to onboard this plan and demonstrates how to address API security recommendations.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=657f8b1b-8072-4075-a244-07c93ecf6556" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+> [!VIDEO https://aka.ms/docs/player?id=657f8b1b-8072-4075-a244-07c93ecf6556]
- [02:15](/shows/mdc-in-the-field/api-security#time=02m15s) - Why is API Security important? - [05:15](/shows/mdc-in-the-field/api-security#time=05m15s) - The state of the API Security Market - [07:06](/shows/mdc-in-the-field/api-security#time=07m06s) - What are the risks associated with API? - [11:25](/shows/mdc-in-the-field/api-security#time=11m25s) - What you should expect from Defender for APIs - [15:53](/shows/mdc-in-the-field/api-security#time=15m53s) - Demonstration-- + ## Recommended resources - Learn more about [Defender for APIs](defender-for-apis-introduction.md) - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
Last updated 06/08/2023
## Next steps > [!div class="nextstepaction"]
-> [Agentless Container Posture Management in Defender for Cloud](episode-thirty-three.md)
+> [Agentless Container Posture Management in Defender for Cloud](episode-thirty-three.md)
defender-for-cloud Episode Thirty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty.md
Last updated 05/14/2023
# New Custom Recommendations for AWS and GCP in Defender for Cloud **Episode description**: In this episode of Defender for Cloud in the Field, Yael Genut joins Yuri Diogenes to talk about the new custom recommendations for AWS and GCP. Yael explains the importance of creating custom recommendations in a multicloud environment and how to use Kusto Query Language to create these customizations. Yael also demonstrates the step-by-step process to create custom recommendations using this new capability and how these custom recommendations appear in the Defender for Cloud dashboard.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=41612fbe-4c9c-4cd2-9a99-3fbd94d31bec" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+> [!VIDEO https://aka.ms/docs/player?id=41612fbe-4c9c-4cd2-9a99-3fbd94d31bec]
- [01:44](/shows/mdc-in-the-field/new-custom-recommendations#time=01m44s) - Understanding custom recommendations - [03:15](/shows/mdc-in-the-field/new-custom-recommendations#time=03m15s) - Creating a custom recommendation based on a template
Last updated 05/14/2023
- Follow us on social media:
- - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
- [Twitter](https://twitter.com/msftsecurity) - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
Last updated 05/14/2023
## Next steps > [!div class="nextstepaction"]
-> [Understanding data aware security posture capability](episode-thirty-one.md)
+> [Understanding data aware security posture capability](episode-thirty-one.md)
defender-for-cloud Episode Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-three.md
Last updated 04/27/2023
**Episode description**: In this episode of Defender for Cloud in the field, Maya Herskovic joins Yuri Diogenes to talk about Microsoft Defender for Containers. Maya explains what's new in Microsoft Defender for Containers, the new capabilities that are available, the new pricing model, and the multicloud coverage. Maya also demonstrates the overall experience of Microsoft Defender for Containers from the recommendations to the alerts that you may receive.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=b8624912-ef9e-4fc6-8c0c-ea65e86d9128" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=b8624912-ef9e-4fc6-8c0c-ea65e86d9128]
- [1:09](/shows/mdc-in-the-field/defender-for-containers#time=01m09s) - What's new in the Defender for Containers plan?
defender-for-cloud Episode Twelve https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twelve.md
Last updated 04/27/2023
Netta explains how Defender for Servers applies Azure Arc as a bridge to onboard non-Azure VMs as she demonstrates what the experience looks like.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=18fdbe74-4399-44fe-81e7-3e3ce92df451" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=18fdbe74-4399-44fe-81e7-3e3ce92df451]
- [00:55](/shows/mdc-in-the-field/enhanced-workload-protection#time=00m55s) - Arc Auto-provisioning in GCP
Introduce yourself to [Microsoft Defender for Servers](defender-for-servers-intr
## Next steps > [!div class="nextstepaction"]
-> [Defender for Storage](episode-thirteen.md)
+> [Defender for Storage](episode-thirteen.md)
defender-for-cloud Episode Twenty Eight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-eight.md
Last updated 04/27/2023
# Zero Trust and Defender for Cloud | Defender for Cloud in the field **Episode description**: In this episode of Defender for Cloud in the Field, Mekonnen Kassa joins Yuri Diogenes to discuss the importance of using Zero Trust. Mekonnen covers the principles of Zero Trust, the importance of switching your mindset to adopt this strategy and how Defender for Cloud can help. Mekonnen also talks about best practices to get started, visibility and analytics as part of Zero Trust, and what tools can be leveraged to achieve it.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=125af768-01bd-45ac-8503-4dba5eb53ff7" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+> [!VIDEO https://aka.ms/docs/player?id=125af768-01bd-45ac-8503-4dba5eb53ff7]
- [01:21](/shows/mdc-in-the-field/zero-trust#time=01m21s) - What is Zero Trust? - [04:12](/shows/mdc-in-the-field/zero-trust#time=04m12s) - Current challenges with multicloud and hybrid workloads
Last updated 04/27/2023
- [14:50](/shows/mdc-in-the-field/zero-trust#time=14m50s) - Visibility and Analytics for Zero Trust - [18:09](/shows/mdc-in-the-field/zero-trust#time=18m09s) - Final recommendations to start your Zero Trust journey - ## Recommended resources - Learn more about [Zero Trust](https://www.microsoft.com/security/business/zero-trust) - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
Last updated 04/27/2023
- Follow us on social media:
- - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
- [Twitter](https://twitter.com/msftsecurity) - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [Security Policy Enhancements in Defender for Cloud](episode-twenty-nine.md)
+> [Security Policy Enhancements in Defender for Cloud](episode-twenty-nine.md)
defender-for-cloud Episode Twenty Five https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-five.md
Last updated 04/27/2023
# AWS ECR Coverage in Defender for Containers | Defender for Cloud in the field **Episode description**: In this episode of Defender for Cloud in the Field, Tomer Spivak joins Yuri Diogenes to talk about the new AWS ECR coverage in Defender for Containers. Tomer explains how Defender for Containers performs vulnerability assessment for ECR workloads in AWS and how to enable this capability. Tomer demonstrates the user experience in Defender for Cloud, showing the vulnerability findings in the dashboard and the onboarding process.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=919f847f-4b19-4440-aede-a0917e1d7019" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+> [!VIDEO https://aka.ms/docs/player?id=919f847f-4b19-4440-aede-a0917e1d7019]
- [00:00](/shows/mdc-in-the-field/aws-ecr#time=00m00s) - Intro - [01:44](/shows/mdc-in-the-field/aws-ecr#time=01m44s) - Introducing AWS ECR coverage
Last updated 04/27/2023
- [04:22](/shows/mdc-in-the-field/aws-ecr#time=04m22s) - Scanning frequency - [07:33](/shows/mdc-in-the-field/aws-ecr#time=07m33s) - Demonstration - ## Recommended resources - [Learn more](defender-for-containers-vulnerability-assessment-elastic.md) about AWS ECR Coverage in Defender for Containers. - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [Governance capability improvements in Defender for Cloud](episode-twenty-six.md)
+> [Governance capability improvements in Defender for Cloud](episode-twenty-six.md)
defender-for-cloud Episode Twenty Four https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-four.md
Last updated 04/27/2023
# Enhancements in Defender for SQL vulnerability assessment | Defender for Cloud in the field **Episode description**: In this episode of Defender for Cloud in the Field, Catalin Esanu joins Yuri Diogenes to talk about the enhancements in Defender for SQL Vulnerability Assessment (VA) capability that were announced. Catalin explains how the new SQL VA Express changed to allow a frictionless onboarding experience and how it became easier to manage VA baselines. Catalin demonstrates how to enable this experience and how to customize the baseline with companion scripts.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=cbd8ace6-4602-4900-bb73-cf8986605639" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+> [!VIDEO https://aka.ms/docs/player?id=cbd8ace6-4602-4900-bb73-cf8986605639]
- [01:23](/shows/mdc-in-the-field/defender-sql-enhancements#time=01m23s) - Architecture change in SQL VA - [05:30](/shows/mdc-in-the-field/defender-sql-enhancements#time=05m30s) - Enabling SQL VA Express
Last updated 04/27/2023
- [08:49](/shows/mdc-in-the-field/defender-sql-enhancements#time=08m49s) - Other additions to SQL VA Express - [12:56](/shows/mdc-in-the-field/defender-sql-enhancements#time=12m56s) - Demonstration - ## Recommended resources - [Learn more](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/new-express-configuration-for-vulnerability-assessment-in/ba-p/3695390) about Defender for SQL Vulnerability Assessment (VA). - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
Last updated 04/27/2023
- Follow us on social media:
- - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
- [Twitter](https://twitter.com/msftsecurity) - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [AWS ECR Coverage in Defender for Containers](episode-twenty-five.md)
+> [AWS ECR Coverage in Defender for Containers](episode-twenty-five.md)
defender-for-cloud Episode Twenty Nine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-nine.md
Last updated 04/27/2023
# Security policy enhancements in Defender for Cloud **Episode description**: In this episode of Defender for Cloud in the field, Tuval Rozner joins Yuri Diogenes to talk about the new security policy enhancements. Tuval covers the new security policy dashboard within Defender for Cloud, how to filter, and create exemptions from a single place without having to make changes in the Azure Policy dashboard. Tuval also demonstrates how to use the new dashboard and customize policies.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=1145810e-fc14-4d73-8d63-ea861aefb30b" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+> [!VIDEO https://aka.ms/docs/player?id=1145810e-fc14-4d73-8d63-ea861aefb30b]
- [01:21](/shows/mdc-in-the-field/security-policy#time=01m21s) - The rationale behind changing the security policy assignment experience - [02:20](/shows/mdc-in-the-field/security-policy#time=02m20s) - What's new in the security policy assignment in Defender for Cloud?
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [New Custom Recommendations for AWS and GCP in Defender for Cloud](episode-thirty.md)
+> [New Custom Recommendations for AWS and GCP in Defender for Cloud](episode-thirty.md)
defender-for-cloud Episode Twenty One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-one.md
Last updated 04/27/2023
# Latest updates in the regulatory compliance dashboard | Defender for Cloud in the Field **Episode description**: In this episode of Defender for Cloud in the Field, Ronit Reger joins Yuri Diogenes to talk about the latest updates in the regulatory compliance dashboard that were released at Ignite. Ronit talks about the new attestation capability and the new Microsoft cloud security benchmark. Ronit also demonstrates how to create manual attestations in the regulatory compliance dashboard.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=b4aff57d-737e-4bf7-8748-4220131b730c" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+> [!VIDEO https://aka.ms/docs/player?id=b4aff57d-737e-4bf7-8748-4220131b730c]
- [00:00](/shows/mdc-in-the-field/update-regulatory#time=00m00s) - Intro
Last updated 04/27/2023
- Follow us on social media:
- - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
- [Twitter](https://twitter.com/msftsecurity) - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
defender-for-cloud Episode Twenty Seven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-seven.md
Last updated 04/27/2023
# Demystifying Defender for Servers | Defender for Cloud in the field **Episode description**: In this episode of Defender for Cloud in the Field, Tom Janetscheck joins Yuri Diogenes to talk about the different deployment options in Defender for Servers. Tom covers the different agents available and the scenarios that will be most used for each agent, including the agentless feature. Tom also talks about the different vulnerability assessment solutions available, and how to deploy Defender for Servers at scale via policy or custom automation.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=dd9d789d-6685-47f1-9947-d31966aa4372" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+> [!VIDEO https://aka.ms/docs/player?id=dd9d789d-6685-47f1-9947-d31966aa4372]
- [02:14](/shows/mdc-in-the-field/demystify-servers#time=02m14s) - Understanding Defender for Servers P1 and P2 - [06:15](/shows/mdc-in-the-field/demystify-servers#time=06m15s) - Pricing model
Last updated 04/27/2023
- [17:11](/shows/mdc-in-the-field/demystify-servers#time=17m11s) - The case for agentless implementation - [22:52](/shows/mdc-in-the-field/demystify-servers#time=22m52s) - Deploying Defender for Servers at scale - ## Recommended resources - Learn more about [Defender for Servers](plan-defender-for-servers.md) - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
Last updated 04/27/2023
- Follow us on social media:
- - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
- [Twitter](https://twitter.com/msftsecurity) - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [Zero Trust and Defender for Cloud](episode-twenty-eight.md)
+> [Zero Trust and Defender for Cloud](episode-twenty-eight.md)
defender-for-cloud Episode Twenty Six https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-six.md
Last updated 04/27/2023
# Governance capability improvements in Defender for Cloud | Defender for Cloud in the field **Episode description**: In this episode of Defender for Cloud in the Field, Lior Arviv joins Yuri Diogenes to talk about the Governance capability improvements in Defender for Cloud. Lior gives a quick recap of the business need for governance and covers the new at scale governance capability. Lior demonstrates how to deploy governance at scale and how to monitor rules assignments and define priorities.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=b1581d03-6575-4f13-b2ed-5b0c22d80c63" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+> [!VIDEO https://aka.ms/docs/player?id=b1581d03-6575-4f13-b2ed-5b0c22d80c63]
- [01:13](/shows/mdc-in-the-field/governance-improvements#time=01m13s) - Reviewing the need for cloud security governance - [04:10](/shows/mdc-in-the-field/governance-improvements#time=04m10s) - Governance at scale
Last updated 04/27/2023
- [07:45](/shows/mdc-in-the-field/governance-improvements#time=07m45s) - Demonstration - [19:00](/shows/mdc-in-the-field/governance-improvements#time=19m00s) - Learn more about governance - ## Recommended resources - Learn how to [drive your organization to remediate security recommendations with governance](governance-rules.md) - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
Last updated 04/27/2023
- Follow us on social media:
- - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
- [Twitter](https://twitter.com/msftsecurity) - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [Demystifying Defender for Servers](episode-twenty-seven.md)
+> [Demystifying Defender for Servers](episode-twenty-seven.md)
defender-for-cloud Episode Twenty Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-three.md
Last updated 04/27/2023
# Defender threat Intelligence | Defender for Cloud in the Field **Episode description**: In this episode of Defender for Cloud in the Field, Alexandra Roland joins Yuri Diogenes to talk about Microsoft Defender Threat Intelligence (Defender TI). Alexandra explains how Defender TI works and how it integrates with Defender EASM. Alexandra goes over an end-to-end scenario to demonstrate how to use Defender TI to perform a security investigation based on the data collected by the platform.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=adfb8027-21ca-4bd0-9e54-28b0d642558a" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+> [!VIDEO https://aka.ms/docs/player?id=adfb8027-21ca-4bd0-9e54-28b0d642558a]
- [04:09](/shows/mdc-in-the-field/threat-intelligence#time=04m09s) - How Defender for Cloud leverages Defender TI
defender-for-cloud Episode Twenty Two https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-two.md
Last updated 04/27/2023
# Defender EASM | Defender for Cloud in the Field **Episode description**: In this episode of Defender for Cloud in the Field, Jamil Mirza joins Yuri Diogenes to talk about Microsoft Defender External Attack Surface Management (Defender EASM). Jamil explains how Defender EASM continuously discovers and maps your digital attack surface to provide an external view of your online infrastructure. Jamil also covers the integration with Defender for Cloud, how it works, and he demonstrates different capabilities available in Defender EASM.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=5a3e2eab-52ce-4527-94e0-baae1b9cc81d" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+> [!VIDEO https://aka.ms/docs/player?id=5a3e2eab-52ce-4527-94e0-baae1b9cc81d]
- [01:11](/shows/mdc-in-the-field/defender-easm#time=01m11s) - What is Defender EASM?
Last updated 04/27/2023
- [11:51](/shows/mdc-in-the-field/defender-easm#time=11m51s) - Demonstration - ## Recommended resources - [Learn more](concept-easm.md) about external attack surface management. - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
defender-for-cloud Episode Twenty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty.md
Last updated 04/27/2023
# Cloud security explorer and attack path analysis | Defender for Cloud in the Field **Episode description**: In this episode of Defender for Cloud in the Field, Tal Rosler joins Yuri Diogenes to talk about cloud security explorer and attack path analysis, two new capabilities in Defender CSPM that were released at Ignite. The talk explains the rationale behind creating these features and how to use these features to prioritize what is more important to keep your environment more secure. Tal also demonstrates how to use these capabilities to quickly identify vulnerabilities and misconfigurations in cloud workloads.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=ce442350-7fab-40c0-b934-d93027b00853" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+> [!VIDEO https://aka.ms/docs/player?id=ce442350-7fab-40c0-b934-d93027b00853]
- [01:27](/shows/mdc-in-the-field/security-explorer#time=01m27s) - The business case for cloud security graph
Last updated 04/27/2023
- Follow us on social media:
- - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
- [Twitter](https://twitter.com/msftsecurity) - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
defender-for-cloud Episode Two https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-two.md
Last updated 04/27/2023
**Episode description**: In this episode of Defender for Cloud in the field, David Trigano joins Yuri Diogenes to share the new integration of Microsoft Defender for Cloud with Microsoft Purview, which was released at Ignite 2021.
-David explains the use case scenarios for this integration and how the data classification is done by Microsoft Purview can help prioritize recommendations and alerts in Defender for Cloud. David also demonstrates the overall experience of data enrichment based on the information that flows from Microsoft Purview to Defender for Cloud.
+David explains the use case scenarios for this integration and how the data classification done by Microsoft Purview can help prioritize recommendations and alerts in Defender for Cloud. David also demonstrates the overall experience of data enrichment based on the information that flows from Microsoft Purview to Defender for Cloud.
-<br>
-<br>
-<iframe src="https://aka.ms/docs/player?id=9b911e9c-e933-4b7b-908a-5fd614f822c7" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=9b911e9c-e933-4b7b-908a-5fd614f822c7]
- [1:36](/shows/mdc-in-the-field/integrate-with-purview) - Overview of Microsoft Purview
Learn more about the [integration with Microsoft Purview](information-protection
## Next steps > [!div class="nextstepaction"]
-> [Watch Episode 3](episode-three.md)
+> [Watch Episode 3](episode-three.md)
defender-for-cloud How To Manage Cloud Security Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-cloud-security-explorer.md
Title: Build queries with cloud security explorer
description: Learn how to build queries in cloud security explorer to find vulnerabilities that exist on your multicloud environment. Previously updated : 05/16/2023 Last updated : 08/10/2023 # Build queries with cloud security explorer
With the cloud security explorer, you can query all of your security issues and
Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md).
+## Availability
+
+| Aspect | Details |
+|--|--|
+| Release state | GA (General Availability) |
+| Required plans | - Defender Cloud Security Posture Management (CSPM) enabled<br>- Defender for Servers P2 customers can use the explorer UI to query for keys and secrets, but must have Defender CSPM enabled to get the full value of the Explorer. |
+| Required roles and permissions: | - **Security Reader** <br> - **Security Admin** <br> - **Reader** <br> - **Contributor** <br> - **Owner** |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/no-icon.png"::: Commercial clouds (GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet) |
+ ## Prerequisites - You must [enable Defender CSPM](enable-enhanced-security.md).
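If you prefer the command line over the portal, a minimal sketch of enabling the Defender CSPM plan with the Azure CLI follows. The plan name `CloudPosture` and the use of the currently selected subscription are assumptions; verify them with `az security pricing list` before relying on this.

```azurecli
# Sketch (assumptions noted above): enable the Defender CSPM plan on the
# currently selected subscription, then confirm its pricing tier.
az security pricing create --name CloudPosture --tier Standard
az security pricing show --name CloudPosture
```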
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
Last updated 07/20/2023
-# Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint
+# Protect your servers with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint
With Microsoft Defender for Servers, you gain access to and can deploy [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) to your server resources. Microsoft Defender for Endpoint is a holistic, cloud-delivered, endpoint security solution. The main features include:
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md
This page describes how to use Microsoft Defender for Cloud's set of security re
## Set up your workload protection
-Microsoft Defender for Cloud includes a bundle of recommendations that are available once you've installed the **Azure Policy add-on/extension for Kubernetes**.
+Microsoft Defender for Cloud includes a bundle of recommendations that are available once you've installed the **[Azure Policy for Kubernetes](defender-for-cloud-glossary.md#azure-policy-for-kubernetes)**.
## Prerequisites
Microsoft Defender for Cloud includes a bundle of recommendations that are avail
## Enable Kubernetes data plane hardening
-You can enable the Azure policy for Kubernetes by one of two ways:
+You can enable Azure Policy for Kubernetes in one of two ways:
- Enable for all current and future clusters using plan/connector settings - [Enabling for Azure subscriptions or on-premises](#enabling-for-azure-subscriptions-or-on-premises)
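Outside the portal flows listed above, the add-on can also be turned on for a single existing AKS cluster from the Azure CLI. A minimal sketch, where the cluster and resource group names are placeholders:

```azurecli
# Sketch: enable the Azure Policy for Kubernetes add-on on one AKS cluster.
# Replace the placeholders with your own cluster and resource group names.
az aks enable-addons \
  --addons azure-policy \
  --name <cluster-name> \
  --resource-group <resource-group>
```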
defender-for-cloud Monitoring Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/monitoring-components.md
Data is collected using:
- [Azure Monitor Agent](auto-deploy-azure-monitoring-agent.md) (AMA) - [Microsoft Defender for Endpoint](integration-defender-for-endpoint.md) (MDE) - [Log Analytics agent](working-with-log-analytics-agent.md)-- **Security components**, such as the [Azure Policy Add-on for Kubernetes](../governance/policy/concepts/policy-for-kubernetes.md)
+- **Security components**, such as the [Azure Policy for Kubernetes](../governance/policy/concepts/policy-for-kubernetes.md)
## Why use Defender for Cloud to deploy monitoring components?
These plans use monitoring components to collect data:
- Automatic SQL server discovery and registration - Defender for Containers - [Azure Arc agent](../azure-arc/servers/manage-vm-extensions.md) (For multicloud and on-premises servers)
- - [Defender profile, Azure Policy Extension, Kubernetes audit log data](defender-for-containers-introduction.md)
+ - [Defender agent, Azure Policy for Kubernetes, Kubernetes audit log data](defender-for-containers-introduction.md)
## Availability of extensions
By default, the required extensions are enabled when you enable Defender for Con
| Aspect | Azure Kubernetes Service clusters | Azure Arc-enabled Kubernetes clusters | ||-||
-| Release state: | • Defender profile: GA<br> • Azure Policy add-on: Generally available (GA) | • Defender extension: Preview<br> • Azure Policy extension: Preview |
+| Release state: | • Defender agent: GA<br> • Azure Policy for Kubernetes: Generally available (GA) | • Defender agent: Preview<br> • Azure Policy for Kubernetes: Preview |
| Relevant Defender plan: | [Microsoft Defender for Containers](defender-for-containers-introduction.md) | [Microsoft Defender for Containers](defender-for-containers-introduction.md) | | Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) or [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) | [Owner](../role-based-access-control/built-in-roles.md#owner) or [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) |
-| Supported destinations: | The AKS Defender profile only supports [AKS clusters that have RBAC enabled](../aks/concepts-identity.md#kubernetes-rbac). | [See Kubernetes distributions supported for Arc-enabled Kubernetes](supported-machines-endpoint-solutions-clouds-containers.md?tabs=azure-aks#kubernetes-distributions-and-configurations) |
+| Supported destinations: | The AKS Defender agent only supports [AKS clusters that have RBAC enabled](../aks/concepts-identity.md#kubernetes-rbac). | [See Kubernetes distributions supported for Arc-enabled Kubernetes](supported-machines-endpoint-solutions-clouds-containers.md?tabs=azure-aks#kubernetes-distributions-and-configurations) |
| Policy-based: | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes |
-| Clouds: | **Defender profile**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Microsoft Azure operated by 21Vianet<br>**Azure Policy add-on**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Microsoft Azure operated by 21Vianet|**Defender extension**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Microsoft Azure operated by 21Vianet<br>**Azure Policy extension for Azure Arc**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Microsoft Azure operated by 21Vianet|
+| Clouds: | **Defender agent**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Microsoft Azure operated by 21Vianet<br>**Azure Policy for Kubernetes**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Microsoft Azure operated by 21Vianet|**Defender agent**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Microsoft Azure operated by 21Vianet<br>**Azure Policy for Kubernetes**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Microsoft Azure operated by 21Vianet|
Learn more about the [roles used to provision Defender for Containers extensions](permissions.md#roles-used-to-automatically-provision-agents-and-extensions).
Learn more about:
- [Setting up email notifications](configure-email-notifications.md) for security alerts - Protecting workloads with [the Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads)--
defender-for-cloud Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/permissions.md
Defender for Cloud assesses the configuration of your resources to identify secu
In addition to the built-in roles, there are two roles specific to Defender for Cloud:
-* **Security Reader**: A user that belongs to this role has read-only access to Defender for Cloud. The user can view recommendations, alerts, a security policy, and security states, but can't make changes.
-* **Security Admin**: A user that belongs to this role has the same access as the Security Reader and can also update the security policy, dismiss alerts and recommendations, and apply recommendations.
+- **Security Reader**: A user that belongs to this role has read-only access to Defender for Cloud. The user can view recommendations, alerts, a security policy, and security states, but can't make changes.
+- **Security Admin**: A user that belongs to this role has the same access as the Security Reader and can also update the security policy, dismiss alerts and recommendations, and apply recommendations.
We recommend that you assign the least permissive role needed for users to complete their tasks. For example, assign the Reader role to users who only need to view information about the security health of a resource but not take action, such as applying recommendations or editing policies.
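As an illustration of that least-privilege guidance, here is a hedged Azure CLI sketch that grants a user read-only Defender for Cloud access at subscription scope; the user principal name and subscription ID are placeholders.

```azurecli
# Sketch: assign the built-in Security Reader role at subscription scope.
az role assignment create \
  --assignee <user-principal-name> \
  --role "Security Reader" \
  --scope /subscriptions/<subscription-id>
```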
The specific role required to deploy monitoring components depends on the extens
## Roles used to automatically provision agents and extensions To allow the Security Admin role to automatically provision agents and extensions used in Defender for Cloud plans, Defender for Cloud uses policy remediation in a similar way to [Azure Policy](../governance/policy/how-to/remediate-resources.md). To use remediation, Defender for Cloud needs to create service principals, also called managed identities that assign roles at the subscription level. For example, the service principals for the Defender for Containers plan are:
-
+ | Service Principal | Roles | |:-|:-| | Defender for Containers provisioning AKS Security Profile | • Kubernetes Extension Contributor<br>• Contributor<br>• Azure Kubernetes Service Contributor<br>• Log Analytics Contributor | | Defender for Containers provisioning Arc-enabled Kubernetes | • Azure Kubernetes Service Contributor<br>• Kubernetes Extension Contributor<br>• Contributor<br>• Log Analytics Contributor |
-| Defender for Containers provisioning Azure Policy Addon for Kubernetes | • Kubernetes Extension Contributor<br>• Contributor<br>• Azure Kubernetes Service Contributor |
+| Defender for Containers provisioning Azure Policy for Kubernetes | • Kubernetes Extension Contributor<br>• Contributor<br>• Azure Kubernetes Service Contributor |
| Defender for Containers provisioning Policy extension for Arc-enabled Kubernetes | • Azure Kubernetes Service Contributor<br>• Kubernetes Extension Contributor<br>• Contributor | ## Next steps+ This article explained how Defender for Cloud uses Azure RBAC to assign permissions to users and identified the allowed actions for each role. Now that you're familiar with the role assignments needed to monitor the security state of your subscription, edit security policies, and apply recommendations, learn how to: - [Set security policies in Defender for Cloud](tutorial-security-policy.md)
defender-for-cloud Plan Multicloud Security Determine Data Residency Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-data-residency-requirements.md
Defender for Containers has both agent-based and agentless components.
- **Agentless collection of Kubernetes audit log data**: [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) or GCP Cloud Logging enables and collects audit log data, and sends the collected information to Defender for Cloud for further analysis. Data storage is based on the EKS cluster AWS region, in accordance with GDPR - EU and US. - **Agent-based Azure Arc-enabled Kubernetes**: Connects your EKS and GKE clusters to Azure using [Azure Arc agents](../azure-arc/kubernetes/conceptual-agent-overview.md), so that they're treated as Azure Arc resources.-- **Microsoft Defender extension**: A DaemonSet that collects signals from hosts using eBPF technology, and provides runtime protection. The extension is registered with a Log Analytics workspace and used as a data pipeline. The audit log data isn't stored in the Log Analytics workspace.-- **Azure Policy extension**: configuration information is collected by the Azure Policy add-on.
- - The Azure Policy add-on extends the open-source Gatekeeper v3 admission controller webhook for Open Policy Agent.
+- **[Defender agent](defender-for-cloud-glossary.md#defender-agent)**: A DaemonSet that collects signals from hosts using eBPF technology, and provides runtime protection. The extension is registered with a Log Analytics workspace and used as a data pipeline. The audit log data isn't stored in the Log Analytics workspace.
+- **Azure Policy for Kubernetes**: configuration information is collected by Azure Policy for Kubernetes.
+ - Azure Policy for Kubernetes extends the open-source Gatekeeper v3 admission controller webhook for Open Policy Agent.
- The extension registers as a web hook to Kubernetes admission control and makes it possible to apply at-scale enforcement, safeguarding your clusters in a centralized, consistent manner. ## Defender for Databases plan
When it comes to the actual AWS and GCP resources that are protected by Defender
## Next steps
-In this article, you have learned how to determine your data residency requirements when designing a multicloud security solution. Continue with the next step to [determine compliance requirements](plan-multicloud-security-determine-compliance-requirements.md).
+In this article, you have learned how to determine your data residency requirements when designing a multicloud security solution. Continue with the next step to [determine compliance requirements](plan-multicloud-security-determine-compliance-requirements.md).
defender-for-cloud Plan Multicloud Security Determine Multicloud Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-multicloud-dependencies.md
The following table summarizes agent requirements for CWPP.
|Microsoft Defender for Endpoint extension |✔| |Vulnerability assessment| ✔| | |Log Analytics or Azure Monitor Agent (preview) extension|✔| |✔|
-|Defender profile| | ✔| |
-|Azure policy extension | | ✔| |
+|Defender agent| | ✔| |
+|Azure Policy for Kubernetes | | ✔| |
|Kubernetes audit log data | | ✔| | |SQL servers on machines | | | ✔| |Automatic SQL server discovery and registration | | | ✔|
Enabling Defender for Containers provides GKE and EKS clusters and underlying ho
The required [components](./defender-for-containers-introduction.md) are as follows: -- **Azure Arc Agent**: Connects your GKE and EKS clusters to Azure, and onboards the Defender Profile.-- **Defender Profile**: Provides host-level runtime threat protection. -- **Azure Policy extension**: Extends the Gatekeeper v3 to monitor every request to the Kubernetes API server, and ensures that security best practices are being followed on clusters and workloads.
+- **Azure Arc Agent**: Connects your GKE and EKS clusters to Azure, and onboards the Defender agent.
+- **[Defender agent](defender-for-cloud-glossary.md#defender-agent)**: Provides host-level runtime threat protection.
+- **Azure Policy for Kubernetes**: Extends the Gatekeeper v3 to monitor every request to the Kubernetes API server, and ensures that security best practices are being followed on clusters and workloads.
- **Kubernetes audit logs**: Audit logs from the API server allow Defender for Containers to identify suspicious activity within your multicloud servers, and provide deeper insights while investigating alerts. Sending of the "Kubernetes audit logs" needs to be enabled on the connector level. #### Check networking requirements-Defender for Containers
-Make sure to check that your clusters meet network requirements so that the Defender Profile can connect with Defender for Cloud.
+Make sure to check that your clusters meet network requirements so that the Defender agent can connect with Defender for Cloud.
### Defender for SQL
To receive the full benefits of Defender for SQL on your multicloud workload, yo
## Next steps
-In this article, you have learned how to determine multicloud dependencies when designing a multicloud security solution. Continue with the next step to [automate connector deployment](plan-multicloud-security-automate-connector-deployment.md).
+In this article, you have learned how to determine multicloud dependencies when designing a multicloud security solution. Continue with the next step to [automate connector deployment](plan-multicloud-security-automate-connector-deployment.md).
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Microsoft Defender for Containers brings threat detection and advanced defenses
> - Defender for Containers when deployed on GCP, may incur external costs such as [logging costs](https://cloud.google.com/stackdriver/pricing), [pub/sub costs](https://cloud.google.com/pubsub/pricing) and [egress costs](https://cloud.google.com/vpc/network-pricing#:~:text=Platform%20SKUs%20apply.-%2cInternet%20egress%20rates%2c-Premium%20Tier%20pricing). - **Kubernetes audit logs to Defender for Cloud**: Enabled by default. This configuration is available at the GCP project level only. It provides agentless collection of the audit log data through [GCP Cloud Logging](https://cloud.google.com/logging/) to the Microsoft Defender for Cloud back end for further analysis.-- **Azure Arc-enabled Kubernetes, the Defender extension, and the Azure Policy extension**: Enabled by default. You can install Azure Arc-enabled Kubernetes and its extensions on your GKE clusters in three ways:
+- **Azure Arc-enabled Kubernetes, the Defender agent, and Azure Policy for Kubernetes**: Enabled by default. You can install Azure Arc-enabled Kubernetes and its extensions on your GKE clusters in three ways:
- Enable Defender for Containers autoprovisioning at the project level, as explained in the instructions in this section. We recommend this method. - Use Defender for Cloud recommendations for per-cluster installation. They appear on the Microsoft Defender for Cloud recommendations page. [Learn how to deploy the solution to specific clusters](defender-for-containers-enable.md?tabs=defender-for-container-gke#deploy-the-solution-to-specific-clusters). - Manually install [Arc-enabled Kubernetes](../azure-arc/kubernetes/quickstart-connect-cluster.md) and [extensions](../azure-arc/kubernetes/extensions.md).
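For the manual route, a rough sketch of connecting a non-Azure cluster to Azure Arc and installing both extensions with the Azure CLI follows. The extension type names (`microsoft.azuredefender.kubernetes`, `Microsoft.PolicyInsights`) and resource names are assumptions; confirm them against the Arc extension documentation linked above before use.

```azurecli
# Sketch: connect a non-Azure cluster to Azure Arc (run with your kubectl
# context pointing at the cluster), then install the two extensions.
az connectedk8s connect --name <cluster-name> --resource-group <resource-group>

# Defender agent extension (extension type name is an assumption; verify it).
az k8s-extension create \
  --name microsoft.azuredefender.kubernetes \
  --extension-type microsoft.azuredefender.kubernetes \
  --cluster-type connectedClusters \
  --cluster-name <cluster-name> \
  --resource-group <resource-group>

# Azure Policy for Kubernetes extension (extension type name is an assumption).
az k8s-extension create \
  --name azurepolicy \
  --extension-type Microsoft.PolicyInsights \
  --cluster-type connectedClusters \
  --cluster-name <cluster-name> \
  --resource-group <resource-group>
```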
defender-for-cloud Recommendations Reference Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-aws.md
impact on your secure score.
### Data plane recommendations
-All the data plane recommendations listed [here](kubernetes-workload-protections.md#view-and-configure-the-bundle-of-recommendations) are supported under AWS after [enabling the Azure policy extension](kubernetes-workload-protections.md#enable-kubernetes-data-plane-hardening).
+All the data plane recommendations listed [here](kubernetes-workload-protections.md#view-and-configure-the-bundle-of-recommendations) are supported under AWS after [enabling Azure Policy for Kubernetes](kubernetes-workload-protections.md#enable-kubernetes-data-plane-hardening).
## <a name='recs-aws-data'></a> AWS Data recommendations
defender-for-cloud Recommendations Reference Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-gcp.md
impact on your secure score.
### Data plane recommendations
-All the data plane recommendations listed [here](kubernetes-workload-protections.md#view-and-configure-the-bundle-of-recommendations) are supported under GCP after [enabling the Azure policy extension](kubernetes-workload-protections.md#enable-kubernetes-data-plane-hardening).
+All the data plane recommendations listed [here](kubernetes-workload-protections.md#view-and-configure-the-bundle-of-recommendations) are supported under GCP after [enabling Azure Policy for Kubernetes](kubernetes-workload-protections.md#enable-kubernetes-data-plane-hardening).
## <a name='recs-gcp-data'></a> GCP Data recommendations
defender-for-cloud Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference.md
This article lists the recommendations you might see in Microsoft Defender for C
shown in your environment depend on the resources you're protecting and your customized configuration.
-Recommendations in Defender for Cloud are based on the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction).
-the Microsoft cloud security benchmark is the Microsoft-authored set of guidelines for security
-and compliance best practices based on common compliance frameworks. This widely respected benchmark
-builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/)
-and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on
+Recommendations in Defender for Cloud are based on the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction).
+The Microsoft cloud security benchmark is the Microsoft-authored set of guidelines for security
+and compliance best practices based on common compliance frameworks. This widely respected benchmark
+builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/)
+and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on
cloud-centric security. To learn about how to respond to these recommendations, see
impact on your secure score.
(Preview) API Management minimum API version should be set to 2019-12-01 or higher|To prevent service secrets from being shared with read-only users, the minimum API version should be set to 2019-12-01 or higher.|Medium (Preview) API Management calls to API backends should be authenticated|Calls from API Management to backends should use some form of authentication, whether via certificates or credentials. Does not apply to Service Fabric backends.|Medium
+## AI recommendations
+
+| Recommendation | Description & related policy | Severity |
+| | | -- |
+| Resource logs in Azure Machine Learning Workspaces should be enabled (Preview) | Resource logs enable recreating activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. | Medium |
+| Azure Machine Learning Workspaces should disable public network access (Preview) | Disabling public network access improves security by ensuring that the Machine Learning Workspaces aren't exposed on the public internet. You can control exposure of your workspaces by creating private endpoints instead. For more information, see [Configure a private endpoint for an Azure Machine Learning workspace](/azure/machine-learning/how-to-configure-private-link). | Medium |
+| Azure Machine Learning Computes should be in a virtual network (Preview) | Azure Virtual Networks provide enhanced security and isolation for your Azure Machine Learning Compute Clusters and Instances, as well as subnets, access control policies, and other features to further restrict access. When a compute is configured with a virtual network, it is not publicly addressable and can only be accessed from virtual machines and applications within the virtual network. | Medium |
+| Azure Machine Learning Computes should have local authentication methods disabled (Preview) | Disabling local authentication methods improves security by ensuring that Machine Learning Computes require Azure Active Directory identities exclusively for authentication. For more information, see [Azure Policy Regulatory Compliance controls for Azure Machine Learning](/azure/machine-learning/security-controls-policy). | Medium |
+| Azure Machine Learning compute instances should be recreated to get the latest software updates (Preview) | Ensure Azure Machine Learning compute instances run on the latest available operating system. Security is improved and vulnerabilities reduced by running with the latest security patches. For more information, see [Vulnerability management for Azure Machine Learning](/azure/machine-learning/concept-vulnerability-management#compute-instance). | Medium |
+| Resource logs in Azure Databricks Workspaces should be enabled (Preview) | Resource logs enable recreating activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. | Medium |
+| Azure Databricks Workspaces should disable public network access (Preview) | Disabling public network access improves security by ensuring that the resource isn't exposed on the public internet. You can control exposure of your resources by creating private endpoints instead. For more information, see [Enable Azure Private Link](/azure/databricks/administration-guide/cloud-configurations/azure/private-link). | Medium |
+| Azure Databricks Clusters should disable public IP (Preview) | Disabling public IP of clusters in Azure Databricks Workspaces improves security by ensuring that the clusters aren't exposed on the public internet. For more information, see [Secure cluster connectivity](/azure/databricks/security/network/secure-cluster-connectivity). | Medium |
+| Azure Databricks Workspaces should be in a virtual network (Preview) | Azure Virtual Networks provide enhanced security and isolation for your Azure Databricks Workspaces, as well as subnets, access control policies, and other features to further restrict access. For more information, see [Deploy Azure Databricks in your Azure virtual network](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject). | Medium |
+| Azure Databricks Workspaces should use private link (Preview) | Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Databricks workspaces, you can reduce data leakage risks. For more information, see [Create the workspace and private endpoints in the Azure portal UI](/azure/databricks/administration-guide/cloud-configurations/azure/private-link-standard#create-the-workspace-and-private-endpoints-in-the-azure-portal-ui). | Medium |
+ ## Deprecated recommendations |Recommendation|Description & related policy|Severity|
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
The new security agent is a Kubernetes DaemonSet, based on eBPF technology and i
The security agent enablement is available through auto-provisioning, recommendations flow, AKS RP or at scale using Azure Policy.
-You can [deploy the Defender profile](./defender-for-containers-enable.md?pivots=defender-for-container-aks&tabs=aks-deploy-portal%2ck8s-deploy-asc%2ck8s-verify-asc%2ck8s-remove-arc%2caks-removeprofile-api#deploy-the-defender-profile) today on your AKS clusters.
+You can [deploy the Defender agent](./defender-for-containers-enable.md?pivots=defender-for-container-aks&tabs=aks-deploy-portal%2ck8s-deploy-asc%2ck8s-verify-asc%2ck8s-remove-arc%2caks-removeprofile-api#deploy-the-defender-agent) today on your AKS clusters.
With this announcement, the runtime protection - threat detection (workload) is now also generally available.
Updates in May include:
- [Multicloud settings of Servers plan are now available in connector level](#multicloud-settings-of-servers-plan-are-now-available-in-connector-level) - [JIT (Just-in-time) access for VMs is now available for AWS EC2 instances (Preview)](#jit-just-in-time-access-for-vms-is-now-available-for-aws-ec2-instances-preview)-- [Add and remove the Defender profile for AKS clusters using the CLI](#add-and-remove-the-defender-profile-for-aks-clusters-using-the-cli)
+- [Add and remove the Defender agent for AKS clusters using the CLI](#add-and-remove-the-defender-agent-for-aks-clusters-using-the-cli)
### Multicloud settings of Servers plan are now available in connector level
When you [connect AWS accounts](quickstart-onboard-aws.md), JIT will automatical
Learn how [JIT protects your AWS EC2 instances](just-in-time-access-overview.md#how-jit-operates-with-network-resources-in-azure-and-aws)
-### Add and remove the Defender profile for AKS clusters using the CLI
+### Add and remove the Defender agent for AKS clusters using the CLI
-The Defender profile (preview) is required for Defender for Containers to provide the runtime protections and collects signals from nodes. You can now use the Azure CLI to [add and remove the Defender profile](defender-for-containers-enable.md?tabs=k8s-deploy-cli%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Ck8s-remove-cli&pivots=defender-for-container-aks#use-azure-cli-to-deploy-the-defender-extension) for an AKS cluster.
+The [Defender agent](defender-for-cloud-glossary.md#defender-agent) is required for Defender for Containers to provide the runtime protections and collects signals from nodes. You can now use the Azure CLI to [add and remove the Defender agent](defender-for-containers-enable.md?tabs=k8s-deploy-cli%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Ck8s-remove-cli&pivots=defender-for-container-aks#use-azure-cli-to-deploy-the-defender-agent) for an AKS cluster.
> [!NOTE] > This option is included in [Azure CLI 3.7 and above](/cli/azure/update-azure-cli).
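A minimal sketch of what that CLI flow looks like, assuming current `az aks` flags and placeholder cluster and resource group names:

```azurecli
# Sketch: add the Defender agent (security profile) to an existing AKS cluster...
az aks update --enable-defender --name <cluster-name> --resource-group <resource-group>

# ...and remove it again when it's no longer needed.
az aks update --disable-defender --name <cluster-name> --resource-group <resource-group>
```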
Learn more about this scanner in [Use Azure Defender for container registries to
Azure Defender for Kubernetes is expanding its threat protection capabilities to defend your clusters wherever they're deployed. This has been enabled by integrating with [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and its new [extensions capabilities](../azure-arc/kubernetes/extensions.md).
-When you've enabled Azure Arc on your non-Azure Kubernetes clusters, a new recommendation from Azure Security Center offers to deploy the Azure Defender extension to them with only a few clicks.
+When you've enabled Azure Arc on your non-Azure Kubernetes clusters, a new recommendation from Azure Security Center offers to deploy the Azure Defender agent to them with only a few clicks.
Use the recommendation (**Azure Arc-enabled Kubernetes clusters should have Azure Defender's extension installed**) and the extension to protect Kubernetes clusters deployed in other cloud providers, although not on their managed Kubernetes services. This integration between Azure Security Center, Azure Defender, and Azure Arc-enabled Kubernetes brings: -- Easy provisioning of the Azure Defender extension to unprotected Azure Arc-enabled Kubernetes clusters (manually and at-scale)-- Monitoring of the Azure Defender extension and its provisioning state from the Azure Arc Portal
+- Easy provisioning of the Azure Defender agent to unprotected Azure Arc-enabled Kubernetes clusters (manually and at-scale)
+- Monitoring of the Azure Defender agent and its provisioning state from the Azure Arc Portal
- Security recommendations from Security Center are reported in the new Security page of the Azure Arc Portal - Identified security threats from Azure Defender are reported in the new Security page of the Azure Arc Portal - Azure Arc-enabled Kubernetes clusters are integrated into the Azure Security Center platform and experience Learn more in [Use Azure Defender for Kubernetes with your on-premises and multicloud Kubernetes clusters](defender-for-kubernetes-azure-arc.md). ### Microsoft Defender for Endpoint integration with Azure Defender now supports Windows Server 2019 and Windows 10 on Windows Virtual Desktop released for general availability (GA)
We're happy to announce the general availability (GA) of the set of recommendati
To ensure that Kubernetes workloads are secure by default, Security Center has added Kubernetes level hardening recommendations, including enforcement options with Kubernetes admission control.
-When the Azure Policy add-on for Kubernetes is installed on your Azure Kubernetes Service (AKS) cluster, every request to the Kubernetes API server will be monitored against the predefined set of best practices - displayed as 13 security recommendations - before being persisted to the cluster. You can then configure to enforce the best practices and mandate them for future workloads.
+When Azure Policy for Kubernetes is installed on your Azure Kubernetes Service (AKS) cluster, every request to the Kubernetes API server will be monitored against the predefined set of best practices - displayed as 13 security recommendations - before being persisted to the cluster. You can then configure to enforce the best practices and mandate them for future workloads.
For example, you can mandate that privileged containers shouldn't be created, and any future requests to do so will be blocked.
As Azure Security Center grows, more extensions have been developed and Security
You can now configure the auto provisioning of: - Log Analytics agent-- (New) Azure Policy Add-on for Kubernetes
+- (New) Azure Policy for Kubernetes
- (New) Microsoft Dependency agent Learn more in [Auto provisioning agents and extensions from Azure Security Center](monitoring-components.md).
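Auto provisioning can also be toggled from the CLI. A sketch under the assumption that the `default` setting object governs the subscription-level switch; component-specific choices (such as the Azure Policy for Kubernetes and Dependency agent options mentioned above) remain in the portal settings.

```azurecli
# Sketch: turn on auto provisioning for the current subscription.
az security auto-provisioning-setting update --name default --auto-provision On

# Inspect the current setting.
az security auto-provisioning-setting show --name default
```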
Learn more in [Connect your AWS accounts to Azure Security Center](quickstart-on
To ensure that Kubernetes workloads are secure by default, Security Center is adding Kubernetes level hardening recommendations, including enforcement options with Kubernetes admission control.
-When you've installed the Azure Policy add-on for Kubernetes on your AKS cluster, every request to the Kubernetes API server will be monitored against the predefined set of best practices before being persisted to the cluster. You can then configure to enforce the best practices and mandate them for future workloads.
+When you've installed Azure Policy for Kubernetes on your AKS cluster, every request to the Kubernetes API server will be monitored against the predefined set of best practices before being persisted to the cluster. You can then configure to enforce the best practices and mandate them for future workloads.
For example, you can mandate that privileged containers shouldn't be created, and any future requests to do so will be blocked.
defender-for-cloud Support Matrix Defender For Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-containers.md
This article summarizes support information for the [Defender for Containers pla
| Compliance-Docker CIS | VM, Virtual Machine Scale Set | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Microsoft Azure operated by 21Vianet | | [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) (powered by Qualys) - registry scan [OS packages](#registries-and-images-support-for-akspowered-by-qualys) | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet | | [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) (powered by Qualys) -registry scan [language packages](#registries-and-images-support-for-akspowered-by-qualys) | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| [Vulnerability assessment (powered by Qualys) - running images](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters) | AKS | GA | Preview | Defender profile | Defender for Containers | Commercial clouds |
+| [Vulnerability assessment (powered by Qualys) - running images](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters) | AKS | GA | Preview | Defender agent | Defender for Containers | Commercial clouds |
| [Vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) powered by MDVM - registry scan | ACR, Private ACR | Preview | | Agentless | Defender for Containers | Commercial clouds |
-| [Vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) powered by MDVM - running images | AKS | Preview | | Defender profile | Defender for Containers | Commercial clouds |
+| [Vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) powered by MDVM - running images | AKS | Preview | | Defender agent | Defender for Containers | Commercial clouds |
| [Hardening (control plane)](defender-for-containers-architecture.md) | ACR, AKS | GA | Preview | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet | | [Hardening (Kubernetes data plane)](kubernetes-workload-protections.md) | AKS | GA | - | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government,Azure operated by 21Vianet | | [Runtime threat detection](defender-for-containers-introduction.md#run-time-protection-for-kubernetes-nodes-and-clusters) (control plane)| AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| Runtime threat detection (workload) | AKS | GA | - | Defender profile | Defender for Containers | Commercial clouds |
+| Runtime threat detection (workload) | AKS | GA | - | Defender agent | Defender for Containers | Commercial clouds |
| Discovery/provisioning-Unprotected clusters | AKS | GA | GA | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet | | Discovery/provisioning-Collecting control plane threat data | AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| Discovery/provisioning-Defender profile auto provisioning | AKS | GA | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| Discovery/provisioning-Azure policy add-on auto provisioning | AKS | GA | - | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+| Discovery/provisioning-Defender agent auto provisioning | AKS | GA | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+| Discovery/provisioning-Azure Policy for Kubernetes auto provisioning | AKS | GA | - | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
### Registries and images support for AKS - powered by Qualys
This article summarizes support information for the [Defender for Containers pla
### Private link restrictions
-Defender for Containers relies on the Defender profile/extension for several features. The Defender profile/extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workstation. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
+Defender for Containers relies on the [Defender agent](defender-for-cloud-glossary.md#defender-agent) for several features. The Defender agent doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workstation. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
:::image type="content" source="media/supported-machines-endpoint-solutions-cloud-containers/network-access.png" alt-text="Screenshot that shows where to go to turn off data ingestion.":::
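The same Network Isolation setting can be scripted. A hedged sketch using the Azure CLI, where `--ingestion-access` is assumed to map to the portal toggle described above and the workspace and resource group names are placeholders:

```azurecli
# Sketch: disable public network access for ingestion on the workspace so only
# traffic routed through Azure Monitor Private Link can send data to it.
az monitor log-analytics workspace update \
  --workspace-name <workspace-name> \
  --resource-group <resource-group> \
  --ingestion-access Disabled
```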
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
| Vulnerability Assessment | Registry scan | ECR | Preview | - | Agentless | Defender for Containers | | Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - | | Hardening | Control plane recommendations | - | - | - | - | - |
-| Hardening | Kubernetes data plane recommendations | EKS | Preview | - | Azure Policy extension | Defender for Containers |
+| Hardening | Kubernetes data plane recommendations | EKS | Preview | - | Azure Policy for Kubernetes | Defender for Containers |
| Runtime protection| Threat detection (control plane)| EKS | Preview | Preview | Agentless | Defender for Containers |
-| Runtime protection| Threat detection (workload) | EKS | Preview | - | Defender extension | Defender for Containers |
+| Runtime protection| Threat detection (workload) | EKS | Preview | - | Defender agent | Defender for Containers |
| Discovery and provisioning | Discovery of unprotected clusters | EKS | Preview | - | Agentless | Free | | Discovery and provisioning | Collection of control plane threat data | EKS | Preview | Preview | Agentless | Defender for Containers |
-| Discovery and provisioning | Auto provisioning of Defender extension | - | - | - | - | - |
-| Discovery and provisioning | Auto provisioning of Azure policy extension | - | - | - | - | - |
+| Discovery and provisioning | Auto provisioning of Defender agent | - | - | - | - | - |
+| Discovery and provisioning | Auto provisioning of Azure Policy for Kubernetes | - | - | - | - | - |
### Images support-EKS
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
### Private link restrictions
-Defender for Containers relies on the Defender profile/extension for several features. The Defender profile/extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workstation. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
+Defender for Containers relies on the Defender agent for several features. The Defender agent doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workstation. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
:::image type="content" source="media/supported-machines-endpoint-solutions-cloud-containers/network-access.png" alt-text="Screenshot that shows where to go to turn off data ingestion.":::
Outbound proxy without authentication and outbound proxy with basic authenticati
| Vulnerability Assessment | Registry scan | - | - | - | - | - | | Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - | | Hardening | Control plane recommendations | GKE | GA | GA | Agentless | Free |
-| Hardening | Kubernetes data plane recommendations | GKE | Preview | - | Azure Policy extension | Defender for Containers |
+| Hardening | Kubernetes data plane recommendations | GKE | Preview | - | Azure Policy for Kubernetes | Defender for Containers |
| Runtime protection| Threat detection (control plane)| GKE | Preview | Preview | Agentless | Defender for Containers |
-| Runtime protection| Threat detection (workload) | GKE | Preview | - | Defender extension | Defender for Containers |
+| Runtime protection| Threat detection (workload) | GKE | Preview | - | Defender agent | Defender for Containers |
| Discovery and provisioning | Discovery of unprotected clusters | GKE | Preview | - | Agentless | Free | | Discovery and provisioning | Collection of control plane threat data | GKE | Preview | Preview | Agentless | Defender for Containers |
-| Discovery and provisioning | Auto provisioning of Defender extension | GKE | Preview | - | Agentless | Defender for Containers |
-| Discovery and provisioning | Auto provisioning of Azure policy extension | GKE | Preview | - | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Defender agent | GKE | Preview | - | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Azure Policy for Kubernetes | GKE | Preview | - | Agentless | Defender for Containers |
### Kubernetes distributions/configurations support-GKE
Outbound proxy without authentication and outbound proxy with basic authenticati
### Private link restrictions
-Defender for Containers relies on the Defender profile/extension for several features. The Defender profile/extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workstation. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
+Defender for Containers relies on the Defender agent for several features. The Defender agent doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workstation. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
:::image type="content" source="media/supported-machines-endpoint-solutions-cloud-containers/network-access.png" alt-text="Screenshot that shows where to go to turn off data ingestion.":::
Outbound proxy without authentication and outbound proxy with basic authenticati
| Vulnerability Assessment | Registry scan - [language specific packages](#registries-and-images-support--on-premises) | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | | Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - | | Hardening | Control plane recommendations | - | - | - | - | - |
-| Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | - | Azure Policy extension | Defender for Containers |
-| Runtime protection| Threat detection (control plane)| Arc enabled K8s clusters | Preview | Preview | Defender extension | Defender for Containers |
-| Runtime protection for [supported OS](#registries-and-images-support--on-premises) | Threat detection (workload)| Arc enabled K8s clusters | Preview | - | Defender extension | Defender for Containers |
+| Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | - | Azure Policy for Kubernetes | Defender for Containers |
+| Runtime protection| Threat detection (control plane)| Arc enabled K8s clusters | Preview | Preview | Defender agent | Defender for Containers |
+| Runtime protection for [supported OS](#registries-and-images-support--on-premises) | Threat detection (workload)| Arc enabled K8s clusters | Preview | - | Defender agent | Defender for Containers |
| Discovery and provisioning | Discovery of unprotected clusters | Arc enabled K8s clusters | Preview | - | Agentless | Free |
-| Discovery and provisioning | Collection of control plane threat data | Arc enabled K8s clusters | Preview | Preview | Defender extension | Defender for Containers |
-| Discovery and provisioning | Auto provisioning of Defender extension | Arc enabled K8s clusters | Preview | Preview | Agentless | Defender for Containers |
-| Discovery and provisioning | Auto provisioning of Azure policy extension | Arc enabled K8s clusters | Preview | - | Agentless | Defender for Containers |
+| Discovery and provisioning | Collection of control plane threat data | Arc enabled K8s clusters | Preview | Preview | Defender agent | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Defender agent | Arc enabled K8s clusters | Preview | Preview | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Azure Policy for Kubernetes | Arc enabled K8s clusters | Preview | - | Agentless | Defender for Containers |
### Registries and images support -on-premises
Outbound proxy without authentication and outbound proxy with basic authenticati
#### Supported host operating systems
-Defender for Containers relies on the **Defender extension** for several features. The Defender extension is supported on the following host operating systems:
+Defender for Containers relies on the **Defender agent** for several features. The Defender agent is supported on the following host operating systems:
- Amazon Linux 2
- CentOS 8
Ensure your Kubernetes node is running on one of the verified supported operatin
##### Private link
-Defender for Containers relies on the Defender profile/extension for several features. The Defender profile/extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workstation. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
+Defender for Containers relies on the Defender agent for several features. The Defender agent doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workstation. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
:::image type="content" source="media/supported-machines-endpoint-solutions-cloud-containers/network-access.png" alt-text="Screenshot that shows where to go to turn off data ingestion.":::
defender-for-cloud Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/troubleshooting-guide.md
Common connector issues:
- Connector resource should be present in Azure Resource Graph (ARG). Use the following ARG query to check: `resources | where ['type'] =~ "microsoft.security/securityconnectors"` (see the CLI sketch after this list).
- Make sure that sending Kubernetes audit logs is enabled on the AWS or GCP connector so that you can get [threat detection alerts for the control plane](alerts-reference.md#alerts-k8scluster).
- Make sure that Azure Arc and the Azure Policy Arc extension were installed successfully.
-- Make sure that the agent is installed to your Elastic Kubernetes Service (EKS) clusters. You can install the agent with the **Azure Policy add-on for Kubernetes should be installed and enabled on your clusters** recommendation, or **Azure policy extension for Kubernetes should be installed and enabled on your clusters** recommendations. Download the given script provided in the recommendation and run it on your cluster. The recommendation should disappear within an hour of when the script is run.
+- Make sure that agents are installed to your Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE) clusters. You can verify and install the agent with the following Defender for Cloud recommendations:
+ - **Azure Arc-enabled Kubernetes clusters should have the Azure Policy extension installed**
+ - **GKE clusters should have the Azure Policy extension installed**
+ - **EKS clusters should have Microsoft Defender's extension for Azure Arc installed**
+ - **GKE clusters should have Microsoft Defender's extension for Azure Arc installed**
- If you're experiencing issues with deleting the AWS or GCP connector, check if you have a lock (in this case there might be an error in the Azure Activity log, hinting at the presence of a lock).
- Check that workloads exist in the AWS account or GCP project.
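The ARG query mentioned in the first item can also be run from the command line. This is a hedged sketch that assumes the Azure CLI Resource Graph extension is available; the `project` clause is only illustrative.

```bash
# Requires the Resource Graph extension: az extension add --name resource-graph
# Lists security connector resources so you can confirm the AWS/GCP connector exists.
az graph query -q "resources | where ['type'] =~ 'microsoft.security/securityconnectors' | project name, location, subscriptionId"
```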
AWS connector issues:
- Make sure that EKS clusters are successfully connected to Arc-enabled Kubernetes.
- If you don't see AWS data in Defender for Cloud, make sure that the AWS resources required to send data to Defender for Cloud exist in the AWS account.
-GCP connector issues:
+GCP connector issues:
- Make sure that the GCP Cloud Shell script completed successfully.
- Make sure that GKE clusters are successfully connected to Arc-enabled Kubernetes.
defender-for-cloud Tutorial Enable Container Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-container-aws.md
To protect your EKS clusters, you need to enable the Containers plan on the rele
> [!NOTE] > To enable or disable individual Defender for Containers capabilities, either globally or for specific resources, see [How to enable Microsoft Defender for Containers components](defender-for-containers-enable.md).
-## Deploy the Defender extension in Azure
+## Deploy the Defender agent in Azure
-Azure Arc-enabled Kubernetes, the Defender extension, and the Azure Policy extension should be installed and running on your EKS clusters. There's a dedicated Defender for Cloud recommendation that can be used to install these extensions (and Azure Arc if necessary):
+Azure Arc-enabled Kubernetes, the Defender agent, and Azure Policy for Kubernetes should be installed and running on your EKS clusters. There's a dedicated Defender for Cloud recommendation that can be used to install these extensions (and Azure Arc if necessary):
- `EKS clusters should have Microsoft Defender's extension for Azure Arc installed`
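The recommendation provides a script that performs the onboarding for you. Purely as a hedged illustration of the first step that script automates, connecting an EKS cluster to Azure Arc from the CLI might look like the following, assuming placeholder names and a kubeconfig context that already points at the EKS cluster:

```bash
# Placeholders: <cluster>, <rg>, <region>. Requires the connectedk8s CLI extension
# and kubectl access to the EKS cluster.
az extension add --name connectedk8s
az connectedk8s connect --name <cluster> --resource-group <rg> --location <region>
```

Once the cluster is Arc-enabled, the Defender agent and Azure Policy for Kubernetes are installed on it as cluster extensions.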
defender-for-cloud Tutorial Enable Container Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-container-gcp.md
You can learn more about Defender for Container's pricing on the [pricing page](
## Deploy the solution to specific clusters
-If you disabled any of the default auto provisioning configurations to Off, during the [GCP connector onboarding process](quickstart-onboard-gcp.md#configure-the-defender-for-containers-plan), or afterwards. You need to manually install Azure Arc-enabled Kubernetes, the Defender extension, and the Azure Policy extensions to each of your GKE clusters to get the full security value out of Defender for Containers.
+If you set any of the default auto provisioning configurations to Off, either during the [GCP connector onboarding process](quickstart-onboard-gcp.md#configure-the-defender-for-containers-plan) or afterwards, you need to manually install Azure Arc-enabled Kubernetes, the Defender agent, and Azure Policy for Kubernetes on each of your GKE clusters to get the full security value out of Defender for Containers.
There are two dedicated Defender for Cloud recommendations you can use to install the extensions (and Arc if necessary):
defender-for-cloud Tutorial Enable Containers Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-containers-arc.md
You can learn more about Defender for Container's pricing on the [pricing page](
- Ensure the following [Azure Arc-enabled Kubernetes network requirements](../azure-arc/kubernetes/network-requirements.md) are validated and [connect the Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md). -- Validate the following endpoints are configured for outbound access so that the Defender extension can connect to Microsoft Defender for Cloud to send security data and events:
+- Validate the following endpoints are configured for outbound access so that the Defender agent can connect to Microsoft Defender for Cloud to send security data and events:
| Domain | Port |
| -- | - |
If you would prefer to [assign a custom workspace](defender-for-containers-enabl
> [!NOTE] > To enable or disable individual Defender for Containers capabilities, either globally or for specific resources, see [How to enable Microsoft Defender for Containers components](defender-for-containers-enable.md).
-## Deploy the Defender extension on Arc-enabled Kubernetes clusters that were onboarded to an Azure subscription
+## Deploy the Defender agent on Arc-enabled Kubernetes clusters that were onboarded to an Azure subscription
-You can enable the Defender for Containers plan and deploy all of the relevant components in different ways. We walk you through the steps to accomplish this using the Azure portal. Learn how to [deploy the Defender extension](/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc&tabs=aks-deploy-portal%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Caks-removeprofile-api#deploy-the-defender-extension) with REST API, Azure CLI or with a Resource Manager template.
+You can enable the Defender for Containers plan and deploy all of the relevant components in different ways. We walk you through the steps to accomplish this using the Azure portal. Learn how to [deploy the Defender agent](/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc&tabs=aks-deploy-portal%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Caks-removeprofile-api#deploy-the-defender-agent) with the REST API, the Azure CLI, or a Resource Manager template.
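As a rough illustration of the Azure CLI option mentioned above (not the full procedure), installing the Defender agent as a cluster extension on an Arc-enabled cluster might look like this, with placeholder names:

```bash
# Placeholders: <cluster>, <rg>. Requires the k8s-extension CLI extension.
az extension add --name k8s-extension
az k8s-extension create \
  --name microsoft.azuredefender.kubernetes \
  --extension-type microsoft.azuredefender.kubernetes \
  --cluster-type connectedClusters \
  --cluster-name <cluster> \
  --resource-group <rg>
```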
-**To deploy the Defender profile in Azure:**
+**To deploy the Defender agent in Azure:**
1. Sign in to the [Azure portal](https://portal.azure.com).
You can enable the Defender for Containers plan and deploy all of the relevant c
1. Search for and select the `Azure Arc-enabled Kubernetes clusters should have Defender for Cloud's extension installed` recommendation.
- :::image type="content" source="media/tutorial-enable-containers-azure/extension-recommendation.png" alt-text="Microsoft Defender for Cloud's recommendation for deploying the Defender extension for Azure Arc-enabled Kubernetes clusters." lightbox="media/tutorial-enable-containers-azure/extension-recommendation.png":::
+ :::image type="content" source="media/tutorial-enable-containers-azure/extension-recommendation.png" alt-text="Microsoft Defender for Cloud's recommendation for deploying the Defender agent for Azure Arc-enabled Kubernetes clusters." lightbox="media/tutorial-enable-containers-azure/extension-recommendation.png":::
1. Select all of the relevant affected resources.
defender-for-cloud Tutorial Enable Containers Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-containers-azure.md
You can learn more about Defender for Container's pricing on the [pricing page](
- You must [enable Microsoft Defender for Cloud](get-started.md#enable-defender-for-cloud-on-your-azure-subscription) on your Azure subscription. -- Ensure the [required Fully Qualified Domain Names (FQDN)/application](../aks/limit-egress-traffic.md) endpoints are configured for outbound access so that the Defender profile can connect to Microsoft Defender for Cloud to send security data and events.
+- Ensure the [required Fully Qualified Domain Names (FQDN)/application](../aks/limit-egress-traffic.md) endpoints are configured for outbound access so the Defender agent can connect to Microsoft Defender for Cloud to send security data and events.
> [!Note] > By default, AKS clusters have unrestricted outbound (egress) internet access.
If you would prefer to [assign a custom workspace](/azure/defender-for-cloud/def
1. Select **Save**.
+## Deploy the Defender agent in Azure
+ > [!NOTE] > To enable or disable individual Defender for Containers capabilities, either globally or for specific resources, see [How to enable Microsoft Defender for Containers components](defender-for-containers-enable.md).
-## Deploy the Defender profile in Azure
-
-You can enable the Defender for Containers plan and deploy all of the relevant components in different ways. We walk you through the steps to accomplish this using the Azure portal. Learn how to [deploy the Defender profile](defender-for-containers-enable.md#deploy-the-defender-profile) with REST API, Azure CLI or with a Resource Manager template.
+You can enable the Defender for Containers plan and deploy all of the relevant components in different ways. We walk you through the steps to accomplish this using the Azure portal. Learn how to [deploy the Defender agent](defender-for-containers-enable.md#deploy-the-defender-agent) with the REST API, the Azure CLI, or a Resource Manager template.
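If you prefer the Azure CLI route referenced above, a minimal sketch with placeholder names (verify the details against the linked article) could be:

```bash
# Enable the Defender for Containers plan on the current subscription.
az security pricing create --name Containers --tier standard

# Deploy the Defender agent (Defender profile) on a specific AKS cluster.
az aks update --enable-defender --resource-group <rg> --name <cluster>
```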
-**To deploy the Defender profile in Azure:**
+**To deploy the Defender agent in Azure:**
1. Sign in to the [Azure portal](https://portal.azure.com).
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important upcoming changes description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 08/08/2023 Last updated : 08/14/2023 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you can find them in the [What's
| [Business model and pricing updates for Defender for Cloud plans](#business-model-and-pricing-updates-for-defender-for-cloud-plans) | August 2023 |
| [Update naming format of Azure Center for Internet Security standards in regulatory compliance](#update-naming-format-of-azure-center-for-internet-security-standards-in-regulatory-compliance) | August 2023 |
| [Preview alerts for DNS servers to be deprecated](#preview-alerts-for-dns-servers-to-be-deprecated) | August 2023 |
+| [Deprecate and replace recommendations App Service Client Certificates](#deprecate-and-replace-recommendations-app-service-client-certificates) | August 2023 |
+| [Classic connectors for multicloud will be retired](#classic-connectors-for-multicloud-will-be-retired) | September 2023 |
| [Change to the Log Analytics daily cap](#change-to-the-log-analytics-daily-cap) | September 2023 |
| [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | August 2024 |
+### Classic connectors for multicloud will be retired
+
+**Estimated date for change: September 15, 2023**
+
+The classic multicloud connectors will be retired on September 15, 2023, and no data will be streamed to them after that date. These classic connectors were used to connect AWS Security Hub and GCP Security Command Center recommendations to Defender for Cloud and to onboard AWS EC2 instances to Defender for Servers.
+
+The full value of these connectors has been replaced with the native multicloud security connectors experience, which has been Generally Available for AWS and GCP since March 2022 at no additional cost.
+
+The new native connectors are included in your plan and offer an automated onboarding experience with options to onboard single accounts, multiple accounts (with Terraform), and organizational onboarding with auto provisioning for the following Defender plans: free foundational CSPM capabilities, Defender Cloud Security Posture Management (CSPM), Defender for Servers, Defender for SQL, and Defender for Containers.
+
+If you're currently using the classic multicloud connectors, we strongly recommend that you begin your migration to the native security connectors before September 15, 2023.
+
+How to migrate to the native security connectors:
+
+- [Connect your AWS account to Defender for Cloud](quickstart-onboard-aws.md)
+- [Connect your GCP project to Defender for Cloud](quickstart-onboard-gcp.md)
+ ### Defender for Cloud plan and strategy for the Log Analytics agent deprecation **Estimated date for change: August 2024**
The following table lists the alerts to be deprecated:
| Anonymity network activity (Preview) | DNS_DarkWeb |
| Anonymity network activity using web proxy (Preview) | DNS_DarkWebProxy |
+### Deprecate and replace recommendations App Service Client Certificates
+
+**Estimated date for change: August 2023**
+
+App Service policies are set to be deprecated and replaced so that they only monitor apps using HTTP 1.1, since HTTP 2.0 on App Service doesn't support client certificates. The existing policies that enforce client certificates require an additional check to determine whether HTTP 2.0 is being used by the app. Adding this additional check requires a change to the policy "effect" from Audit to AuditIfNotExists. Policy "effect" changes require deprecation of the old version of the policy and the creation of a replacement.
+
+Policies in this scope:
+
+- App Service apps should have Client Certificates (Incoming client certificates) enabled
+- App Service app slots should have Client Certificates (Incoming client certificates) enabled
+- Function apps should have Client Certificates (Incoming client certificates) enabled
+- Function app slots should have Client Certificates (Incoming client certificates) enabled
+
+Customers who are currently using these policies will need to ensure they have the new policies with similar names enabled and assigned to their intended scope.
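One way to check your current state is with the Azure CLI. This is only an illustrative sketch, since the display names of the replacement policies may not match this filter exactly:

```bash
# List built-in policy definitions whose display name mentions client certificates.
az policy definition list \
  --query "[?contains(displayName, 'Client Certificates')].{name:name, displayName:displayName}" \
  --output table

# Review what is assigned at a given scope (placeholder subscription ID).
az policy assignment list --scope "/subscriptions/<subscription-id>" --output table
```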
### Change to the Log Analytics daily cap

Azure Monitor offers the capability to [set a daily cap](../azure-monitor/logs/daily-cap.md) on the data that is ingested on your Log Analytics workspaces. However, Defender for Cloud security events are currently not supported in those exclusions.
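For reference, the daily cap itself is configured on the workspace. A minimal CLI sketch with placeholder values (the cap is expressed in GB per day) might look like:

```bash
# Placeholder names; sets a 10 GB/day ingestion cap on the workspace.
az monitor log-analytics workspace update \
  --resource-group <resource-group> \
  --workspace-name <workspace-name> \
  --quota 10
```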
defender-for-cloud View And Remediate Vulnerabilities For Images Running On Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/view-and-remediate-vulnerabilities-for-images-running-on-aks.md
Last updated 07/11/2023
Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in images that are currently being used within their environment using the [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462ce) recommendation.
-To provide findings for the recommendation, Defender CSPM uses [agentless container registry vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) or the [Defender Container agent](tutorial-enable-containers-azure.md#deploy-the-defender-profile-in-azure) to create a full inventory of your Kubernetes clusters and their workloads and correlates that inventory with the vulnerability reports created for your registry images. The recommendation shows your running containers with the vulnerabilities associated with the images that are used by each container and remediation steps.
+To provide findings for the recommendation, Defender CSPM uses [agentless container registry vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) or the [Defender agent](tutorial-enable-containers-azure.md#deploy-the-defender-agent-in-azure) to create a full inventory of your Kubernetes clusters and their workloads and correlates that inventory with the vulnerability reports created for your registry images. The recommendation shows your running containers with the vulnerabilities associated with the images that are used by each container and remediation steps.
Defender for Cloud presents the findings and related information as recommendations, including related information such as remediation steps and relevant CVEs. You can view the identified vulnerabilities for one or more subscriptions, or for a specific resource.
defender-for-iot Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alerts.md
For more information, see [Securing IoT devices in the enterprise](concept-enter
## Managing OT alerts in a hybrid environment
-Users working in hybrid environments may be managing OT alerts in [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, the OT sensor, and an on-premises management console.
+Users working in hybrid environments may be managing OT alerts in [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, the OT sensor, and an on-premises management console.
Alert statuses are fully synchronized between the Azure portal and the OT sensor, and between the sensor and the on-premises management console. This means that regardless of where you manage the alert in Defender for IoT, the alert is updated in other locations as well.
defender-for-iot Cli Ot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/cli-ot-sensor.md
Title: CLI command reference from OT network sensors- Microsoft Defender for IoT description: Learn about the CLI commands available from Microsoft Defender for IoT OT network sensors. Previously updated : 07/04/2023 Last updated : 08/09/2023
Health checks are also available from the OT sensor console. For more informatio
|User |Command |Full command syntax |
|--|--|--|
|**support** | `system sanity` | No attributes |
-|**cyberx** | `cyberx-xsense-sanity` | No attributes |
+|**cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `cyberx-xsense-sanity` | No attributes |
The following example shows the command syntax and response for the *support* user:
Use the following commands to restart the OT sensor appliance.
|User |Command |Full command syntax |
|--|--|--|
|**support** | `system reboot` | No attributes |
-|**cyberx** | `sudo reboot` | No attributes |
-|**cyberx_host** | `sudo reboot` | No attributes |
+|**cyberx** , or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `sudo reboot` | No attributes |
+|**cyberx_host** , or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `sudo reboot` | No attributes |
For example, for the *support* user:
Use the following commands to shut down the OT sensor appliance.
|User |Command |Full command syntax |
|--|--|--|
|**support** | `system shutdown` | No attributes |
-|**cyberx** | `sudo shutdown -r now` | No attributes |
-|**cyberx_host** | `sudo shutdown -r now` | No attributes |
+|**cyberx** , or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `sudo shutdown -r now` | No attributes |
+|**cyberx_host**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `sudo shutdown -r now` | No attributes |
For example, for the *support* user:
root@xsense: system shutdown
``` ### Software versions+ #### Show installed software version Use the following commands to list the Defender for IoT software version installed on your OT sensor.
Use the following commands to list the Defender for IoT software version install
|User |Command |Full command syntax |
|--|--|--|
|**support** | `system version` | No attributes |
-|**cyberx** | `cyberx-xsense-version` | No attributes |
+|**cyberx** , or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `cyberx-xsense-version` | No attributes |
For example, for the *support* user:
Version: 22.2.5.9-r-2121448
For more information, see [Update your sensors](update-ot-software.md#update-ot-sensors). ### Date, time, and NTP+ #### Show current system date/time Use the following commands to show the current system date and time on your OT network sensor, in GMT format.
Use the following commands to show the current system date and time on your OT n
|User |Command |Full command syntax |
|--|--|--|
|**support** | `date` | No attributes |
-|**cyberx** | `date` | No attributes |
-|**cyberx_host** | `date` | No attributes |
+|**cyberx** , or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `date` | No attributes |
+|**cyberx_host** , or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `date` | No attributes |
For example, for the *support* user:
To use these commands, make sure that:
|User |Command |Full command syntax |
|--|--|--|
|**support** | `ntp enable <IP address>` | No attributes |
-|**cyberx** | `cyberx-xsense-ntp-enable <IP address>` | No attributes |
+|**cyberx** , or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `cyberx-xsense-ntp-enable <IP address>` | No attributes |
In these commands, `<IP address>` is the IP address of a valid IPv4 NTP server using port 123.
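For instance, a quick illustration that uses a made-up NTP server address of `10.0.0.5`:

```bash
# As the support user:
ntp enable 10.0.0.5

# As the cyberx user (or support with root access):
cyberx-xsense-ntp-enable 10.0.0.5
```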
Use the following commands to turn off the synchronization for the appliance tim
|User |Command |Full command syntax |
|--|--|--|
|**support** | `ntp disable <IP address>` | No attributes |
-|**cyberx** | `cyberx-xsense-ntp-disable <IP address>` | No attributes |
+|**cyberx** , or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `cyberx-xsense-ntp-disable <IP address>` | No attributes |
In these commands, `<IP address>` is the IP address of a valid IPv4 NTP server using port 123.
Use the following commands to list the backup files currently stored on your OT
|User |Command |Full command syntax |
|--|--|--|
|**support** | `system backup-list` | No attributes |
-|**cyberx** | ` cyberx-xsense-system-backup-list` | No attributes |
+|**cyberx** , or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | ` cyberx-xsense-system-backup-list` | No attributes |
For example, for the *support* user:
Use the following commands to start an immediate, unscheduled backup of the data
|User |Command |Full command syntax |
|--|--|--|
|**support** | `system backup` | No attributes |
-|**cyberx** | ` cyberx-xsense-system-backup` | No attributes |
+|**cyberx** , or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | ` cyberx-xsense-system-backup` | No attributes |
For example, for the *support* user:
Use the following commands to restore data on your OT network sensor using the m
|User |Command |Full command syntax |
|--|--|--|
|**support** | `system restore` | No attributes |
-|**cyberx** | ` cyberx-xsense-system-restore` | No attributes |
+|**cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | ` cyberx-xsense-system-restore` | No attributes |
For example, for the *support* user:
The following command lists the current backup disk space allocation, including
|User |Command |Full command syntax |
|--|--|--|
-|**cyberx** | ` cyberx-backup-memory-check` | No attributes |
+|**cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | ` cyberx-backup-memory-check` | No attributes |
For example, for the *cyberx* user:
For more information, see [Prepare CA-signed certificates](best-practices/plan-p
|User |Command |Full command syntax |
|--|--|--|
-| **cyberx** | `cyberx-xsense-certificate-import` | cyberx-xsense-certificate-import [-h] [--crt &lt;PATH&gt;] [--key &lt;FILE NAME&gt;] [--chain &lt;PATH&gt;] [--pass &lt;PASSPHRASE&gt;] [--passphrase-set &lt;VALUE&gt;]`
+| **cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `cyberx-xsense-certificate-import` | cyberx-xsense-certificate-import [-h] [--crt &lt;PATH&gt;] [--key &lt;FILE NAME&gt;] [--chain &lt;PATH&gt;] [--pass &lt;PASSPHRASE&gt;] [--passphrase-set &lt;VALUE&gt;]`
In this command:
Use the following command to restore the default, self-signed certificates on yo
|User |Command |Full command syntax |
|--|--|--|
-|**cyberx** | `cyberx-xsense-create-self-signed-certificate` | No attributes |
+|**cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `cyberx-xsense-create-self-signed-certificate` | No attributes |
For example, for the *cyberx* user:
When you change the password for the *cyberx*, *support*, or *cyberx_host* user,
|User |Command |Full command syntax |
|--|--|--|
-|**cyberx** | `cyberx-users-password-reset` | `cyberx-users-password-reset -u <user> -p <password>` |
-|**cyberx_host** | `passwd` | No attributes |
+|**cyberx** , or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `cyberx-users-password-reset` | `cyberx-users-password-reset -u <user> -p <password>` |
+|**cyberx_host**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `passwd` | No attributes |
The following example shows the *cyberx* user resetting the *support* user's password to `jI8iD9kE6hB8qN0h`:
Use the following command to rerun the OT monitoring software configuration wiza
|User |Command |Full command syntax |
|--|--|--|
-|**cyberx_host** | `sudo dpkg-reconfigure iot-sensor` | No attributes |
+|**cyberx_host**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `sudo dpkg-reconfigure iot-sensor` | No attributes |
For example, with the **cyberx_host** user:
Use the following commands to send a ping message from the OT sensor.
|User |Command |Full command syntax |
|--|--|--|
|**support** | `ping <IP address>` | No attributes|
-|**cyberx** | `ping <IP address>` | No attributes |
+|**cyberx** , or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `ping <IP address>` | No attributes |
In these commands, `<IP address>` is the IP address of a valid IPv4 network host accessible from the management port on your OT sensor.
Use the following command to display network traffic and bandwidth using a six-s
|User |Command |Full command syntax |
|--|--|--|
-|**cyberx** | `cyberx-nload` | No attributes |
+|**cyberx** , or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `cyberx-nload` | No attributes |
```bash root@xsense:/# cyberx-nload
Use the following command to check the internet connectivity on your appliance.
|User |Command |Full command syntax |
|--|--|--|
-|**cyberx** | `cyberx-xsense-internet-connectivity` | No attributes |
+|**cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `cyberx-xsense-internet-connectivity` | No attributes |
```bash root@xsense:/# cyberx-xsense-internet-connectivity
Setting outbound bandwidth limits can be helpful in maintaining networking quali
|User |Command |Full command syntax |
|--|--|--|
-|**cyberx** | `cyberx-xsense-limit-interface` | `cyberx-xsense-limit-interface [-h] --interface <INTERFACE VALUE> [--limit <LIMIT VALUE] [--clear]` |
+|**cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `cyberx-xsense-limit-interface` | `cyberx-xsense-limit-interface [-h] --interface <INTERFACE VALUE> [--limit <LIMIT VALUE] [--clear]` |
In this command:
setting the bandwidth limit of interface "eth0" to 1000mbps
### Physical interfaces+ #### Locate a physical port by blinking interface lights Use the following command to locate a specific physical interface by causing the interface lights to blink.
Use the following commands to list the connected physical interfaces on your OT
|User |Command |Full command syntax |
|--|--|--|
|**support** | `network list` | No attributes |
-|**cyberx** | `ifconfig` | No attributes |
+|**cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `ifconfig` | No attributes |
For example, for the *support* user:
Use the following commands to create a new capture filter:
|User |Command |Full command syntax |
|--|--|--|
| **support** | `network capture-filter` | No attributes.|
-| **cyberx** | `cyberx-xsense-capture-filter` | `cyberx-xsense-capture-filter [-h] [-i INCLUDE] [-x EXCLUDE] [-etp EXCLUDE_TCP_PORT] [-eup EXCLUDE_UDP_PORT] [-itp INCLUDE_TCP_PORT] [-iup INCLUDE_UDP_PORT] [-vlan INCLUDE_VLAN_IDS] -m MODE [-S]` |
+| **cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `cyberx-xsense-capture-filter` | `cyberx-xsense-capture-filter [-h] [-i INCLUDE] [-x EXCLUDE] [-etp EXCLUDE_TCP_PORT] [-eup EXCLUDE_UDP_PORT] [-itp INCLUDE_TCP_PORT] [-iup INCLUDE_UDP_PORT] [-vlan INCLUDE_VLAN_IDS] -m MODE [-S]` |
Supported attributes for the *cyberx* user are defined as follows:
To create a capture filter for *each* component, make sure to repeat the entire
|User |Command |Full command syntax |
|--|--|--|
| **support** | `network capture-filter` | No attributes.|
-| **cyberx** | `cyberx-xsense-capture-filter` | `cyberx-xsense-capture-filter [-h] [-i INCLUDE] [-x EXCLUDE] [-etp EXCLUDE_TCP_PORT] [-eup EXCLUDE_UDP_PORT] [-itp INCLUDE_TCP_PORT] [-iup INCLUDE_UDP_PORT] [-vlan INCLUDE_VLAN_IDS] -p PROGRAM [-o BASE_HORIZON] [-s BASE_TRAFFIC_MONITOR] [-c BASE_COLLECTOR] -m MODE [-S]` |
+| **cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `cyberx-xsense-capture-filter` | `cyberx-xsense-capture-filter [-h] [-i INCLUDE] [-x EXCLUDE] [-etp EXCLUDE_TCP_PORT] [-eup EXCLUDE_UDP_PORT] [-itp INCLUDE_TCP_PORT] [-iup INCLUDE_UDP_PORT] [-vlan INCLUDE_VLAN_IDS] -p PROGRAM [-o BASE_HORIZON] [-s BASE_TRAFFIC_MONITOR] [-c BASE_COLLECTOR] -m MODE [-S]` |
The following extra attributes are used for the *cyberx* user to create capture filters for each component separately:
Use the following commands to show details about the current capture filters con
|User |Command |Full command syntax |
|--|--|--|
| **support** | Use the following commands to view the capture filters for each component: <br><br>- **horizon**: `edit-config horizon_parser/horizon.properties` <br>- **traffic-monitor**: `edit-config traffic_monitor/traffic-monitor` <br>- **collector**: `edit-config dumpark.properties` | No attributes |
-| **cyberx** | Use the following commands to view the capture filters for each component: <br><br>-**horizon**: `nano /var/cyberx/properties/horizon_parser/horizon.properties` <br>- **traffic-monitor**: `nano /var/cyberx/properties/traffic_monitor/traffic-monitor.properties` <br>- **collector**: `nano /var/cyberx/properties/dumpark.properties` | No attributes |
+| **cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | Use the following commands to view the capture filters for each component: <br><br>-**horizon**: `nano /var/cyberx/properties/horizon_parser/horizon.properties` <br>- **traffic-monitor**: `nano /var/cyberx/properties/traffic_monitor/traffic-monitor.properties` <br>- **collector**: `nano /var/cyberx/properties/dumpark.properties` | No attributes |
These commands open the following files, which list the capture filters configured for each component:
Use the following command to reset your sensor to the default capture configurat
|User |Command |Full command syntax |
|--|--|--|
-| **cyberx** | `cyberx-xsense-capture-filter -p all -m all-connected` | No attributes |
+| **cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `cyberx-xsense-capture-filter -p all -m all-connected` | No attributes |
If you want to modify the existing capture filters, run the [earlier](#create-a-basic-filter-for-all-components) command again, with new attribute values.
root@xsense:/#
``` ## Alerts+ ### Trigger a test alert Use the following command to test connectivity and alert forwarding from the sensor to management consoles, including the Azure portal, a Defender for IoT on-premises management console, or a third-party SIEM. |User |Command |Full command syntax | ||||
-| **cyberx** | `cyberx-xsense-trigger-test-alert` | No attributes |
+| **cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `cyberx-xsense-trigger-test-alert` | No attributes |
The following example shows the command syntax and response for the *cyberx* user:
Use the following command to display a list of currently configured exclusion ru
|User |Command |Full command syntax |
|--|--|--|
|**support** | `alerts exclusion-rule-list` | `alerts exclusion-rule-list [-h] -n NAME [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
-|**cyberx** | `alerts cyberx-xsense-exclusion-rule-list` | `alerts cyberx-xsense-exclusion-rule-list [-h] -n NAME [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
+|**cyberx** , or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `alerts cyberx-xsense-exclusion-rule-list` | `alerts cyberx-xsense-exclusion-rule-list [-h] -n NAME [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
The following example shows the command syntax and response for the *support* user:
Use the following commands to create a local alert exclusion rule on your sensor
|User |Command |Full command syntax |
|--|--|--|
| **support** | `cyberx-xsense-exclusion-rule-create` | `cyberx-xsense-exclusion-rule-create [-h] [-n NAME] [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]`|
-| **cyberx** |`cyberx-xsense-exclusion-rule-create` |`cyberx-xsense-exclusion-rule-create [-h] [-n NAME] [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
+| **cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) |`cyberx-xsense-exclusion-rule-create` |`cyberx-xsense-exclusion-rule-create [-h] [-n NAME] [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
Supported attributes are defined as follows:
Use the following commands to modify an existing local alert exclusion rule on y
|User |Command |Full command syntax |
|--|--|--|
| **support** | `exclusion-rule-append` | `exclusion-rule-append [-h] [-n NAME] [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]`|
-| **cyberx** |`exclusion-rule-append` |`exclusion-rule-append [-h] [-n NAME] [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
+| **cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) |`exclusion-rule-append` |`exclusion-rule-append [-h] [-n NAME] [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
Supported attributes are defined as follows:
Use the following commands to delete an existing local alert exclusion rule on y
|User |Command |Full command syntax |
|--|--|--|
| **support** | `exclusion-rule-remove` | `exclusion-rule-append [-h] [-n NAME] [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]`|
-| **cyberx** |`exclusion-rule-remove` |`exclusion-rule-append [-h] [-n NAME] [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
+| **cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) |`exclusion-rule-remove` |`exclusion-rule-append [-h] [-n NAME] [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
Supported attributes are defined as follows:
defender-for-iot Concept Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-enterprise.md
While the number of IoT devices continues to grow, they often lack the security
## IoT security across Microsoft 365 Defender and Azure
-Defender for IoT provides IoT security functionality across both the Microsoft 365 Defender and [Azure portals](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started).
+Defender for IoT provides IoT security functionality across both the Microsoft 365 Defender and [Azure portals](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started).
[Add an Enterprise IoT plan](eiot-defender-for-endpoint.md) in the **Settings** \> **Device discovery** \> **Enterprise IoT** page in Microsoft 365 Defender to view IoT-specific alerts, recommendations, and vulnerability data in Microsoft 365 Defender. The extra security value is provided for IoT devices detected by Defender for Endpoint.
defender-for-iot Configure Sensor Settings Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-sensor-settings-portal.md
Define a new setting whenever you want to define a specific configuration for on
**To define a new setting**:
-1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor settings (Preview)**.
+1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor settings (Preview)**.
1. On the **Sensor settings (Preview)** page, select **+ Add**, and then use the wizard to define the following values for your setting. Select **Next** when you're done with each tab in the wizard to move to the next step.
Your new setting is now listed on the **Sensor settings (Preview)** page under i
**To view the current settings already defined for your subscription**:
-1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor settings (Preview)**
+1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor settings (Preview)**
The **Sensor settings (Preview)** page shows any settings already defined for your subscriptions, listed by setting type. Expand or collapse each type to view detailed configurations. For example:
defender-for-iot Faqs Eiot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-eiot.md
Enterprise IoT is designed to help customers secure un-managed devices throughou
For more information, see [Onboard with Microsoft Defender for IoT](eiot-defender-for-endpoint.md). -- **In the Azure portal**: Defender for IoT customers can view their discovered IoT devices in the **Device inventory** page in [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal.
+- **In the Azure portal**: Defender for IoT customers can view their discovered IoT devices in the **Device inventory** page in [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal.
For more information, see [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md).
To make any changes to an existing plan, you'll need to cancel your existing pla
To remove only Enterprise IoT from your plan, cancel your plan from Microsoft Defender for Endpoint. For more information, see [Cancel your Defender for IoT plan](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration#cancel-your-defender-for-iot-plan).
-To cancel the plan and remove all Defender for IoT services from the associated subscription, cancel the plan in [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal. For more information, see [Cancel your Enterprise IoT plan](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan).
+To cancel the plan and remove all Defender for IoT services from the associated subscription, cancel the plan in [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal. For more information, see [Cancel your Enterprise IoT plan](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan).
## What happens when the 30-day trial ends?
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
This procedure describes how to add an OT plan for Defender for IoT in the Azure
**To add an OT plan in Defender for IoT**:
-1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started), select **Plans and pricing** > **Add plan**.
+1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started), select **Plans and pricing** > **Add plan**.
1. In the **Plan settings** pane, select the Azure subscription where you want to add a plan.
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
For more information, see [Azure user roles and permissions for Defender for IoT
## View alerts on the Azure portal
-1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select the **Alerts** page on the left. By default, the following details are shown in the grid:
+1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select the **Alerts** page on the left. By default, the following details are shown in the grid:
| Column | Description |
|--|--|
Supported grouping options include *Engine*, *Name*, *Sensor*, *Severity*, and *
## Manage alert severity and status
-We recommend that you update alert severity In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal as soon as you've triaged an alert so that you can prioritize the riskiest alerts as soon as possible. Make sure to update your alert status once you've taken remediation steps so that the progress is recorded.
+We recommend that you update alert severity In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal as soon as you've triaged an alert so that you can prioritize the riskiest alerts as soon as possible. Make sure to update your alert status once you've taken remediation steps so that the progress is recorded.
You can update both severity and status for a single alert or for a selection of alerts in bulk.
Downloading the PCAP file can take several minutes, depending on the quality of
You may want to export a selection of alerts to a CSV file for offline sharing and reporting.
-1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select the **Alerts** page on the left.
+1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select the **Alerts** page on the left.
1. Use the search box and filter options to show only the alerts you want to export.
defender-for-iot How To Manage Device Inventory For Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-device-inventory-for-organizations.md
# Manage your device inventory from the Azure portal
-Use the **Device inventory** page in [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal to manage all network devices detected by cloud-connected sensors, including OT, IoT, and IT. Identify new devices detected, devices that might need troubleshooting, and more.
+Use the **Device inventory** page in [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal to manage all network devices detected by cloud-connected sensors, including OT, IoT, and IT. Identify new devices detected, devices that might need troubleshooting, and more.
For more information, see [Devices monitored by Defender for IoT](architecture.md#devices-monitored-by-defender-for-iot).
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
Take action by selecting the **Learn more** option under :::image type="icon" so
You may need to download software for your OT sensor if you're [installing Defender for IoT software](ot-deploy/install-software-ot-sensor.md) on your own appliances, or [updating software versions](update-ot-software.md).
-In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal, use one of the following options:
+In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal, use one of the following options:
- For a new installation, select **Getting started** > **Sensor**. Select a version in the **Purchase an appliance and install software** area, and then select **Download**.
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
This procedure describes how to add an OT plan for Defender for IoT in the Azure
**To add an OT plan in Defender for IoT**:
-1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started), select **Plans and pricing** > **Add plan**.
+1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started), select **Plans and pricing** > **Add plan**.
1. In the **Plan settings** pane, select the Azure subscription where you want to add a plan. You can only add a single subscription, and you'll need a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the selected subscription.
defender-for-iot How To Manage The On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-the-on-premises-management-console.md
Before performing the procedures in this article, make sure that you have:
You may need to download software for your on-premises management console if you're [installing Defender for IoT software](ot-deploy/install-software-on-premises-management-console.md) on your own appliances, or [updating software versions](update-ot-software.md).
-In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal, use one of the following options:
+In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal, use one of the following options:
- For a new installation or standalone update, select **Getting started** > **On-premises management console**.
defender-for-iot How To Set Up Snmp Mib Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-snmp-mib-monitoring.md
To download a pre-defined SNMP MIB file from the Azure portal, you'll need acces
Defender for IoT in the Azure portal provides a downloadable MIB file for you to load into your SNMP monitoring system to pre-define Defender for IoT sensors.
-**To download the SNMP MIB file** from [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **More actions** > **Download SNMP MIB file**.
+**To download the SNMP MIB file** from [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **More actions** > **Download SNMP MIB file**.
[!INCLUDE [root-of-trust](includes/root-of-trust.md)]
defender-for-iot How To Work With Threat Intelligence Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-threat-intelligence-packages.md
Ensure automatic package update by onboarding your cloud-connected sensor with t
**To change the update mode after you've onboarded your OT sensor**:
-1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors**, and then locate the sensor you want to change.
+1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors**, and then locate the sensor you want to change.
1. Select the options (**...**) menu for the selected OT sensor > **Edit**.
1. Toggle on or toggle off the **Automatic Threat Intelligence Updates** option as needed.
Your cloud-connected sensors can be automatically updated with threat intelligen
**To manually push updates to a single OT sensor**:
-1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors**, and locate the OT sensor you want to update.
+1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors**, and locate the OT sensor you want to update.
1. Select the options (**...**) menu for the selected sensor and then select **Push Threat Intelligence update**. The **Threat Intelligence update status** field displays the update progress.

**To manually push updates to multiple OT sensors**:
-1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors**. Locate and select the OT sensors you want to update.
+1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors**. Locate and select the OT sensors you want to update.
1. Select **Threat intelligence updates (Preview)** > **Remote update**. The **Threat Intelligence update status** field displays the update progress for each selected sensor.
If you're also working with an on-premises management console, we recommend that
**To download threat intelligence packages**:
-1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Threat intelligence update (Preview)** > **Local update**.
+1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Threat intelligence update (Preview)** > **Local update**.
1. In the **Sensor TI update** pane, select **Download** to download the latest threat intelligence file.
On each OT sensor, the threat intelligence update status and version information
For cloud-connected OT sensors, threat intelligence data is also shown on the **Sites and sensors** page. To view threat intelligence statuses from the Azure portal:
-1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Site and sensors**.
+1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors**.
1. Locate the OT sensors where you want to check the threat intelligence statuses.
defender-for-iot Manage Subscriptions Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-subscriptions-enterprise.md
This procedure describes how to add an Enterprise IoT plan to your Azure subscri
:::image type="content" source="media/enterprise-iot/defender-for-endpoint-onboard.png" alt-text="Screenshot of the Enterprise IoT tab in Defender for Endpoint." lightbox="media/enterprise-iot/defender-for-endpoint-onboard.png":::
-After you've onboarded your plan, you'll see it listed in [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal. Go to the Defender for IoT **Plans and pricing** page and find your subscription with the new **Enterprise IoT** plan listed. For example:
+After you've onboarded your plan, you'll see it listed in [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal. Go to the Defender for IoT **Plans and pricing** page and find your subscription with the new **Enterprise IoT** plan listed. For example:
:::image type="content" source="media/enterprise-iot/eiot-plan-in-azure.png" alt-text="Screenshot of an Enterprise IoT plan showing in the Defender for IoT Plans and pricing page.":::
defender-for-iot References Work With Defender For Iot Cli Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-cli-commands.md
Title: CLI command users and access for OT monitoring - Microsoft Defender for IoT description: Learn about the users supported for the Microsoft Defender for IoT CLI commands and how to access the CLI. Previously updated : 01/01/2023 Last updated : 08/09/2023
To access the Defender for IoT CLI, sign in to your OT or Enterprise IoT sensor
Each CLI command on an OT network sensor or on-premises management console is supported by a different set of privileged users, as noted in the relevant CLI descriptions. Make sure you sign in as the user required for the command you want to run. For more information, see [Privileged user access for OT monitoring](#privileged-user-access-for-ot-monitoring).
+## Access the system root as a *support* user
++
+When you're signed in as the *support* user, run the following command to access the host machine as the root user. Accessing the host machine as the root user enables you to run CLI commands that aren't available to the *support* user.
+
+Run:
+
+```bash
+system shell
+```
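For orientation, here's a minimal session sketch; the sensor address is a placeholder and the flow assumes SSH access to the sensor CLI:

```bash
# Sign in to the OT sensor CLI over SSH as the support user (placeholder address).
ssh support@<sensor-ip>

# From the support CLI, switch to the host machine's root shell.
system shell

# ...run the privileged commands you need, then leave the root shell...
exit
```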
+ ## Sign out of the CLI Make sure to properly sign out of the CLI when you're done using it. You're automatically signed out after an inactive period of 300 seconds.
defender-for-iot Update Legacy Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-legacy-ot-software.md
For more information, see [Versioning and support for on-premises software versi
**To update a legacy OT sensor version**
-1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** and then select the legacy OT sensor you want to update.
+1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** and then select the legacy OT sensor you want to update.
1. Select the **Prepare to update to 22.X** option from the toolbar or from the options (**...**) from the sensor row.
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
This procedure describes how to send a software version update to one or more OT
### Send the software update to your OT sensor
-1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal, select **Sites and sensors** and then locate the OT sensors with legacy, but [supported versions](#prerequisites) installed.
+1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal, select **Sites and sensors** and then locate the OT sensors with legacy, but [supported versions](#prerequisites) installed.
If you know your site and sensor name, you can browse or search for it directly. Alternately, filter the sensors listed to show only cloud-connected, OT sensors that have *Remote updates supported*, and have legacy software version installed. For example:
This procedure describes how to manually download the new sensor software versio
### Download the update package from the Azure portal
-1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**.
+1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**.
1. In the **Local update** pane, select the software version that's currently installed on your sensors.
The software version on your on-premises management console must be equal to tha
> ### Download the update packages from the Azure portal
-1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**.
+1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**.
1. In the **Local update** pane, select the software version that's currently installed on your sensors.
This procedure describes how to update OT sensor software via the CLI, directly
### Download the update package from the Azure portal
-1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**.
+1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**.
1. In the **Local update** pane, select the software version that's currently installed on your sensors.
Updating an on-premises management console takes about 30 minutes.
This procedure describes how to download an update package for a standalone update. If you're updating your on-premises management console together with connected sensors, we recommend using the **[Update sensors (Preview)](#update-ot-sensors)** menu on the **Sites and sensors** page instead.
-1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Getting started** > **On-premises management console**.
+1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Getting started** > **On-premises management console**.
1. In the **On-premises management console** area, select the download scenario that best describes your update, and then select **Download**.
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Title: What's new in Microsoft Defender for IoT description: This article describes features available in Microsoft Defender for IoT, across both OT and Enterprise IoT networks, and both on-premises and in the Azure portal. Previously updated : 07/31/2023 Last updated : 08/09/2023
In new sensor installations of version 23.1.2, only the privileged *support* use
In sensors that have been updated from previous versions to 23.1.2, the *cyberx* and *cyberx_host* users remain enabled as before.
+> [!TIP]
+> To run CLI commands that are available only to the *cyberx* or *cyberx_host* users when signed in as the *support* user, make sure to first access the host machine's system root. For more information, see [Access the system root as a *support* user](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user).
### Migrate to site-based licenses
firewall-manager Secured Virtual Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secured-virtual-hub.md
A *secured* virtual hub is an [Azure Virtual WAN Hub](../virtual-wan/virtual-wan
> [!IMPORTANT] > Currently, Azure Firewall in secured virtual hubs (vWAN) is not supported in Qatar and Poland Central.
-You can use a secured virtual hub to filter traffic between virtual networks (V2V), virtual networks and branch offices (B2V) and traffic to the Internet (B2I/V2I). A secured virtual hub provides automated routing. There's no need to configure your own UDRs (user defined routes) to route traffic through your firewall.
+You can use a secured virtual hub to filter traffic between virtual networks (V2V), between branch offices (B2B)<sup>*</sup>, between branch offices and virtual networks (B2V), and to the Internet (B2I/V2I). A secured virtual hub provides automated routing. There's no need to configure your own UDRs (user defined routes) to route traffic through your firewall.
-You can choose the required security providers to protect and govern your network traffic, including Azure Firewall, third-party security as a service (SECaaS) providers, or both. Currently, a secured hub doesn't support Branch to Branch (B2B) filtering and filtering across multiple hubs. To learn more, see [What is Azure Firewall Manager?](overview.md#known-issues).
+You can choose the required security providers to protect and govern your network traffic, including Azure Firewall, third-party security as a service (SECaaS) providers, or both. To learn more, see [What is Azure Firewall Manager?](overview.md#known-issues).
## Create a secured virtual hub Using Firewall Manager in the Azure portal, you can either create a new secured virtual hub, or convert an existing virtual hub that you previously created using Azure Virtual WAN.
-You may configure Virtual WAN to enable inter-region security use cases in the hub by configuring routing intent. For more information on routing intent, see [Routing Intent documentation](../virtual-wan/how-to-routing-policies.md).
+ <sup>*</sup>Virtual WAN routing intent must be configured to secure inter-hub and branch-to-branch communications, even within a single Virtual WAN hub. For more information on routing intent, see the [Routing Intent documentation](../virtual-wan/how-to-routing-policies.md).
## Next steps
firewall Explicit Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/explicit-proxy.md
With the Explicit proxy mode (supported for HTTP/S), you can define proxy settin
The SAS URL must have READ permissions so the firewall can download the file. If changes are made to the PAC file, a new SAS URL needs to be generated and configured on the firewall **Enable explicit proxy** page. :::image type="content" source="media/explicit-proxy/shared-access-signature.png" alt-text="Screenshot showing generate shared access signature.":::+ ## Next steps
-To learn how to deploy an Azure Firewall, see [Deploy and configure Azure Firewall using Azure PowerShell](deploy-ps.md).
+- To learn more about Explicit proxy, see [Demystifying Explicit proxy: Enhancing Security with Azure Firewall](https://techcommunity.microsoft.com/t5/azure-network-security-blog/demystifying-explicit-proxy-enhancing-security-with-azure/ba-p/3873445).
+- To learn how to deploy an Azure Firewall, see [Deploy and configure Azure Firewall using Azure PowerShell](deploy-ps.md).
firewall Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/features.md
This enables the following scenarios:
- **DNAT** - You can translate multiple standard port instances to your backend servers. For example, if you have two public IP addresses, you can translate TCP port 3389 (RDP) for both IP addresses. - **SNAT** - More ports are available for outbound SNAT connections, reducing the potential for SNAT port exhaustion. At this time, Azure Firewall randomly selects the source public IP address to use for a connection. If you have any downstream filtering on your network, you need to allow all public IP addresses associated with your firewall. Consider using a [public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md) to simplify this configuration.
+For more information about NAT behaviors, see [Azure Firewall NAT Behaviors](https://techcommunity.microsoft.com/t5/azure-network-security-blog/azure-firewall-nat-behaviors/ba-p/3825834).
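As an illustration of the DNAT scenario above, the following sketch creates a NAT rule that translates RDP on one of the firewall's public IP addresses to a backend server. It assumes the classic rules model and the `azure-firewall` Azure CLI extension; all names and addresses are placeholders:

```bash
# Translate TCP 3389 on the firewall's public IP to a backend VM at 10.0.1.4.
az network firewall nat-rule create \
  --resource-group MyResourceGroup \
  --firewall-name MyFirewall \
  --collection-name rdp-dnat \
  --name rdp-to-vm1 \
  --priority 200 \
  --action Dnat \
  --protocols TCP \
  --source-addresses '*' \
  --destination-addresses <firewall-public-ip> \
  --destination-ports 3389 \
  --translated-address 10.0.1.4 \
  --translated-port 3389
```

With a second public IP address on the firewall, a parallel rule can publish RDP for another backend server on the same port.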
++ ## Azure Monitor logging All events are integrated with Azure Monitor, allowing you to archive logs to a storage account, stream events to your event hub, or send them to Azure Monitor logs. For Azure Monitor log samples, see [Azure Monitor logs for Azure Firewall](./firewall-workbook.md).
firewall Firewall Sftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-sftp.md
Remove-AzResourceGroup -Name $rg -Force
## Next steps
+- [Deploy Azure Firewall to inspect traffic to a private endpoint](https://techcommunity.microsoft.com/t5/azure-network-security-blog/deploy-azure-firewall-to-inspect-traffic-to-a-private-endpoint/ba-p/3714575)
- [Azure Firewall FTP support](ftp-support.md)
firewall Ftp Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/ftp-support.md
For more information, see [Microsoft.Network azureFirewalls](/azure/templates/mi
## Next steps
-To learn how to deploy an Azure Firewall, see [Deploy and configure Azure Firewall using Azure PowerShell](deploy-ps.md).
+- To learn more about FTP scenarios, see [Validating FTP traffic scenarios with Azure Firewall](https://techcommunity.microsoft.com/t5/azure-network-security-blog/validating-ftp-traffic-scenarios-with-azure-firewall/ba-p/3880683).
+- To learn how to deploy an Azure Firewall, see [Deploy and configure Azure Firewall using Azure PowerShell](deploy-ps.md).
firewall Protect Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-office-365.md
For more information, see [Use FQDN filtering in network rules](fqdn-filtering-n
## Next steps
+- For more information, see [Protect Office365 and Windows365 with Azure Firewall](https://techcommunity.microsoft.com/t5/azure-network-security-blog/protect-office365-and-windows365-with-azure-firewall/ba-p/3824533).
- Learn more about Office 365 network connectivity: [Microsoft 365 network connectivity overview](/microsoft-365/enterprise/microsoft-365-networking-overview)
frontdoor Classic Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/classic-overview.md
Title: Azure Front Door (classic)
-description: This article provides an overview of Azure Front Door (classic).
+ Title: Azure Front Door (classic) overview
+description: This article provides an overview of the Azure Front Door (classic) service.
Previously updated : 06/15/2022 Last updated : 08/09/2023 # customer intent: As an IT admin, I want to learn about Front Door and what I can use it for.
Azure Front Door (classic) is a global, scalable entry-point that uses the Micro
:::image type="content" source="./media/front-door-overview/front-door-visual-diagram.png" alt-text="Diagram of Azure Front Door (classic) routing user traffic to endpoints.":::
-Front Door (classic) works at Layer 7 (HTTP/HTTPS layer) using anycast protocol with split TCP and Microsoft's global network to improve global connectivity. Based on your routing method you can ensure that Front Door (classic) will route your client requests to the fastest and most available application backend. An application backend is any Internet-facing service hosted inside or outside of Azure. Front Door (classic) provides a range of [traffic-routing methods](front-door-routing-methods.md) and [backend health monitoring options](front-door-health-probes.md) to suit different application needs and automatic failover scenarios. Similar to [Traffic Manager](../traffic-manager/traffic-manager-overview.md), Front Door (classic) is resilient to failures, including failures to an entire Azure region.
+Front Door (classic) works at Layer 7 (HTTP/HTTPS layer) using anycast protocol with split TCP and Microsoft's global network to improve global connectivity. Based on your routing method, you can ensure that Front Door (classic) routes your client requests to the fastest and most available application backend. An application backend is any Internet-facing service hosted inside or outside of Azure. Front Door (classic) provides a range of [traffic-routing methods](front-door-routing-methods.md) and [backend health monitoring options](front-door-health-probes.md) to suit different application needs and automatic failover scenarios. Similar to [Traffic Manager](../traffic-manager/traffic-manager-overview.md), Front Door (classic) is resilient to failures, including failures to an entire Azure region.
>[!NOTE] > Azure provides a suite of fully managed load-balancing solutions for your scenarios.
For pricing information, see [Front Door Pricing](https://azure.microsoft.com/pr
Subscribe to the RSS feed and view the latest Azure Front Door feature updates on the [Azure Updates](https://azure.microsoft.com/updates/?category=networking&query=Azure%20Front%20Door) page. ## Next steps+ - Learn how to [create a Front Door (classic)](quickstart-create-front-door.md).-- Learn [how Front Door (classic) works](front-door-routing-architecture.md?pivots=front-door-classic).
+- Learn about [how Front Door (classic) works](front-door-routing-architecture.md?pivots=front-door-classic).
frontdoor Create Front Door Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-terraform.md
Title: 'Quickstart: Create an Azure Front Door Standard/Premium profile - Terraform'
+ Title: 'Quickstart: Create an Azure Front Door Standard/Premium profile using Terraform'
description: This quickstart describes how to create an Azure Front Door Standard/Premium using Terraform. Previously updated : 10/25/2022 Last updated : 8/11/2023
+content_well_notification:
+ - AI-contribution
-# Create a Front Door Standard/Premium profile using Terraform
+# Quickstart: Create an Azure Front Door Standard/Premium profile using Terraform
This quickstart describes how to use Terraform to create a Front Door profile to set up high availability for a web endpoint. [!INCLUDE [ddos-waf-recommendation](../../includes/ddos-waf-recommendation.md)]
-The steps in this article were tested with the following Terraform and Terraform provider versions:
+In this article, you learn how to:
-- [Terraform v1.3.2](https://releases.hashicorp.com/terraform/)-- [AzureRM Provider v.3.27.0](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs)
+> [!div class="checklist"]
+> * Create a random value for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet).
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group).
+> * Create a random value for the Front Door endpoint resource name and App Service app name using [random_id](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/id).
+> * Create a Front Door profile using [azurerm_cdn_frontdoor_profile](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cdn_frontdoor_profile).
+> * Create a Front Door endpoint using [azurerm_cdn_frontdoor_endpoint](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cdn_frontdoor_endpoint).
+> * Create a Front Door origin group using [azurerm_cdn_frontdoor_origin_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cdn_frontdoor_origin_group)
+> * Create a Front Door origin, which refers to the App Service app, using [azurerm_cdn_frontdoor_origin](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cdn_frontdoor_origin).
+> * Create a Front Door route using [azurerm_cdn_frontdoor_route](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cdn_frontdoor_route).
+> * Create an App Service plan using [azurerm_service_plan](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/service_plan).
+> * Create an App Service app using [azurerm_windows_web_app](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/windows_web_app).
## Prerequisites - - [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)-- IP address or FQDN of a website or web application. ## Implement the Terraform code
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-front-door-standard-premium). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-front-door-standard-premium/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+ 1. Create a directory in which to test the sample Terraform code and make it the current directory. 1. Create a file named `providers.tf` and insert the following code:
- [!code-terraform[master](../../terraform/quickstart/101-front-door-standard-premium/providers.tf)]
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-front-door-standard-premium/providers.tf":::
-1. Create a file named `resource-group.tf` and insert the following code:
+1. Create a file named `main.tf` and insert the following code:
- [!code-terraform[master](../../terraform/quickstart/101-front-door-standard-premium/resource-group.tf)]
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-front-door-standard-premium/main.tf":::
1. Create a file named `app-service.tf` and insert the following code:
- [!code-terraform[master](../../terraform/quickstart/101-front-door-standard-premium/app-service.tf)]
-
-1. Create a file named `front-door.tf` and insert the following code:
-
- [!code-terraform[master](../../terraform/quickstart/101-front-door-standard-premium/front-door.tf)]
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-front-door-standard-premium/app-service.tf":::
1. Create a file named `variables.tf` and insert the following code:
- [!code-terraform[master](../../terraform/quickstart/101-front-door-standard-premium/variables.tf)]
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-front-door-standard-premium/variables.tf":::
1. Create a file named `outputs.tf` and insert the following code:
- [!code-terraform[master](../../terraform/quickstart/101-front-door-standard-premium/outputs.tf)]
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-front-door-standard-premium/outputs.tf":::
## Initialize Terraform
The steps in this article were tested with the following Terraform and Terraform
## Verify the results
-Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
-
-# [Portal](#tab/Portal)
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Select **Resource groups** from the left pane.
-
-1. Select the FrontDoor resource group.
+1. Get the Front Door endpoint:
-1. Select the Front Door you created and you'll be able to see the endpoint hostname. Copy the hostname and paste it on to the address bar of a browser. Press enter and your request will automatically get routed to the web app.
+ ```console
+ terraform output -raw frontDoorEndpointHostName
+ ```
- :::image type="content" source="./media/create-front-door-bicep/front-door-bicep-web-app-origin-success.png" alt-text="Screenshot of the message: Your web app is running and waiting for your content.":::
+1. Paste the endpoint into a browser.
-# [Azure CLI](#tab/CLI)
-
-Run the following command:
-
-```azurecli-interactive
-az resource list --resource-group FrontDoor
-```
-
-# [PowerShell](#tab/PowerShell)
-
-Run the following command:
-
-```azurepowershell-interactive
-Get-AzResource -ResourceGroupName FrontDoor
-```
--
+ :::image type="content" source="./media/create-front-door-terraform/endpoint.png" alt-text="Screenshot of a successful connection to endpoint.":::
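If you prefer to verify from a shell instead of a browser, a quick check might look like the following; it assumes the output name used in the sample (`frontDoorEndpointHostName`):

```bash
# Fetch only the response headers from the new Front Door endpoint.
ENDPOINT=$(terraform output -raw frontDoorEndpointHostName)
curl -sI "https://${ENDPOINT}"
# A 200 OK response indicates the route to the App Service origin is working.
```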
## Clean up resources
Get-AzResource -ResourceGroupName FrontDoor
## Next steps
-In this quickstart, you deployed a simple Front Door profile using Terraform. [Learn more about Azure Front Door.](front-door-overview.md)
+> [!div class="nextstepaction"]
+> [Overview of Azure Front Door](front-door-overview.md)
frontdoor Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/endpoint.md
Title: 'Endpoints in Azure Front Door'
+ Title: Endpoints in Azure Front Door
+ description: Learn about endpoints when using Azure Front Door. -+ Previously updated : 06/22/2022- Last updated : 08/09/2023+ # Endpoints in Azure Front Door
-In Azure Front Door Standard/Premium, an *endpoint* is a logical grouping of one or more routes that are associated with domain names. Each endpoint is [assigned a domain name](#endpoint-domain-names) by Front Door, and you can associate your own custom domains by using routes.
+In Azure Front Door, an *endpoint* is a logical grouping of one or more routes that are associated with domain names. Each endpoint is [assigned a domain name](#endpoint-domain-names) by Front Door, and you can associate your own custom domains by using routes.
## How many endpoints should I create?
When you're planning the endpoints to create, consider the following factors:
Endpoint domain names are automatically generated when you create a new endpoint. Front Door generates a unique domain name based on several components, including: - The endpoint's name.-- A pseudorandom hash value, which is determined by Front Door. By using hash values as part of the domain name, Front Door helps to protect against [subdomain takeover](../security/fundamentals/subdomain-takeover.md) attacks.-- The base domain name for your Front Door environment. This is generally `z01.azurefd.net`.
+- A pseudorandom hash value, which Front Door determines. By using hash values as part of the domain name, Front Door helps to protect against [subdomain takeover](../security/fundamentals/subdomain-takeover.md) attacks.
+- The base domain name for your Front Door environment. This is generally `z01.azurefd.net`.
For example, suppose you have created an endpoint named `myendpoint`. The endpoint domain name might be `myendpoint-mdjf2jfgjf82mnzx.z01.azurefd.net`.
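If you create endpoints from the command line, the generated hostname is returned in the output. A minimal sketch, assuming the `az afd` command group is available and an existing Standard/Premium profile; all names are placeholders:

```bash
# Create an endpoint in an existing Front Door Standard/Premium profile and print its hostname.
az afd endpoint create \
  --resource-group MyResourceGroup \
  --profile-name MyFrontDoorProfile \
  --endpoint-name myendpoint \
  --enabled-state Enabled \
  --query hostName --output tsv
# Example output: myendpoint-<pseudorandom-hash>.z01.azurefd.net
```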
The following table lists the allowable values for the endpoint's domain reuse b
|--|--| | `TenantReuse` | This is the default value. Endpoints with the same name in the same Azure Active Directory tenant receive the same domain label. | | `SubscriptionReuse` | Endpoints with the same name in the same Azure subscription receive the same domain label. |
-| `ResourceGroupReuse` | Endpoints with the same name in the same resource group will receive the same domain label. |
-| `NoReuse` | Endpoints will always receive a new domain label. |
+| `ResourceGroupReuse` | Endpoints with the same name in the same resource group receive the same domain label. |
+| `NoReuse` | Endpoints always receive a new domain label. |
> [!NOTE] > You can't modify the reuse behavior of an existing Front Door endpoint. The reuse behavior only applies to newly created endpoints.
frontdoor Front Door Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-custom-domain-https.md
Title: 'Tutorial: Configure HTTPS on a custom domain for Azure Front Door (classic)'
+ Title: Configure HTTPS on a Front Door (classic) custom domain
+ description: In this tutorial, you learn how to enable and disable HTTPS on your Azure Front Door (classic) configuration for a custom domain. - Previously updated : 06/06/2022+ Last updated : 08/09/2023 - #Customer intent: As a website owner, I want to enable HTTPS on the custom domain in my Front Door (classic) so that my users can use my custom domain to access their content securely.
-# Tutorial: Configure HTTPS on a Front Door (classic) custom domain
+# Configure HTTPS on a Front Door (classic) custom domain
-This tutorial shows how to enable the HTTPS protocol for a custom domain that's associated with your Front Door (classic) under the frontend hosts section. By using the HTTPS protocol on your custom domain (for example, `https://www.contoso.com`), you ensure that your sensitive data is delivered securely via TLS/SSL encryption when it's sent across the internet. When your web browser is connected to a web site via HTTPS, it validates the web site's security certificate and verifies it's issued by a legitimate certificate authority. This process provides security and protects your web applications from attacks.
+This article shows how to enable the HTTPS protocol for a custom domain that's associated with your Front Door (classic) under the frontend hosts section. By using the HTTPS protocol on your custom domain (for example, `https://www.contoso.com`), you ensure that your sensitive data is delivered securely via TLS/SSL encryption when it's sent across the internet. When your web browser is connected to a website using HTTPS, it validates the website's security certificate and verifies that it was issued by a legitimate certificate authority. This process provides security and protects your web applications from malicious attacks.
-Azure Front Door supports HTTPS on a Front Door default hostname, by default. For example, if you create a Front Door (such as `https://contoso.azurefd.net`), HTTPS is automatically enabled for requests made to `https://contoso.azurefd.net`. However, once you onboard the custom domain 'www.contoso.com' you'll need to additionally enable HTTPS for this frontend host.
+Azure Front Door supports HTTPS on a Front Door default hostname, by default. For example, if you create a Front Door (such as `https://contoso.azurefd.net`), HTTPS is automatically enabled for requests made to `https://contoso.azurefd.net`. However, once you onboard the custom domain 'www.contoso.com', you also need to enable HTTPS for this frontend host.
Some of the key attributes of the custom HTTPS feature are:
Before you can complete the steps in this tutorial, you must first create a Fron
## TLS/SSL certificates
-To enable the HTTPS protocol for securely delivering content on a Front Door custom domain, you must use a TLS/SSL certificate. You can choose to use a certificate that is managed by Azure Front Door or use your own certificate.
+To enable the HTTPS protocol for securely delivering content on a Front Door (classic) custom domain, you must use a TLS/SSL certificate. You can choose to use a certificate that's managed by Azure Front Door or use your own certificate.
### Option 1 (default): Use a certificate managed by Front Door
-When you use a certificate managed by Azure Front Door, the HTTPS feature can be turned on with just a few clicks. Azure Front Door completely handles certificate management tasks such as procurement and renewal. After you enable the feature, the process starts immediately. If the custom domain is already mapped to the Front Door's default frontend host (`{hostname}.azurefd.net`), no further action is required. Front Door will process the steps and complete your request automatically. However, if your custom domain is mapped elsewhere, you must use email to validate your domain ownership.
+When you use a certificate managed by Azure Front Door, the HTTPS feature can be turned on with a few setting changes. Azure Front Door completely handles certificate management tasks such as procurement and renewal. After you enable the feature, the process starts immediately. If the custom domain is already mapped to the Front Door's default frontend host (`{hostname}.azurefd.net`), no further action is required. Front Door processes the steps and completes your request automatically. However, if your custom domain is mapped elsewhere, you must use email to validate your domain ownership.
To enable HTTPS on a custom domain, follow these steps:
To enable HTTPS on a custom domain, follow these steps:
### Option 2: Use your own certificate
-You can use your own certificate to enable the HTTPS feature. This process is done through an integration with Azure Key Vault, which allows you to store your certificates securely. Azure Front Door uses this secure mechanism to get your certificate and it requires a few extra steps. When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a non-allowed CA, your request will be rejected. If a certificate without complete chain is presented, the requests that involve that certificate are not guaranteed to work as expected.
+You can use your own certificate to enable the HTTPS feature. This process is done through an integration with Azure Key Vault, which allows you to store your certificates securely. Azure Front Door uses this secure mechanism to get your certificate and it requires a few extra steps. When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a nonallowed CA, your request is rejected. If a certificate without a complete chain is presented, the requests that involve that certificate aren't guaranteed to work as expected.
#### Prepare your key vault and certificate
Azure Front Door can now access this key vault and the certificates it contains.
## Validate the domain
-If you already have a custom domain in use that gets mapped to your custom endpoint with a CNAME record or you're using your own certificate, continue to [Custom domain is mapped to your Front Door](#custom-domain-is-mapped-to-your-front-door-by-a-cname-record). Otherwise, if the CNAME record entry for your domain no longer exists or it contains the afdverify subdomain, continue to [Custom domain is not mapped to your Front Door](#custom-domain-is-not-mapped-to-your-front-door).
+If you already have a custom domain in use that's mapped to your custom endpoint with a CNAME record or you're using your own certificate, continue to [Custom domain is mapped to your Front Door](#custom-domain-is-mapped-to-your-front-door-by-a-cname-record). Otherwise, if the CNAME record entry for your domain no longer exists or it contains the afdverify subdomain, continue to [Custom domain isn't mapped to your Front Door](#custom-domain-isnt-mapped-to-your-front-door).
### Custom domain is mapped to your Front Door by a CNAME record
Your CNAME record should be in the following format, where *Name* is your custom
For more information about CNAME records, see [Create the CNAME DNS record](../cdn/cdn-map-content-to-custom-domain.md).
-If your CNAME record is in the correct format, DigiCert automatically verifies your custom domain name and creates a dedicated certificate for your domain name. DigitCert won't send you a verification email and you won't need to approve your request. The certificate is valid for one year and will be autorenewed before it expires. Continue to [Wait for propagation](#wait-for-propagation).
+If your CNAME record is in the correct format, DigiCert automatically verifies your custom domain name and creates a dedicated certificate for your domain name. DigiCert doesn't send you a verification email and you don't need to approve your request. The certificate is valid for one year and autorenews before it expires. Continue to [Wait for propagation](#wait-for-propagation).
Automatic validation typically takes a few mins. If you don't see your domain validated within an hour, open a support ticket. >[!NOTE] >If you have a Certificate Authority Authorization (CAA) record with your DNS provider, it must include DigiCert as a valid CA. A CAA record allows domain owners to specify with their DNS providers which CAs are authorized to issue certificates for their domain. If a CA receives an order for a certificate for a domain that has a CAA record and that CA is not listed as an authorized issuer, it is prohibited from issuing the certificate to that domain or subdomain. For information about managing CAA records, see [Manage CAA records](https://support.dnsimple.com/articles/manage-caa-record/). For a CAA record tool, see [CAA Record Helper](https://sslmate.com/caa/).
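Before enabling HTTPS, it can help to confirm both the CNAME mapping and any CAA restrictions from a shell; the domain names below are examples only:

```bash
# Confirm the custom domain CNAMEs to the Front Door default frontend host.
dig +short CNAME www.contoso.com
# Expected output (example): contoso.azurefd.net.

# Check for a CAA record; if one exists, it must list DigiCert as an authorized CA.
dig +short CAA contoso.com
```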
-### Custom domain is not mapped to your Front Door
+### Custom domain isn't mapped to your Front Door
If the CNAME record entry for your endpoint no longer exists or it contains the afdverify subdomain, follow the rest of the instructions in this step.
-After you enable HTTPS on your custom domain, the DigiCert CA validates ownership of your domain by contacting its registrant, according to the domain's [WHOIS](http://whois.domaintools.com/) registrant information. Contact is made via the email address (by default) or the phone number listed in the WHOIS registration. You must complete domain validation before HTTPS will be active on your custom domain. You have six business days to approve the domain. Requests that aren't approved within six business days are automatically canceled. DigiCert domain validation works at the subdomain level. You'll need to prove ownership of each subdomain separately.
+After you enable HTTPS on your custom domain, the DigiCert CA validates ownership of your domain by contacting its registrant, according to the domain's [WHOIS](http://whois.domaintools.com/) registrant information. Contact is made via the email address (by default) or the phone number listed in the WHOIS registration. You must complete domain validation before HTTPS is active on your custom domain. You have six business days to approve the domain. Requests that aren't approved within six business days are automatically canceled. DigiCert domain validation works at the subdomain level. You need to prove ownership of each subdomain separately.
![WHOIS record](./media/front-door-custom-domain-https/whois-record.png)
webmaster@&lt;your-domain-name.com&gt;
hostmaster@&lt;your-domain-name.com&gt; postmaster@&lt;your-domain-name.com&gt;
-You should receive an email in a few minutes, similar to the following example, asking you to approve the request. If you are using a spam filter, add no-reply@digitalcertvalidation.com to its allowlist. Under certain scenarios, DigiCert may be unable to fetch the domain contacts from the WHOIS registrant information to send you an email. If you don't receive an email within 24 hours, contact Microsoft support.
+You should receive an email in a few minutes, similar to the following example, asking you to approve the request. If you're using a spam filter, add no-reply@digitalcertvalidation.com to its allowlist. Under certain scenarios, DigiCert may be unable to fetch the domain contacts from the WHOIS registrant information to send you an email. If you don't receive an email within 24 hours, contact Microsoft support.
When you select the approval link, you're directed to an online approval form. Follow the instructions on the form; you have two verification options:
When you select the approval link, you're directed to an online approval form. F
- You can approve just the specific host name used in this request. Extra approval is required for subsequent requests.
-After approval, DigiCert completes the certificate creation for your custom domain name. The certificate is valid for one year and will be autorenewed before it's expired.
+After approval, DigiCert completes the certificate creation for your custom domain name. The certificate is valid for one year and autorenews before it expires.
## Wait for propagation
After the domain name is validated, it can take up to 6-8 hours for the custom d
### Operation progress
-The following table shows the operation progress that occurs when you enable HTTPS. After you enable HTTPS, four operation steps appear in the custom domain dialog. As each step becomes active, more substep details appear under the step as it progresses. Not all of these substeps will occur. After a step successfully completes, a green check mark appears next to it.
+The following table shows the operation progress that occurs when you enable HTTPS. After you enable HTTPS, four operation steps appear in the custom domain dialog. As each step becomes active, more substep details appear under the step as it progresses. Not all of these substeps occur. After a step successfully completes, a green check mark appears next to it.
| Operation step | Operation substep details | | | |
-| 1 Submitting request | Submitting request |
+| 1. Submitting request | Submitting request |
| | Your HTTPS request is being submitted. | | | Your HTTPS request has been submitted successfully. |
-| 2 Domain validation | Domain is automatically validated if it's CNAME mapped to the default .azurefd.net frontend host of your Front Door. Otherwise, a verification request will be sent to the email listed in your domain's registration record (WHOIS registrant). Verify the domain as soon as possible. |
+| 2. Domain validation | Domain is automatically validated if it's CNAME mapped to the default .azurefd.net frontend host of your Front Door. Otherwise, a verification request is sent to the email listed in your domain's registration record (WHOIS registrant). Verify the domain as soon as possible. |
| | Your domain ownership has been successfully validated. | | | Domain ownership validation request expired (customer likely didn't respond within 6 days). HTTPS won't be enabled on your domain. * |
-| | Domain ownership validation request was rejected by the customer. HTTPS won't be enabled on your domain. * |
-| 3 Certificate provisioning | The certificate authority is currently issuing the certificate needed to enable HTTPS on your domain. |
+| | Domain ownership validation request rejected by the customer. HTTPS won't be enabled on your domain. * |
+| 3. Certificate provisioning | The certificate authority is currently issuing the certificate needed to enable HTTPS on your domain. |
| | The certificate has been issued and is currently being deployed for your Front Door. This process could take from several minutes to an hour to complete. | | | The certificate has been successfully deployed for your Front Door. |
-| 4 Complete | HTTPS has been successfully enabled on your domain. |
+| 4. Complete | HTTPS has been successfully enabled on your domain. |
\* This message doesn't appear unless an error has occurred.
In the preceding steps, you enabled the HTTPS protocol on your custom domain. If
2. In the list of frontend hosts, select the custom domain for which you want to disable HTTPS.
-3. Click **Disabled** to disable HTTPS, then click **Save**.
+3. Select **Disabled** to disable HTTPS, then select **Save**.
### Wait for propagation
The following table shows the operation progress that occurs when you disable HT
| Operation progress | Operation details | | | |
-| 1 Submitting request | Submitting your request |
-| 2 Certificate deprovisioning | Deleting certificate |
-| 3 Complete | Certificate deleted |
+| 1. Submitting request | Submitting your request |
+| 2. Certificate deprovisioning | Deleting certificate |
+| 3. Complete | Certificate deleted |
## Next steps
-In this tutorial, you learned how to:
-
-* Upload a certificate to Key Vault.
-* Validate a domain.
-* Enable HTTPS for your custom domain.
-
-To learn how to set up a geo-filtering policy for your Front Door, continue to the next tutorial.
-
-> [!div class="nextstepaction"]
-> [Set up a geo-filtering policy](front-door-geo-filtering.md)
+To learn how to [set up a geo-filtering policy](front-door-geo-filtering.md) for your Front Door, continue to the next tutorial.
frontdoor Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/manager.md
Title: Azure Front Door manager
-description: This article is about concepts of the Front Door manager. You'll learn about routes and security policies in an endpoint.
+ Title: Front Door manager
+
+description: This article is about how Front Door manager can help you manage your routing and security policy for an endpoint.
Previously updated : 06/13/2022 Last updated : 08/09/2023
An [*endpoint*](endpoint.md) is a logical grouping of one or more routes that ar
> * Traffic will only flow to origins once both the endpoint and route is **enabled**. >
-Domains configured within a route can either be a custom domain or an endpoint domain. For more information about custom domains, see [create a custom domain](standard-premium/how-to-add-custom-domain.md) with Azure Front Door. Endpoint domains refer to the auto generated domain name when you create a new endpoint. The name is a unique endpoint hostname with a hash value in the format of `endpointname-hash.z01.azurefd.net`. The endpoint domain will be accessible if you associate it with a route.
+Domains configured within a route can either be a custom domain or an endpoint domain. For more information about custom domains, see [create a custom domain](standard-premium/how-to-add-custom-domain.md) with Azure Front Door. Endpoint domains refer to the auto generated domain name when you create a new endpoint. The name is a unique endpoint hostname with a hash value in the format of `endpointname-hash.z01.azurefd.net`. The endpoint domain is accessible if you associate it with a route.
## Security policy in an endpoint
-A security policy is an association of one or more domains with a Web Application Firewall (WAF) policy. The WAF policy will provide centralized protection for your web applications. If you manage security policies using the Azure portal, you can only associate a security policy with domains that are in the Routes configuration of that endpoint.
+A security policy is an association of one or more domains with a Web Application Firewall (WAF) policy. The WAF policy provides centralized protection for your web applications. If you manage security policies using the Azure portal, you can only associate a security policy with domains that are in the Routes configuration of that endpoint.
> [!TIP] > * If you see one of your domains is unhealthy, you can select the domain to take you to the domains page. From there you can take appropriate actions to troubleshoot the unhealthy domain.
frontdoor Quickstart Create Front Door Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-terraform.md
Title: 'Quickstart: Create an Azure Front Door (classic) - Terraform'
+ Title: 'Quickstart: Create an Azure Front Door (classic) using Terraform'
description: This quickstart describes how to create an Azure Front Door Service using Terraform. Previously updated : 10/25/2022 Last updated : 8/11/2023
+content_well_notification:
+ - AI-contribution
-# Create a Front Door (classic) using Terraform
+# Quickstart: Create an Azure Front Door (classic) using Terraform
This quickstart describes how to use Terraform to create a Front Door (classic) profile to set up high availability for a web endpoint.
-The steps in this article were tested with the following Terraform and Terraform provider versions:
+In this article, you learn how to:
-- [Terraform v1.3.2](https://releases.hashicorp.com/terraform/)-- [AzureRM Provider v.3.27.0](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs)
+> [!div class="checklist"]
+> * Create a random value for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet).
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group).
+> * Create a random value for the Front Door endpoint host name using [random_id](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/id).
> * Create a Front Door (classic) resource using [azurerm_frontdoor](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/frontdoor).
## Prerequisites
The steps in this article were tested with the following Terraform and Terraform
## Implement the Terraform code
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-front-door-classic). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-front-door-classic/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+ 1. Create a directory in which to test the sample Terraform code and make it the current directory. 1. Create a file named `providers.tf` and insert the following code:
- [!code-terraform[master](../../terraform/quickstart/101-front-door-classic/providers.tf)]
-
-1. Create a file named `resource-group.tf` and insert the following code:
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-front-door-classic/providers.tf":::
- [!code-terraform[master](../../terraform/quickstart/101-front-door-classic/resource-group.tf)]
+1. Create a file named `main.tf` and insert the following code:
-1. Create a file named `front-door.tf` and insert the following code:
-
- [!code-terraform[master](../../terraform/quickstart/101-front-door-classic/front-door.tf)]
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-front-door-classic/main.tf":::
1. Create a file named `variables.tf` and insert the following code:
- [!code-terraform[master](../../terraform/quickstart/101-front-door-classic/variables.tf)]
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-front-door-classic/variables.tf":::
-1. Create a file named `terraform.tfvars` and insert the following code, being sure to update the value to your own backend hostname:
+1. Create a file named `outputs.tf` and insert the following code, being sure to update the value to your own backend hostname:
- ```terraform
- backend_address = "<your backend hostname>"
- ```
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-front-door-classic/outputs.tf":::
## Initialize Terraform
The steps in this article were tested with the following Terraform and Terraform
## Verify the results
-Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
-
-# [Portal](#tab/Portal)
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Select **Resource groups** from the left pane.
-
-1. Select the FrontDoor resource group.
+1. Get the Front Door endpoint:
-1. Select the Front Door you created and you'll be able to see the endpoint hostname. Copy the hostname and paste it on to the address bar of a browser. Press enter and your request will automatically get routed to the web app.
-
- :::image type="content" source="./media/create-front-door-bicep/front-door-bicep-web-app-origin-success.png" alt-text="Screenshot of the message: Your web app is running and waiting for your content.":::
-
-# [Azure CLI](#tab/CLI)
-
-Run the following command:
-
-```azurecli-interactive
-az resource list --resource-group FrontDoor
-```
-
-# [PowerShell](#tab/PowerShell)
-
-Run the following command:
+ ```console
+ terraform output -raw frontDoorEndpointHostName
+ ```
-```azurepowershell-interactive
-Get-AzResource -ResourceGroupName FrontDoor
-```
+1. Paste the endpoint into a browser.
-
+ :::image type="content" source="./media/quickstart-create-front-door-terraform/endpoint.png" alt-text="Screenshot of a successful connection to endpoint.":::
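You can also check the endpoint from a shell; this sketch assumes the sample's `frontDoorEndpointHostName` output:

```bash
# Print just the HTTP status code returned through the Front Door (classic) endpoint.
curl -s -o /dev/null -w '%{http_code}\n' "https://$(terraform output -raw frontDoorEndpointHostName)"
# 200 indicates the frontend host is routing to your backend as expected.
```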
## Clean up resources
Get-AzResource -ResourceGroupName FrontDoor
## Next steps
-In this quickstart, you deployed a simple Front Door (classic) profile using Terraform. [Learn more about Azure Front Door.](front-door-overview.md)
+> [!div class="nextstepaction"]
+> [Overview of Azure Front Door](front-door-overview.md)
global-secure-access How To Compliant Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-compliant-network.md
description: Learn how to require known compliant network locations in order to
Previously updated : 07/27/2023 Last updated : 08/09/2023
The following example shows a Conditional Access policy that requires Exchange O
1. Under **Include**, select **All users**. 1. Under **Exclude**, select **Users and groups** and choose your organization's [emergency access or break-glass accounts](#user-exclusions). 1. Under **Target resources** > **Include**, and select **Select apps**.
- 1. Choose **Office 365 Exchange Online** and **Office 365 SharePoint Online**.
+ 1. Choose **Office 365 Exchange Online** and/or **Office 365 SharePoint Online**.
+ 1. Office 365 apps are currently NOT supported, so do not select this option.
1. Under **Conditions** > **Location**. 1. Set **Configure** to **Yes** 1. Under **Include**, select **Any location**.
global-secure-access How To Configure Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-configure-connectors.md
Previously updated : 06/27/2023 Last updated : 08/09/2023
User identities must be synchronized from an on-premises directory or created di
To use Application Proxy, you need a Windows server running Windows Server 2012 R2 or later. You'll install the Application Proxy connector on the server. This connector server needs to connect to the Application Proxy services in Azure, and the on-premises applications that you plan to publish.
-For high availability in your environment, we recommend having more than one Windows server.
+- For high availability in your environment, we recommend having more than one Windows server.
+- The connector requires .NET version 4.7.1 or later.
+- For more information, see [App Proxy connectors](../active-directory/app-proxy/application-proxy-connectors.md#requirements-and-deployment) and [Determine which .NET framework versions are installed](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed).
### Prepare your on-premises environment
To use Private Access, install a connector on each Windows server you're using f
> Setting up App Proxy connectors and connector groups require planning and testing to ensure you have the right configuration for your organization. If you don't already have connector groups set up, pause this process and return when you have a connector group ready. > >The minimum version of connector required for Private Access is **1.5.3417.0**.
+>Starting with version 1.5.3437.0, .NET version 4.7.1 or greater is required for successful installation or upgrade.
+ **To install the connector**:
governance Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/assignments.md
Title: Understand machine configuration assignment resources description: Machine configuration creates extension resources named machine configuration assignments that map configurations to machines. Previously updated : 04/18/2023 Last updated : 08/10/2023 # Understand machine configuration assignment resources
more information, see [getting compliance data][02].
When an Azure Policy assignment is deleted, if the policy created a machine configuration assignment, the machine configuration assignment is also deleted.
-When an Azure Policy assignment is deleted, you need to manually delete any machine configuration
-assignments the policy created. You can do so by navigating to the guest assignments page on Azure
-portal and deleting the assignment there.
+When an Azure Policy assignment is deleted from an initiative, you need to manually delete any
+machine configuration assignments the policy created. You can do so by navigating to the guest
+assignments page on Azure portal and deleting the assignment there.
## Manually creating machine configuration assignments
governance How To Create Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to-create-package.md
Title: How to create custom machine configuration package artifacts description: Learn how to create a machine configuration package file. Previously updated : 05/16/2023 Last updated : 08/11/2023 # How to create custom machine configuration package artifacts
both Windows and Linux. The DSC configuration defines the condition that the mac
> To use machine configuration packages that apply configurations, Azure VM guest configuration > extension version 1.26.24 or later, or Arc agent 1.10.0 or later, is required. >
-> The **GuestConfiguration** module is only available on Ubuntu 18. However, the package and
-> policies produced by the module can be used on any Linux distribution and version supported in
-> Azure or Arc.
+> The **GuestConfiguration** module is only available on Ubuntu 18 and later. However, the package
+> and policies produced by the module can be used on any Linux distribution and version supported
+> in Azure or Arc.
> > Testing packages on macOS isn't available. >
governance How To Sign Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to-sign-package.md
Parameters of the `Protect-GuestConfigurationPackage` cmdlet:
## Certificate requirements
-The machine configuration agent expects the certificate public key to be present in "Trusted Root
-Certificate Authorities" on Windows machines and in the path `/usr/local/share/ca-certificates/gc`
+The machine configuration agent expects the certificate public key to be present in "Trusted Publishers" on Windows machines and in the path `/usr/local/share/ca-certificates/gc`
on Linux machines. For the node to verify signed content, install the certificate public key on the machine before applying the custom policy. This process can be done using any technique inside the VM or by using Azure Policy. An example template is available
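For the Linux path, here's a minimal sketch of installing the public key by copying it into the expected directory. The file name `code-signing.crt` is a hypothetical placeholder for your exported certificate public key:

```bash
# Create the directory the machine configuration agent expects and copy the
# signing certificate's public key into it (hypothetical file name)
sudo mkdir -p /usr/local/share/ca-certificates/gc
sudo cp ./code-signing.crt /usr/local/share/ca-certificates/gc/
```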
governance How To Test Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to-test-package.md
Title: How to test machine configuration package artifacts description: The experience creating and testing packages that audit or apply configurations to machines. Previously updated : 04/18/2023 Last updated : 08/11/2023 # How to test machine configuration package artifacts
Before you can begin testing, you need to [set up your authoring environment][01
> To use machine configuration packages that apply configurations, Azure VM guest configuration > extension version 1.26.24 or later, or Arc agent 1.10.0 or later, is required. >
-> The **GuestConfiguration** module is only available on Ubuntu 18. However, the package and
-> policies produced by the module can be used on any Linux distro/version supported in Azure or
+> The **GuestConfiguration** module is only available on Ubuntu 18 and later. However, the package
+> and policies produced by the module can be used on any Linux distro/version supported in Azure or
> Arc. > > Testing packages on macOS isn't available.
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
see [Tag support for Azure resources](../../../azure-resource-manager/management
The following Resource Provider modes are fully supported: - `Microsoft.Kubernetes.Data` for managing Kubernetes clusters and components such as pods, containers, and ingresses. Supported for Azure Kubernetes Service clusters and [Azure Arc-enabled Kubernetes clusters](../../../aks/intro-kubernetes.md). Definitions
- using this Resource Provider mode use effects _audit_, _deny_, and _disabled_.
+ using this Resource Provider mode use effects _audit_, _deny_, and _disabled_.
- `Microsoft.KeyVault.Data` for managing vaults and certificates in [Azure Key Vault](../../../key-vault/general/overview.md). For more information on these policy definitions, see
you can reuse that policy for different scenarios by using different values.
> **defaultValue** property. This prevents existing assignments of the policy or initiative from > indirectly being made invalid.
+> [!NOTE]
+> Parameters can't be removed from a policy definition that's been assigned.
+ ### Parameter properties A parameter has the following properties that are used in the policy definition:
governance Samples By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-category.md
Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature
[!INCLUDE [azure-resource-graph-samples-cat-networking](../../../../includes/resource-graph/samples/bycat/networking.md)] + ## Resource health [!INCLUDE [azure-resource-graph-samples-cat-resource-health](../../../../includes/resource-graph/samples/bycat/resource-health.md)]
governance Samples By Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-table.md
details, see [Resource Graph tables](../concepts/query-language.md#resource-grap
[!INCLUDE [azure-resource-graph-samples-table-resources](../../../../includes/resource-graph/samples/bytable/resources.md)] + ## SecurityResources [!INCLUDE [azure-resource-graph-samples-table-securityresources](../../../../includes/resource-graph/samples/bytable/securityresources.md)]
guides Azure Operations Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/guides/operations/azure-operations-guide.md
For more information, see [Get started with Azure Table storage](../../cosmos-db
#### Queue storage Azure Queue storage provides cloud messaging between application components. In designing applications for scale, application components are often decoupled so that they can scale independently. Queue storage delivers asynchronous messaging for communication between application components, whether they are running in the cloud, on the desktop, on an on-premises server, or on a mobile device. Queue storage also supports managing asynchronous tasks and building process workflows.
-For more information, see [Get started with Azure Queue storage](/azure/storage/queues/storage/queues/storage-quickstart-queues-dotnet?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli).
+For more information, see [Get started with Azure Queue storage](/azure/storage/queues/).
### Deploying a storage account
hdinsight Connect Kafka With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/connect-kafka-with-vnet.md
+
+ Title: Connect HDInsight Kafka cluster with client VM in different VNet on Azure HDInsight
+description: Learn how to connect HDInsight Kafka cluster with Client VM in different VNet on Azure HDInsight
++ Last updated : 08/10/2023++
+# Connect HDInsight Kafka cluster with client VM in different VNet
+
+This article describes the steps to set up the connectivity between a virtual machine (VM) and HDInsight Kafka cluster residing in two different virtual networks (VNet).
+
+## Connect HDInsight Kafka cluster with client VM in different VNet
+
+1. Create two different virtual networks where HDInsight Kafka cluster and VM are hosted respectively. For more information, see [Create a virtual network using Azure portal](/azure/virtual-network/quick-create-portal).
+
+1. Peer these two virtual networks. The IP address spaces of their subnets must not overlap. For more information, see [Connect virtual networks with virtual network peering using the Azure portal](/azure/virtual-network/tutorial-connect-virtual-networks-portal). A CLI sketch for the peering step follows this list.
+
+1. Ensure that the peering status shows as connected.
+
+ :::image type="content" source="./media/connect-kafka-with-vnet/vnet-peering.png" alt-text="Screenshot showing VNet peering." border="true" lightbox="./media/connect-kafka-with-vnet/vnet-peering.png":::
+
+1. Create HDInsight Kafka cluster in first VNet `hdi-primary-vnet`. For more information, see [Create an HDInsight Kafka cluster](./apache-kafka-get-started.md#create-an-apache-kafka-cluster).
+
+1. Create a virtual machine (VM) in the second VNet `hilo-secondary-vnet`. While creating the VM, specify the second VNet name where this virtual machine must be deployed. For more information, see [Create a Linux virtual machine in the Azure portal](/azure/virtual-machines/linux/quick-create-portal?tabs=ubuntu).
+
+ > [!NOTE]
+    > The IP addresses of the Kafka VMs don't change while the VMs remain in the cluster. An IP changes only when you manually replace a VM in the cluster. You can check the latest IPs from the Ambari portal.
+
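+If you prefer the CLI to the portal for the peering step, the following Azure CLI sketch shows the general shape. The resource group name `my-resource-group` is a hypothetical placeholder, and the sketch assumes both virtual networks are in the same resource group; `hdi-primary-vnet` and `hilo-secondary-vnet` are the VNet names used in this article.
+
+```azurecli
+# Peer the primary VNet to the secondary VNet (placeholder resource group name)
+az network vnet peering create --name primary-to-secondary \
+  --resource-group my-resource-group \
+  --vnet-name hdi-primary-vnet \
+  --remote-vnet hilo-secondary-vnet \
+  --allow-vnet-access
+
+# Create the reverse peering so traffic flows in both directions
+az network vnet peering create --name secondary-to-primary \
+  --resource-group my-resource-group \
+  --vnet-name hilo-secondary-vnet \
+  --remote-vnet hdi-primary-vnet \
+  --allow-vnet-access
+```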
+## Methods to connect to HDInsight Kafka cluster from client VM
+
+1. Configure Kafka for IP advertising: use `Kafka IP advertising` so that the private IPs of the Kafka worker nodes are returned to clients in the other VNet. After IP advertising is done, you can optionally use a private DNS setup for DNS resolution of the worker node FQDNs.
+1. Update the /etc/hosts file on the client machine: update the `/etc/hosts` file on the client machine with the entries from the `/etc/hosts` file of a Kafka head or worker node.
+
+> [!NOTE]
+> * Private DNS setup is optional after IP advertising. It's required only if you want to use the FQDNs of the Kafka worker nodes with a private DNS domain name instead of their private IPs. A CLI sketch for the private DNS setup follows this note.
+> * The IP addresses of the Kafka VMs don't change while the VMs remain in the cluster. An IP changes only when you manually replace a VM in the cluster. You can check the latest IPs from the Ambari portal.
+
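+If you opt for the private DNS setup, the following Azure CLI sketch shows one way to create a private DNS zone, link it to the client VNet, and add an A record for a worker node. The zone name `contoso.internal`, the resource group `my-resource-group`, and the worker node IP address are hypothetical placeholders:
+
+```azurecli
+# Create a private DNS zone (hypothetical zone name and resource group)
+az network private-dns zone create \
+  --resource-group my-resource-group \
+  --name contoso.internal
+
+# Link the zone to the client VNet so the VM can resolve the records
+az network private-dns link vnet create \
+  --resource-group my-resource-group \
+  --zone-name contoso.internal \
+  --name link-to-secondary-vnet \
+  --virtual-network hilo-secondary-vnet \
+  --registration-enabled false
+
+# Add an A record for a Kafka worker node (hypothetical IP address)
+az network private-dns record-set a add-record \
+  --resource-group my-resource-group \
+  --zone-name contoso.internal \
+  --record-set-name wn0-hdi-ka \
+  --ipv4-address 10.0.0.18
+```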
+### Configure Kafka for IP advertising
+This configuration allows the client to connect using broker IP addresses instead of domain names. By default, Apache Zookeeper returns the domain name of the Kafka brokers to clients.
+
+This configuration doesn't work with the VPN software client, as it can't use name resolution for entities in the virtual network.
+
+Use the following steps to configure HDInsight Kafka to advertise IP addresses instead of domain names:
+
+1. Using a web browser, go to `https://CLUSTERNAME.azurehdinsight.net`. Replace `CLUSTERNAME` with the HDInsight Kafka cluster name.
+1. When prompted, use the HTTPS `username` and `password` for the cluster. The Ambari Web UI for the cluster is displayed.
+1. To view information on Kafka, select `Kafka` from the left panel and then select `Configs`.
+
+ :::image type="content" source="./media/connect-kafka-with-vnet/kafka-config.png" alt-text="Screenshot showing Kafka VNet configurations." border="true" lightbox="./media/connect-kafka-with-vnet/kafka-config.png":::
+
+1. To access the `kafka-env` configuration on the Ambari dashboard, enter `kafka-env` in the filter field at the top right of the Ambari UI.
+
+ :::image type="content" source="./media/connect-kafka-with-vnet/kafka-env.png" alt-text="Screenshot showing Kafka environment." border="true" lightbox="./media/connect-kafka-with-vnet/kafka-env.png":::
+
+1. To configure Kafka to advertise IP addresses, add the following text to the bottom of the `kafka-env-template` field:
+
+ ```shell
+ # Configure Kafka to advertise IP addresses instead of FQDN
+ IP_ADDRESS=$(hostname -i)
+ echo advertised.listeners=$IP_ADDRESS
+ sed -i.bak -e '/advertised/{/advertised@/!d;}' /usr/hdp/current/kafka-broker/conf/server.properties
+ echo "advertised.listeners=PLAINTEXT://$IP_ADDRESS:9092" >> /usr/hdp/current/kafka-broker/conf/server.properties
+ ```
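+    After Kafka restarts later in these steps, you can optionally confirm on a worker node that the setting took effect. This quick check assumes the default HDInsight path used above:
+
+    ```shell
+    # Verify that the advertised listener now uses the broker's private IP address
+    grep advertised.listeners /usr/hdp/current/kafka-broker/conf/server.properties
+    ```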
+1. To configure the interface that Kafka listens on, enter `listeners` in the filter field on the top right.
+
+1. To configure Kafka to listen on all network interfaces, change the value in the `listeners` field to `PLAINTEXT://0.0.0.0:9092`.
+1. To save the configuration changes, use the `Save` button. Enter a text message describing the changes. Select `OK` once the changes have been saved.
+
+ :::image type="content" source="./media/connect-kafka-with-vnet/save-kafka-broker.png" alt-text="Screenshot showing the save button." border="true" lightbox="./media/connect-kafka-with-vnet/save-kafka-broker.png":::
+
+1. To prevent errors when restarting Kafka, use the `Actions` button and select `Turn On Maintenance Mode`. Select `OK` to complete this operation.
+
+ :::image type="content" source="./media/connect-kafka-with-vnet/action-button.png" alt-text="Screenshot showing action button." border="true" lightbox="./media/connect-kafka-with-vnet/action-button.png":::
+
+1. To restart Kafka, use the `Restart` button and select `Restart All Affected`. Confirm the restart, and then use the `OK` button after the operation has completed.
+
+ :::image type="content" source="./media/connect-kafka-with-vnet/restart-button.png" alt-text="Screenshot showing how to restart." border="true" lightbox="./media/connect-kafka-with-vnet/restart-button.png":::
+
+1. To disable maintenance mode, use the `Actions` button and select `Turn Off Maintenance Mode`. Select `OK` to complete this operation.
+1. Now you can run your jobs from the client VM using the Kafka broker IP addresses. To check the IP addresses of the worker nodes from the Ambari portal, select `Hosts` in the left panel. A curl sketch after the following screenshot shows how to retrieve the same information from the Ambari REST API.
+
+ :::image type="content" source="./media/connect-kafka-with-vnet/ambari-hosts.png" alt-text="Screenshot showing the worker node IP for Ambari." border="true" lightbox="./media/connect-kafka-with-vnet/ambari-hosts.png":::
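+    As an alternative to the portal, the following curl sketch queries the Ambari REST API for the host IP addresses. `CLUSTERNAME` and `PASSWORD` are placeholders for your cluster name and cluster login password:
+
+    ```shell
+    # List cluster hosts and their IP addresses through the Ambari REST API
+    curl -sS -u admin:PASSWORD \
+      "https://CLUSTERNAME.azurehdinsight.net/api/v1/clusters/CLUSTERNAME/hosts?fields=Hosts/ip"
+    ```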
+
+1. Use the [sample Git repository](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started) to create Kafka topics, and to produce and consume data from those topics.
+
+ ```shell
+    # In the previous example, the IP of worker node 0 is `broker1-ip` and the IP of worker node 1 is `broker2-ip`
+    # Create Kafka Topic
+    java -jar kafka-producer-consumer.jar create <topic_name> $KAFKABROKERS
+    java -jar kafka-producer-consumer.jar create test broker1-ip:9092,broker2-ip:9092
+ ```
+ :::image type="content" source="./media/connect-kafka-with-vnet/create-topic.png" alt-text="Screenshot showing how to create Kafka topic." border="true" lightbox="./media/connect-kafka-with-vnet/create-topic.png":::
+
+ ```shell
+ # Produce Data in Topic
+ java -jar kafka-producer-consumer.jar producer <topic_name> $KAFKABROKERS
+    java -jar kafka-producer-consumer.jar producer test broker1-ip:9092,broker2-ip:9092
+ ```
+ :::image type="content" source="./media/connect-kafka-with-vnet/producer.png" alt-text="Screenshot showing how to view Kafka producer." border="true" lightbox="./media/connect-kafka-with-vnet/producer.png":::
+
+ ```shell
+ # Consume Data from Topic
+ java -jar kafka-producer-consumer.jar consumer <topic_name> $KAFKABROKERS
+ java -jar kafka-producer-consumer.jar consumer test broker1-ip:9092,broker2-ip:9092
+ ```
+ :::image type="content" source="./media/connect-kafka-with-vnet/consumer.png" alt-text="Screenshot showing Kafka consumer section." border="true" lightbox="./media/connect-kafka-with-vnet/consumer.png":::
+
+ > [!NOTE]
+    > It's recommended to add all the broker IPs to **$KAFKABROKERS** for fault tolerance.
+
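+    For convenience, you can define `$KAFKABROKERS` once in your shell before running the commands above. The IP addresses below are hypothetical placeholders for your broker IPs:
+
+    ```shell
+    # Hypothetical broker IPs; include every broker for fault tolerance
+    export KAFKABROKERS="broker1-ip:9092,broker2-ip:9092"
+    ```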
+### Update /etc/hosts file in client machine
+
+1. Copy the head node and worker node entries from the `/etc/hosts` file of a Kafka head node to the `/etc/hosts` file of the client VM. A hypothetical example of the copied entries follows.
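+    The copied entries typically look like the following. The IP addresses are hypothetical placeholders, and the internal DNS suffix matches the example cluster used later in this article:
+
+    ```
+    10.0.0.16  hn0-hdi-ka.mvml5coqo4xuzc1nckq1sltcxf.bx.internal.cloudapp.net hn0-hdi-ka
+    10.0.0.17  hn1-hdi-ka.mvml5coqo4xuzc1nckq1sltcxf.bx.internal.cloudapp.net hn1-hdi-ka
+    10.0.0.18  wn0-hdi-ka.mvml5coqo4xuzc1nckq1sltcxf.bx.internal.cloudapp.net wn0-hdi-ka
+    10.0.0.19  wn1-hdi-ka.mvml5coqo4xuzc1nckq1sltcxf.bx.internal.cloudapp.net wn1-hdi-ka
+    ```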
+
+1. After these entries are made, try to reach the Ambari dashboard of the Kafka cluster using a web browser or the curl command with the hn0 or hn1 FQDN, as follows:
+
+ #### If Client VM is using Linux OS
+
+ ```
+ # Execute curl command
+ curl hn0-hdi-ka.mvml5coqo4xuzc1nckq1sltcxf.bx.internal.cloudapp.net:8080
+
+ # Output
+ <!--
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ -->
+ <!DOCTYPE html>
+ <html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <meta http-equiv="X-UA-Compatible" content="IE=edge">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
+ <link rel="stylesheet" href="stylesheets/vendor.css">
+ <link rel="stylesheet" href="stylesheets/app.css">
+ <script src="javascripts/vendor.js"></script>
+ <script src="javascripts/app.js"></script>
+ <script>
+ $(document).ready(function() {
+ require('initialize');
+ // make favicon work in firefox
+ $('link[type*=icon]').detach().appendTo('head');
+ $('#loading').remove();
+ });
+ </script>
+ <title>Ambari</title>
+ <link rel="shortcut icon" href="/img/logo.png" type="image/x-icon">
+ </head>
+ <body>
+ <div id="loading">...Loading...</div>
+ <div id="wrapper">
+ <!-- ApplicationView -->
+ </div>
+ <footer>
+ <div class="container footer-links">
+ <a data-qa="license-link" href="http://www.apache.org/licenses/LICENSE-2.0" target="_blank">Licensed under the Apache License, Version 2.0</a>.
+ <a data-qa="third-party-link" href="/licenses/NOTICE.txt" target="_blank">See third-party tools/resources that Ambari uses and their respective authors</a>
+ </div>
+ </footer>
+ </body>
+ </html>
+ ```
+
+#### If Client VM is using Windows OS
+
+1. Go to the overview page of `hdi-kafka` and select the Ambari view to get the URL.
+
+1. Sign in with the username `admin` and the password `YOUR_PASSWORD` that you set while creating the cluster.
+
+ > [!NOTE]
+    > 1. On a Windows VM, the static hostnames need to be added to the hosts file present in the path `C:\Windows\System32\drivers\etc\`.
+    > 1. This article assumes that the Ambari server is active on `Head Node 0`. If the Ambari server is active on `Head Node 1`, use the FQDN of hn1 to access the Ambari UI.
+
+ :::image type="content" source="./media/connect-kafka-with-vnet/dashboard.png" alt-text="Screenshot showing the dashboard." border="true" lightbox="./media/connect-kafka-with-vnet/dashboard.png":::
+
+1. You can also send messages to a Kafka topic and read from topics on the VM. For that, you can use the following sample Java application.
+
+1. Use the sample Git repository to create Kafka topics, and to produce and consume data from those topics. For more information, see [hdinsight-kafka-java-get-started](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started).
+
+1. You can use the FQDN, IP, or short name (first six letters of the cluster name) of the brokers to pass as `KAFKABROKERS` in the following commands.
+
+ ```
+    # In the previous example:
+    # The IP of worker node 0 is `broker1-ip` and worker node 1 is `broker2-ip`
+    # The short name of worker node 0 is `wn0-hdi-ka` and worker node 1 is `wn1-hdi-ka`
+    # The FQDN of worker node 0 is `wn0-hdi-ka.mvml5coqo4xuzc1nckq1sltcxf.bx.internal.cloudapp.net` and worker node 1 is `wn1-hdi-ka.mvml5coqo4xuzc1nckq1sltcxf.bx.internal.cloudapp.net`
+
+ # Create Kafka Topic
+ java -jar kafka-producer-consumer.jar create <topic_name> $KAFKABROKERS
+ java -jar kafka-producer-consumer.jar create test broker1-ip:9092,broker2-ip:9092
+
+ # Produce Data in Topic
+ java -jar kafka-producer-consumer.jar producer <topic_name> $KAFKABROKERS
+ java -jar kafka-producer-consumer.jar producer test wn0-hdi-ka:9092,wn1-hdi-ka:9092
+
+ # Consume Data from Topic
+ java -jar kafka-producer-consumer.jar consumer <topic_name> $KAFKABROKERS
+ java -jar kafka-producer-consumer.jar consumer test wn0-hdi-ka.mvml5coqo4xuzc1nckq1sltcxf.bx.internal.cloudapp.net:9092,wn1-hdi-ka.mvml5coqo4xuzc1nckq1sltcxf.bx.internal.cloudapp.net:9092
+ ```
+> [!NOTE]
+> It's recommended to add all the broker IPs, FQDNs, or short names in $KAFKABROKERS for fault tolerance.
healthcare-apis Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/disaster-recovery.md
Azure API for FHIR is a fully managed service, based on Fast Healthcare Interope
The DR feature provides a Recovery Point Objective (RPO) of 15 minutes and a Recovery Time Objective (RTO) of 60 minutes.
- ## How to enable DR
+## How to enable DR
To enable the DR feature, create a one-time support ticket. You can choose an Azure paired region or another region where the Azure API for FHIR is supported. The Microsoft support team will enable the DR feature based on the support priority.
By default, Azure API for FHIR offers data protection through backup and restore
It's worth noting that the throughput RU/s must have the same values in the primary and secondary regions.
-[ ![Azure Traffic Manager.](media/disaster-recovery/azure-traffic-manager.png) ](media/disaster-recovery/azure-traffic-manager.png#lightbox)
+[![Diagram that shows Azure Traffic Manager.](media/disaster-recovery/azure-traffic-manager.png)](media/disaster-recovery/azure-traffic-manager.png#lightbox)
### Automatic failover During a primary region outage, the Azure API for FHIR automatically fails over to the secondary region and the same service endpoint is used. The service is expected to resume in one hour or less, and potential data loss is up to 15 minutes' worth of data. Other configuration changes may be required. For more information, see [Configuration changes in DR](#configuration-changes-in-dr).
-[ ![Failover in disaster recovery.](media/disaster-recovery/failover-in-disaster-recovery.png) ](media/disaster-recovery/failover-in-disaster-recovery.png#lightbox)
+[![Diagram that shows failover in disaster recovery.](media/disaster-recovery/failover-in-disaster-recovery.png)](media/disaster-recovery/failover-in-disaster-recovery.png#lightbox)
### Affected region recovery After the affected region recovers, it's automatically available as a secondary region and data replication restarts. You can start the data recovery process or wait until the failback step is completed.
-[ ![Replication in disaster recovery.](media/disaster-recovery/replication-in-disaster-recovery.png) ](media/disaster-recovery/replication-in-disaster-recovery.png#lightbox)
+[![Diagram that shows replication in disaster recovery.](media/disaster-recovery/replication-in-disaster-recovery.png)](media/disaster-recovery/replication-in-disaster-recovery.png#lightbox)
When the compute has failed back to the recovered region and the data hasn't, there may be potential network latencies. The main reason is that the compute and the data are in two different regions. The network latencies should disappear automatically as soon as the data fails back to the recovered region through a manual trigger.
-[ ![Network latency.](media/disaster-recovery/network-latency.png) ](media/disaster-recovery/network-latency.png#lightbox)
-
+[![Diagram that shows network latency.](media/disaster-recovery/network-latency.png)](media/disaster-recovery/network-latency.png#lightbox)
### Manual failback The compute fails back automatically to the recovered region. The data is switched back to the recovered region manually by the Microsoft support team using the script.
-[ ![Failback in disaster recovery.](media/disaster-recovery/failback-in-disaster-recovery.png) ](media/disaster-recovery/failback-in-disaster-recovery.png#lightbox)
+[![Diagram that shows failback in disaster recovery.](media/disaster-recovery/failback-in-disaster-recovery.png)](media/disaster-recovery/failback-in-disaster-recovery.png#lightbox)
## Configuration changes in DR
The export job will be picked up from another region after 10 minutes without an
Ensure that you grant the same permissions to the system identity of the Azure API for FHIR. Also, if the storage account is configured with selected networks, see [How to export FHIR data](../fhir/export-data.md).
-### IoMT FHIR Connector
-
-Any existing connection won't function until the failed region is restored. You can create a new connection once the failover has completed and your FHIR server is accessible. This new connection will continue to function when failback occurs.
-
-> [!NOTE]
-> IoMT Connector is a preview feature and does not provide support for disaster recovery.
- ## How to test DR While not required, you can test the DR feature on a non-production environment. For DR test, only the data will be included and the compute won't be included.
Consider the following steps for DR test.
* (Optional) Share any feedback with the Microsoft support team. - > [!NOTE] > The DR test will double the cost of your test environment during the test. No extra cost will incur after the DR test is completed and the DR feature is disabled.
The disaster recovery feature incurs extra costs because data of the compute and
> [!NOTE] > The DR offering is subject to the [SLA for Azure API for FHIR](https://azure.microsoft.com/pricing/details/health-data-services), 1.0. - ## Next steps In this article, you've learned how DR for Azure API for FHIR works and how to enable it. To learn about Azure API for FHIR's other supported features, see
->[!div class="nextstepaction"]
->[FHIR supported features](fhir-features-supported.md)
+> [!div class="nextstepaction"]
+> [FHIR supported features](fhir-features-supported.md)
FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/export-data.md
You're now ready to export FHIR data to the storage account securely. Note that
> [!IMPORTANT] > The user interface will be updated later to allow you to select the Resource type for Azure API for FHIR and a specific service instance.
-### Allowing specific IP addresses for the Azure storage account in a different region
-
-Select **Networking** of the Azure storage account from the
-portal.
-
-Select **Selected networks**. Under the Firewall section, specify the IP address in the **Address range** box. Add IP ranges to
-allow access from the internet or your on-premises networks. You can
-find the IP address in the table below for the Azure region where the
-Azure API for FHIR is provisioned.
-
-|**Azure Region** |**Public IP Address** |
-|:-|:-|
-| Australia East | 20.53.44.80 |
-| Canada Central | 20.48.192.84 |
-| Central US | 52.182.208.31 |
-| East US | 20.62.128.148 |
-| East US 2 | 20.49.102.228 |
-| East US 2 EUAP | 20.39.26.254 |
-| Germany North | 51.116.51.33 |
-| Germany West Central | 51.116.146.216 |
-| Japan East | 20.191.160.26 |
-| Korea Central | 20.41.69.51 |
-| North Central US | 20.49.114.188 |
-| North Europe | 52.146.131.52 |
-| South Africa North | 102.133.220.197 |
-| South Central US | 13.73.254.220 |
-| Southeast Asia | 23.98.108.42 |
-| Switzerland North | 51.107.60.95 |
-| UK South | 51.104.30.170 |
-| UK West | 51.137.164.94 |
-| West Central US | 52.150.156.44 |
-| West Europe | 20.61.98.66 |
-| West US 2 | 40.64.135.77 |
-
-> [!NOTE]
-> The above steps are similar to the configuration steps described in the document **Converting your data to FHIR**. For more information, see [Configure the ACR firewall](../../healthcare-apis/fhir/configure-settings-convert-data.md#step-6-configure-the-azure-container-registry-firewall-for-secure-access).
-
-### Allowing specific IP addresses for the Azure storage account in the same region
-
-The configuration process is the same as above except a specific IP
-address range in CIDR format is used instead, 100.64.0.0/10. The reason why the IP address range, which includes 100.64.0.0 ΓÇô 100.127.255.255, must be specified is because the actual IP address used by the service varies, but will be within the range, for each $export request.
-
-> [!NOTE]
-> It is possible that a private IP address within the range of 10.0.2.0/24 may be used instead. In that case, the $export operation will not succeed. You can retry the $export request, but there is no guarantee that an IP address within the range of 100.64.0.0/10 will be used next time. That's the known networking behavior by design. The alternative is to configure the storage account in a different region.
## Next steps
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
In this document we go over the three steps used in configuring import settings
## Step 1: Enable managed identity on the FHIR service
-The first step is to enable system wide managed identity on the service. This will be used to grant FHIR service an access to the storage account.
+The first step is to enable system wide managed identity on the service. This will be used to grant FHIR service access to the storage account.
For more information about managed identities in Azure, see [About managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). Follow the steps to enable managed identity on FHIR service
Do following changes to JSON:
After you've completed this final step, you're ready to perform **Incremental mode** import using $import.
-Note : You can also use the **Deploy to Azure** button to open custom Resource Manager template that updates the configuration for $import.
+Note that you can also use the **Deploy to Azure** button to open a custom Resource Manager template that updates the configuration for $import.
[![Deploy to Azure Button.](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Ffhir-import%2Fazuredeploy.json)
After you've executed above command, in the **Firewall** section under **Resourc
You're now ready to securely import FHIR data from the storage account. The storage account is on selected networks and isn't publicly accessible. To securely access the files, you can enable [private endpoints](../../storage/common/storage-private-endpoints.md) for the storage account.
-### Option 2: Allowing specific IP addresses to access the Azure storage account
-#### Option 2.1: Access storage account provisioned in different Azure region than FHIR service
-
-In the Azure portal, go to the ADLS Gen2 account and select the **Networking** blade.
-
-Select **Enabled from selected virtual networks and IP addresses**. Under the Firewall section, specify the IP address in the **Address range** box. Add IP ranges to allow access from the internet or your on-premises networks. You can find the IP address in the table below for the Azure region where the FHIR service is provisioned.
-
-|**Azure Region** |**Public IP Address** |
-|:-|:-|
-| Australia East | 20.53.44.80 |
-| Canada Central | 20.48.192.84 |
-| Central US | 52.182.208.31 |
-| East US | 20.62.128.148 |
-| East US 2 | 20.49.102.228 |
-| East US 2 EUAP | 20.39.26.254 |
-| Germany North | 51.116.51.33 |
-| Germany West Central | 51.116.146.216 |
-| Japan East | 20.191.160.26 |
-| Korea Central | 20.41.69.51 |
-| North Central US | 20.49.114.188 |
-| North Europe | 52.146.131.52 |
-| South Africa North | 102.133.220.197 |
-| South Central US | 13.73.254.220 |
-| Southeast Asia | 23.98.108.42 |
-| Switzerland North | 51.107.60.95 |
-| UK South | 51.104.30.170 |
-| UK West | 51.137.164.94 |
-| West Central US | 52.150.156.44 |
-| West Europe | 20.61.98.66 |
-| West US 2 | 40.64.135.77 |
-
-#### Option 2.2: Access storage account provisioned in same Azure region as FHIR service
-
-The configuration process for IP addresses in the same region is just like above except a specific IP address range in Classless Inter-Domain Routing (CIDR) format is used instead (that is, 100.64.0.0/10). The reason why the IP address range (100.64.0.0 ΓÇô 100.127.255.255) must be specified is because an IP address for the FHIR service will be allocated each time an `$import` request is made.
-
-> [!Note]
-> It is possible that a private IP address within the range of 10.0.2.0/24 may be used, but there is no guarantee that the `$import` operation will succeed in such a case. You can retry if the `$import` request fails, but until an IP address within the range of 100.64.0.0/10 is used, the request will not succeed. This network behavior for IP address ranges is by design. The alternative is to configure the storage account in a different region.
-
+### Option 2:
## Next steps
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/smart-on-fhir.md
Below tutorials provide steps to enable SMART on FHIR applications with FHIR Ser
- After registering the application, make note of the applicationId for client application. - Ensure you have access to Azure Subscription of FHIR service, to create resources and add role assignments.
-## SMART on FHIR using AHDS Samples OSS
+## SMART on FHIR Enhanced using Azure Health Data Services Samples
### Step 1 : Set up FHIR SMART user role Follow the steps listed under section [Manage Users: Assign Users to Role](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to this role will be able to access the FHIR Service if their requests comply with the SMART on FHIR implementation Guide, such as request having access token which includes a fhirUser claim and a clinical scopes claim. The access granted to the users in this role will then be limited by the resources associated to their fhirUser compartment and the restrictions in the clinical scopes. ### Step 2 : FHIR server integration with samples
-[Follow the steps](https://aka.ms/azure-health-data-services-smart-on-fhir-sample) under Azure Health Data Service Samples OSS. This will enable integration of FHIR server with other Azure Services (such as APIM, Azure functions and more).
+For integration with the Azure Health Data Services samples, you need to follow the steps in the samples open-source solution.
+
+**[Follow this link](https://aka.ms/azure-health-data-services-smart-on-fhir-sample)** to go to the Azure Health Data Services Samples OSS. The steps listed in the document enable integration of the FHIR server with other Azure services (such as APIM, Azure Functions, and more).
> [!NOTE]
-> Samples are open-source code, and you should review the information and licensing terms on GitHub before using it. They are not part of the Azure Health Data Service and are not supported by Microsoft Support. These samples can be used to demonstrate how Azure Health Data Services and other open-source tools can be used together to demonstrate ONC (g)(10) compliance, using Azure Active Directory as the identity provider workflow.
+> Samples are open-source code, and you should review the information and licensing terms on GitHub before using it. They are not part of the Azure Health Data Service and are not supported by Microsoft Support. These samples can be used to demonstrate how Azure Health Data Services and other open-source tools can be used together to demonstrate [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg) compliance, using Azure Active Directory as the identity provider workflow.
## SMART on FHIR Proxy <details> <summary> Click to expand! </summary> > [!NOTE]
-> This is another option to using "SMART on FHIR using AHDS Samples OSS" mentioned above. SMART on FHIR Proxy option only enables EHR launch sequence.
+> This is an alternative to the "SMART on FHIR Enhanced using Azure Health Data Services Samples" option mentioned above. We suggest that you adopt SMART on FHIR Enhanced; the SMART on FHIR Proxy option is a legacy option.
+> The SMART on FHIR Enhanced version provides more capabilities than the SMART on FHIR Proxy. SMART on FHIR Enhanced can be considered to meet requirements of the [SMART on FHIR Implementation Guide (v 1.0.0)](https://hl7.org/fhir/smart-app-launch/1.0.0/) and the [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg).
+ ### Step 1 : Set admin consent for your client application To use SMART on FHIR, you must first authenticate and authorize the app. The first time you use SMART on FHIR, you also must get administrative consent to let the app access your FHIR resources.
healthcare-apis Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/known-issues.md
Refer to the table below to find details about resolution dates or possible work
|Issue | Date discovered | Workaround | Date resolved | | :- | : | :- | :- |
+|API queries to FHIR service returned Internal Server error in UK south region |August 10th 2023 9:53 am PST|--|August 10th 2023 10:43 am PST|
|FHIR resources are not queryable by custom search parameters even after reindex is successful.| July 2023| Suggested workaround is to create support ticket to update the status of custom search parameters after reindex is successful.|--| |Using [token type](https://www.hl7.org/fhir/search.html#token) fields of length more than 128 characters can result in undesired behavior on `create`, `search`, `update`, and `delete` operations. | August 2022 |- | Resolved, customers impacted with 128 characters issue are notified on resolution. | |The SQL provider causes the `RawResource` column in the database to save incorrectly. This occurs in a few cases when a transient exception occurs that causes the provider to use its retry logic.ΓÇ»|April 2022 |-|May 2022 Resolved [#2571](https://github.com/microsoft/fhir-server/pull/2571) |
internet-peering Walkthrough Communications Services Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-communications-services-partner.md
Previously updated : 07/23/2023 Last updated : 08/09/2023 # Internet peering for Peering Service Voice Services walkthrough
For optimized routing for voice services infrastructure prefixes, you must regis
Ensure that the registered prefixes are announced over the direct interconnects established with your peering. If the same prefix is announced in multiple peering locations, you do NOT have to register the prefix in every single location. A prefix can only be registered with a single peering. When you receive the unique prefix key after validation, this key will be used for the prefix even in locations other than the location of the peering it was registered under.
-1. To begin registration, go to your peering in the Azure portal and select **Registered prefixes**.
+1. To begin registration, go to your peering in the Azure portal and select **Registered prefixes** under **Settings**.
- :::image type="content" source="./media/walkthrough-communications-services-partner/registered-asn.png" alt-text="Screenshot shows how to go to Registered ASNs from the Peering Overview page in the Azure portal.":::
-
-1. Select **Add registered prefix**.
+1. In **Registered prefixes**, select **+ Add registered prefix**.
:::image type="content" source="./media/walkthrough-communications-services-partner/add-registered-prefix.png" alt-text="Screenshot of Registered prefix page in the Azure portal.":::
internet-peering Walkthrough Direct Peering Type Conversions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-direct-peering-type-conversions.md
+
+ Title: Direct peering type conversion request walkthrough
+
+description: Learn how to request a type conversion for a Direct Peering.
++++ Last updated : 08/11/2023++
+# Direct peering type conversion request walkthrough
+
+In this article, you learn how to use the Azure portal to request a type conversion on a direct peering.
+
+## Prerequisites
+
+A direct peering type conversion for a peering connection can only be requested if the following prerequisites apply:
+- The peering must have at least two connections.
+- The redundant connections must be of equal bandwidth.
+- All connections in the peering must be fully provisioned (with the property 'ConnectionState' = 'Active'); that is, none of the connections can be undergoing provisioning, decommissioning, or an internal device migration.
+- The peering must be represented as an Azure resource with a valid subscription. To onboard your peering as a resource, see [Convert a legacy Direct peering to an Azure resource using the Azure portal](howto-legacy-direct-portal.md).
+- Bandwidth updates can't be requested to other connections in the peering during the conversion.
+- No adding or removing of connections can occur during the conversion.
+- Type conversions run during the business hours of the Pacific Time zone.
+- For Voice conversions, the connection session addresses are provided by Microsoft and enabled with BFD (Bidirectional Forwarding Detection). It's expected that the partners set up their configurations accordingly.
+
+## Configure the new type on a Direct Peering
+
+### Convert from PNI to Voice
+A peering with standard PNI(s) or PNI(s) enabled for Azure Peering Service can be converted to Voice PNI(s). This conversion must be made at the peering level, which means all the connections within the peering are converted.
+
+1. Go to the **Configuration** page of your peering.
+
+1. Select the **AS8075 (with Voice)** option and then select **Save**.
+
+ :::image type="content" source="./media/walkthrough-direct-peering-type-conversions/conversion-selection.png" alt-text="Screenshot shows how to change Microsoft network in the Conversions page of the peering in the Azure portal." lightbox="./media/walkthrough-direct-peering-type-conversions/conversion-selection.png":::
+
+### Enable Peering Service on a connection
+
+A standard PNI within a peering can be enabled for Peering Service and can be requested per connection.
+
+You need to be a Peering Service partner to enable Peering Service on a connection. See the [partner requirements page](prerequisites.md) and make sure you have signed the agreement with Microsoft. For questions, reach out to [Azure Peering group](mailto:peeringservice@microsoft.com).
+
+1. Go to the **Connection** page of your peering.
+
+1. Select the ellipsis (...) next to the connection that you want to edit and select **Edit connection**.
+
+ :::image type="content" source="./media/walkthrough-direct-peering-type-conversions/view-connection.png" alt-text="Screenshot shows how to select a connection to edit in the Connections page of a peering in the Azure portal." lightbox="./media/walkthrough-direct-peering-type-conversions/view-connection.png":::
+
+1. In the **Direct Peering Connection** page, select **Enabled** for **Use for Peering Service**, and then select **Save**.
+
+ :::image type="content" source="./media/walkthrough-direct-peering-type-conversions/edit-connection.png" alt-text="Screenshot shows how to edit a connection." lightbox="./media/walkthrough-direct-peering-type-conversions/edit-connection.png":::
+
+Once the request is received, the **Connection State** on each of the connections changes to **TypeChangeRequested**.
+
+## Conversion approval
+
+Your request is reviewed and approved by someone from the internal team.
+
+Connections remain in the **TypeChangeRequested** state until they're approved. After approval, the connections are converted one at a time to ensure that the redundant connections are always up and carrying traffic. The **Connection State** on the connections changes to **TypeChangeInProgress**.
+You can see this state in the **Connections** page.
+
+## Monitor the conversion
+
+When your connection enters the conversion process, its state is labeled as **TypeChangeInProgress**.
+
+You're kept up to date through emails at the following steps:
+
+- Request Received
+- Request Approved
+- Session Address Changes (if any)
+- Conversion complete
+- Peering Azure Resource removal (if any)
+- Request rejected
+- Action required from peering partner
+
+The email notifications are sent to the peer email contact provided during the *Peer Asn* resource creation. You can either reply back to the emails or contact [Azure Peering group](mailto:peeringservice@microsoft.com) if you have questions.
+
+If a conversion to Voice is requested and the connections already have IP addresses provided by Microsoft, set up BFD on your sessions as early as possible to avoid any downtime. The conversion process for Voice waits for both the BGP and BFD sessions to come up before allowing any traffic on the sessions.
+
+If a conversion to Voice is requested and the connections have IP addresses provided by the peering partner, wait for the email notification with the new Microsoft provided IP addresses and configure them on your end along with BFD. Once the BGP and BFD sessions with the new IP addresses come up, traffic is allowed on this session and the session with the old IP addresses will be shut down. There's no downtime in this case.
+
+Once the conversion is completed, its state returns to **Active**.
+
+## FAQ
+
+**Q.** Will there be an interruption to my connection?
+
+**A.** We do our absolute best and take various steps to prevent any interruption to service. These steps include:
+- Guaranteeing that a redundant connection with equivalent bandwidth is up at the time of conversion.
+- Performing any conversions one connection at a time.
+- Only bringing down old connections if it's necessary (in the case of a type conversion while the IP address stays the same).
+- Only performing conversions at times where engineers are online and capable of helping remedy any unlikely issues.
+
+**Q.** Why has my request to convert the type of direct peering been rejected?
+
+**A.** Verify if the peering satisfies all the requirements from the [Prerequisites](#prerequisites) section.
+
+**Q.** Why has my request to enable Peering Service on a connection been rejected?
+
+**A.** To enable Peering Service on a connection, see [partner requirements page](prerequisites.md) and make sure you have signed the agreement with Microsoft. For questions, reach out to the [Azure Peering group](mailto:peeringservice@microsoft.com). Verify if the peering satisfies all the requirements from the [Prerequisites](#prerequisites) section.
+
+**Q.** How long does it take for the conversion to complete?
+
+**A.** For conversions that don't involve any IP address changes, if the expected setup is done by the peering partner, the conversion should be completed within two business days. For conversions involving IP address changes, there's an extra delay in reserving new addresses internally; accounting for the time the peering partner needs to finish their end of the configuration, expect the process to take up to five business days.
+
+**Q.** Is there an impact on traffic for the whole-time conversion happens?
+
+**A.** The conversion process involves several stages, and not all stages have traffic impact. Draining the traffic, configuring new policies pertaining to the type of peering, and allowing the traffic back once BGP and BFD come up are done serially. Combined, these steps usually take about two hours, provided the peering partner completes their end of the configuration. For Voice conversions, ensure that the BFD setup is done on time to minimize downtime. For conversions that involve a change in IP addresses, there's almost zero downtime, since the traffic is seamlessly shifted from the old session to the session with the new addresses, after which the old session is shut down.
+
+**Q.** How do I know which connection to configure the new Microsoft provided IP addresses?
+
+**A.** The email notification lists the connection details with both the old peer provided IP addresses and the corresponding new Microsoft provided IP addresses.
+
+**Q.** Why is my peering stuck with ConnectionState as 'TypeChangeInProgress' or 'ProvisioningFailed' for a long time?
+
+**A.** This state could be due to a configuration or internal error, or the process could be waiting for the peering partner's side of the configuration. We monitor and catch these issues and send you an email notification promptly. If you have further questions, contact the [Azure Peering group](mailto:peeringservice@microsoft.com) for resolution.
+
+**Q.** I have two different peerings, Peering A with standard PNI(s) connections and peering B with Voice connections. I would like to convert the standard PNI peering connections to Voice. What happens to the peering resources in this case?
+
+**A.** Once Peering A is converted from PNIs to Voice, the connections from Peering A are moved to Peering B, and Peering A is deleted. For example, if Peering A with two PNI connections is converted to Voice, and Peering B already has two connections, the process results in Peering B (the Voice peering) having four connections, and the Peering A resource is removed. This is by design so that we maintain only one peering for a given peering provider and type of direct peering at a given location.
+
+**Q.** I have more questions, what is the best way to contact you?
+
+**A.** Contact the [Azure Peering group](mailto:peeringservice@microsoft.com).
internet-peering Walkthrough Monitoring Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-monitoring-telemetry.md
+
+ Title: Monitoring and telemetry walkthrough
+
+description: Learn how to monitor and view telemetry of an Internet peering using the Azure portal.
++++ Last updated : 08/11/2023++
+# Peering Service monitoring and telemetry walkthrough
+
+In this article, as an Internet peering partner (direct or exchange), you learn how to use the Azure portal to view various metrics associated with a direct or exchange peering.
+
+## View received routes
+
+You can view the routes announced to Microsoft in the Azure portal:
+
+1. Go to the peering, and then under **Settings**, select **Received Routes**.
+
+ :::image type="content" source="./media/walkthrough-monitoring-telemetry/peering-received-routes.png" alt-text="Screenshot shows how to view received routes in the Azure portal." lightbox="./media/walkthrough-monitoring-telemetry/peering-received-routes.png":::
+
+## View peering metrics
+
+You can view the following metrics for a peering in the **Connections** page of a peering:
+
+- Session availability
+- Ingress traffic rate
+- Egress traffic rate
+- Flap events count
+- Packet drop rate
++
+## View registered prefix latency
+
+As an Azure Peering Service partner or communication services partner, you can view your registered prefix latency in the **Overview** page of the registered prefix.
++
+## View customer prefix latency
+
+As an Azure Peering Service Exchange Route Server partner, you can view the average latency for all of your customer prefixes in the **Overview** page of the registered ASN.
++
+## View Peering Service metrics
+
+All Peering Service resources display the session availability metric for their Peering Service in the **Overview** page of a Peering Service resource.
+
+- Provider primary peering session availability: indicates the state of the BGP (Border Gateway Protocol) session between the Peering Service provider and Microsoft at the primary peering location.
+
+- Provider backup peering session availability: indicates the state of the BGP session between the Peering Service provider and Microsoft at the backup peering location if there's one selected for the Peering Service.
+
+ :::image type="content" source="./media/walkthrough-monitoring-telemetry/peering-service-session-availability.png" alt-text="Screenshot shows how to view the provider peering session availability for a specific Peering Service in the Azure portal." lightbox="./media/walkthrough-monitoring-telemetry/peering-service-session-availability.png":::
+
+## View peering service prefix metrics
+
+All Peering Service prefix resources display the following metrics for their Peering Service prefix in their Peering Service page.
+
+- Peering service prefix latency: shows the median latencies observed over time for prefixes registered under a Peering Service. Latency for prefixes that are smaller than /24 is shown at the /24 level.
+
+ :::image type="content" source="./media/walkthrough-monitoring-telemetry/peering-service-prefix-latency-telemetry.png" alt-text="Screenshot shows how to view the peering service prefix latency under a specific peering service prefix in the Azure portal." lightbox="./media/walkthrough-monitoring-telemetry/peering-service-prefix-latency-telemetry.png":::
+
+- Peering service prefix events: shows various BGP events like route announcements, withdrawals and routes becoming active on the primary or backup links for each prefix in the **Prefix events** page of the Peering Service prefix.
+
+ :::image type="content" source="./media/walkthrough-monitoring-telemetry/peering-service-prefix-events.png" alt-text="Screenshot shows how to view the prefix events under a specific peering service prefix in the Azure portal." lightbox="./media/walkthrough-monitoring-telemetry/peering-service-prefix-events.png":::
internet-peering Walkthrough Peering Service All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-peering-service-all.md
Previously updated : 07/23/2023 Last updated : 08/09/2023 # Internet peering for Azure Peering Service partner walkthrough
For optimized routing for infrastructure prefixes, you must register them.
Ensure that the registered prefixes are announced over the direct interconnects established with your peering. If the same prefix is announced in multiple peering locations, you do NOT have to register the prefix in every single location. A prefix can only be registered with a single peering. When you receive the unique prefix key after validation, this key will be used for the prefix even in locations other than the location of the peering it was registered under.
-1. To begin registration, go to your peering in the Azure portal and select **Registered prefixes**.
+1. To begin registration, go to your peering in the Azure portal and select **Registered prefixes** under **Settings**.
- :::image type="content" source="./media/walkthrough-peering-service-all/registered-asn.png" alt-text="Screenshot shows how to go to Registered ASNs from the Peering Overview page in the Azure portal.":::
-
-1. Select **Add registered prefix**.
+1. In **Registered prefixes**, select **+ Add registered prefix**.
:::image type="content" source="./media/walkthrough-peering-service-all/add-registered-prefix.png" alt-text="Screenshot of Registered prefix page in the Azure portal.":::
iot-central Howto Use Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-audit-logs.md
Title: Use Azure IoT Central audit logs
-description: Learn how to use audit logs in IoT Central to track changes made by users or programatically in an IoT Central application
+description: Learn how to use audit logs in IoT Central to track changes made by users or programmatically in an IoT Central application
Previously updated : 07/25/2022 Last updated : 08/14/2023
-# Administrator
+# CustomerIntent: As an administrator, I want to be able to track changes made to my IoT Central application so that I can understand who made what changes at what time.
# Use audit logs to track activity in your IoT Central application
This article describes how to use audit logs to track who made what changes at w
- Manage access to the audit log. - Export the audit log records.
-The audit log records information about who made a change, information about the modified entity, the action that made change, and when the change was made. The log tracks changes made through the UI, programatically with the REST API, and through the CLI.
+The audit log records information about who made a change, information about the modified entity, the action that made the change, and when the change was made. The log tracks changes made through the UI, programmatically with the REST API, and through the CLI.
-The log records changes to the following IoT Central entities:
+The log records changes made to the following IoT Central entities:
- [Users](howto-manage-users-roles.md#add-users) - [Roles](howto-manage-users-roles.md#manage-roles)
The log records changes to the following IoT Central entities:
- [Device templates](howto-set-up-template.md) - [Device lifecycle events](howto-export-to-blob-storage.md#device-lifecycle-changes-format)
+> [!NOTE]
+> The audit log doesn't record when users sign in or out of your IoT Central application.
+ The log records changes made by the following types of user: - IoT Central user - the log shows the user's email.
The following screenshot shows the audit log view with the location of the sorti
:::image type="content" source="media/howto-use-audit-logs/audit-log.png" alt-text="Screenshot that shows the audit log. The location of the sort and filter controls is highlighted." lightbox="media/howto-use-audit-logs/audit-log.png":::
+> [!TIP]
+> If you want to monitor the health of your connected devices, use Azure Monitor. To learn more, see [Monitor application health](howto-manage-iot-central-from-portal.md#monitor-application-health).
+ ## Customize the log Select **Column options** to customize the audit log view. You can add and remove columns, reorder the columns, and change the column widths:
iot-develop Iot Device Selection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/iot-device-selection.md
Following is a comparison table of SBCs in alphabetical order. Note this is an i
| Board Name | Price Range (USD) | What is it used for? | Software| Speed | Processor | Memory | Onboard Sensors and Other Features | IO Pins | Video | Radio | Battery Connector? | Operating Voltage | Getting Started Guides | **Alternatives** | | - | - | - | -| - | - | - | - | - | - | - | - | - | - | -|
-| [Raspberry Pi 4, Model B](https://aka.ms/IotDeviceList/RpiModelB) | ~$30 - $80 | Home automation; Robotics; Autonomous vehicles; Control systems; Field science | Raspberry Pi OS, Raspbian, Ubuntu 20.04/21.04, RISC OS, Windows 10 IoT, more | 1.5 GHz CPU, 500 MHz GPU | 64-bit Broadcom BCM2711 (quad-core Cortex-A72), VideoCore VI GPU | 2GB/4GB/8GB LPDDR4 RAM, SD Card (not included) | 2 x USB 3 ports, 1 x MIPI DSI display port, 1 x MIPI CSI camera port, 4-pole stereo audio and composite video port, Power over Ethernet (requires HAT) | 26 x Digital, 4 x PWM | 2 micro-HDMI composite, MPI DSI | WiFi, Bluetooth | √ | 5 V | 1. [Send data to IoT Hub](https://www.hackster.io/jenfoxbot/how-to-send-see-data-from-a-raspberry-pi-to-azure-iot-hub-908924), 2. [Monitor plants with Azure IoT](https://github.com/microsoft/IoT-For-Beginners/tree/main/2-farm/lessons/4-migrate-your-plant-to-the-cloud)| [BeagleBone Black Wireless (~$50 - $60)](https://aka.ms/IotDeviceList/BeagleBoard) |
+| [Raspberry Pi 4, Model B](https://aka.ms/IotDeviceList/RpiModelB) | ~$30 - $80 | Home automation; Robotics; Autonomous vehicles; Control systems; Field science | Raspberry Pi OS, Raspbian, Ubuntu 20.04/21.04, RISC OS, Windows 10 IoT, more | 1.5 GHz CPU, 500 MHz GPU | 64-bit Broadcom BCM2711 (quad-core Cortex-A72), VideoCore VI GPU | 2GB/4GB/8GB LPDDR4 RAM, SD Card (not included) | 2 x USB 3 ports, 1 x MIPI DSI display port, 1 x MIPI CSI camera port, 4-pole stereo audio and composite video port, Power over Ethernet (requires HAT) | 26 x Digital, 4 x PWM | 2 micro-HDMI composite, MPI DSI | WiFi, Bluetooth | √ | 5 V | 1. [Send data to IoT Hub](https://www.hackster.io/jenfoxbot/how-to-send-see-data-from-a-raspberry-pi-to-azure-iot-hub-908924), 2. [Monitor plants with Azure IoT](https://github.com/microsoft/IoT-For-Beginners/tree/main/2-farm/lessons/4-migrate-your-plant-to-the-cloud)| [BeagleBone Black Wireless (~$50 - $60)](https://www.beagleboard.org/boards/beaglebone-black-wireless) |
| [NVIDIA Jetson 2 GB Nano Dev Kit](https://aka.ms/IotDeviceList/NVIDIAJetson) | ~$50 - $100 | AI/ML; Autonomous vehicles | Ubuntu-based JetPack | 1.43 GHz CPU, 921 MHz GPU | 64-bit Nvidia CPU (quad-core Cortex-A57), 128-CUDA-core Maxwell GPU coprocessor | 2GB/4GB LPDDR4 RAM | 472 GFLOPS for AI Perf, 1 x MIPI CSI-2 connector | 28 x Digital, 2 x PWM | HDMI, DP (4 GB only) | Gigabit Ethernet, 802.11ac WiFi | √ | 5 V | [Deepstream integration with Azure IoT Central](https://www.hackster.io/pjdecarlo/nvidia-deepstream-integration-with-azure-iot-central-d9f834) | [BeagleBone AI (~$110 - $120)](https://aka.ms/IotDeviceList/BeagleBoneAI) | | [Raspberry Pi Zero W2](https://aka.ms/IotDeviceList/RpiZeroW) | ~$15 - $20 | Home automation; ML; Vehicle modifications; Field Science | Raspberry Pi OS, Raspbian, Ubuntu 20.04/21.04, RISC OS, Windows 10 IoT, more | 1 GHz CPU, 400 MHz GPU | 64-bit Broadcom BCM2837 (quad-core Cortez-A53), VideoCore IV GPU | 512 MB LPDDR2 RAM, SD Card (not included) | 1 x CSI-2 Camera connector | 26 x Digital, 4 x PWM | Mini-HDMI | WiFi, Bluetooth | - | 5 V | [Send and visualize data to Azure IoT Hub](https://www.hackster.io/jenfoxbot/how-to-send-see-data-from-a-raspberry-pi-to-azure-iot-hub-908924) | [Onion Omega2+ (~$10 - $15)](https://onion.io/Omega2/) | | [DFRobot LattePanda](https://aka.ms/IotDeviceList/DFRobotLattePanda) | ~$100 - $160 | Home automation; Hyperscale cloud connectivity; AI/ML | Windows 10, Ubuntu 16.04, OpenSuSE 15 | 1.92 GHz | 64-bit Intel Z8350 (quad-core x86-64), Atmega32u4 coprocessor | 2 GB DDR3L RAM, 32 GB eMMC/4GB DDR3L RAM, 64-GB eMMC | - | 6 x Digital (20 x via Atmega32u4), 6 x PWM, 12 x ADC | HDMI, MIPI DSI | WiFi, Bluetooth | √ | 5 V | 1. [Getting started with Microsoft Azure](https://www.hackster.io/45361/dfrobot-lattepanda-with-microsoft-azure-getting-started-0ae8fb), 2. [Home Monitoring System with Azure](https://www.hackster.io/JiongShi/home-monitoring-system-based-on-lattepanda-zigbee-and-azure-ce4e03)| [Seeed Odyssey X86J4125800 (~$210 - $230)](https://aka.ms/IotDeviceList/SeeedOdyssey) |
Other helpful resources include:
- [Overview of Azure IoT device types](./concepts-iot-device-types.md) - [Overview of Azure IoT Device SDKs](./about-iot-sdks.md) - [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](./quickstart-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c)-- [AzureRTOS ThreadX Documentation](/azure/rtos/threadx/)
+- [AzureRTOS ThreadX Documentation](/azure/rtos/threadx/)
iot-edge Module Deployment Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/module-deployment-monitoring.md
For example, you have a deployment with a target condition `tags.environment = '
If a deployment has no target condition, then it's applied to no devices.
-Use any Boolean condition on device twin tags, device twin reported properties, or deviceId to select the target devices. If you want to use a condition with tags, you need to add a `"tags":{}` section in the device twin under the same level as properties. [Learn more about tags in a device twin](../iot-hub/iot-hub-devguide-device-twins.md).
+Use any Boolean condition on device twin tags, device twin reported properties, or deviceId to select the target devices. If you want to use a condition with tags, you need to add a `"tags":{}` section in the device twin under the same level as properties. For more information about tags in a device twin, see [Understand and use device twins in IoT Hub](../iot-hub/iot-hub-devguide-device-twins.md). For more information about query operations, see [IoT Hub query language operators and IS_DEFINED function](../iot-hub/iot-hub-devguide-query-language.md#operators).
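
For reference, the following is a minimal sketch of a device twin that carries a `tags` section at the same level as `properties`. The tag names and values are illustrative and reuse the examples from the target conditions below.

```json
{
  "deviceId": "linuxprod1",
  "tags": {
    "environment": "prod",
    "location": "westus",
    "operator": "John"
  },
  "properties": {
    "desired": {},
    "reported": {
      "devicemodel": "4000x"
    }
  }
}
```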
Examples of target conditions:
Examples of target conditions:
* tags.environment = 'prod' OR tags.location = 'westus' * tags.operator = 'John' AND tags.environment = 'prod' AND NOT deviceId = 'linuxprod1' * properties.reported.devicemodel = '4000x'
+* IS_DEFINED(tags.remote)
+* NOT IS_DEFINED(tags.location.building)
+* tags.environment != null
* [none] Consider these constraints when you construct a target condition:
kinect-dk About Azure Kinect Dk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/about-azure-kinect-dk.md
For additional details and information, visit [Speech Service documentation](../
## Vision services
-The following [Azure Cognitive Vision Services](https://azure.microsoft.com/services/cognitive-services/directory/vision/) provide Azure services that can identify and analyze content within images and videos.
+The following [Azure Cognitive Vision Services](https://azure.microsoft.com/products/ai-services?activetab=pivot:visiontab) provide Azure services that can identify and analyze content within images and videos.
-- [Computer vision](https://azure.microsoft.com/services/cognitive-services/computer-vision/)-- [Face](https://azure.microsoft.com/services/cognitive-services/face/)
+- [Computer vision](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-computer-vision/)
+- [Face](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-face-recognition/)
- [Video indexer](https://azure.microsoft.com/services/media-services/video-indexer/) - [Content moderator](https://azure.microsoft.com/services/cognitive-services/content-moderator/) - [Custom vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/)
kinect-dk Windows Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/windows-comparison.md
The Azure Kinect SDK feature set is different from Kinect for Windows v2, as det
| Body Tracking | BodyFrame | Body Tracking SDK | | | BodyIndexFrame | Body Tracking SDK | | Coordinate Mapping|CoordinateMapper| [Sensor SDK - Image transformations](use-image-transformation.md) |
-|Face Tracking | FaceFrame | [Azure AI
+|Face Tracking | FaceFrame | [Azure AI
| Speech Recognition | N/A | [Azure AI Speech](https://azure.microsoft.com/services/cognitive-services/directory/speech/) | ## Next steps
load-balancer Backend Pool Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/backend-pool-management.md
az vm create \
* You can configure IP based and NIC based backend pools for the same load balancer. You can't create a single backend pool that mixes backend addresses targeted by NIC and IP addresses within the same pool. * A virtual machine in the same virtual network as an internal load balancer can't access the frontend of the ILB and its backend VMs simultaneously. * Internet routing preference IPs are currently not supported with IP based backend pools. Any Internet routing preference IPs in IP based backend pools will be billed and routed via the default Microsoft global network.
+ * If backend pools are constantly changing (due to the constant addition or removal of backend resources), reset signals may be sent back to the source from the backend resource. As a workaround, you can use retries.
>[!Important] > When a backend pool is configured by IP address, it will behave as a Basic Load Balancer with default outbound enabled. For secure by default configuration and applications with demanding outbound needs, configure the backend pool by NIC.
load-balancer Manage Inbound Nat Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-inbound-nat-rules.md
$lb | Set-AzLoadBalancer
[!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
-In this example, you create an inbound NAT rule to forward port **500** to backend port **443**.
+In this example, you will create an inbound NAT rule to forward port **500** to backend port **443**. You will then attach the inbound NAT rule to a VM's NIC.
Use [az network lb inbound-nat-rule create](/cli/azure/network/lb/inbound-nat-rule#az-network-lb-inbound-nat-rule-create) to create the NAT rule. Use [az network nic ip-config inbound-nat-rule add](/cli/azure/network/nic/ip-config/inbound-nat-rule) to add the inbound NAT rule to a VM's NIC + ```azurecli az network lb inbound-nat-rule create \ --backend-port 443 \
Use [az network nic ip-config inbound-nat-rule add](/cli/azure/network/nic/ip-co
--ip-config-name MyIpConfig \ --inbound-nat-rule MyNatRule \ --lb-name myLoadBalancer+ ```
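
Because the excerpt above only shows fragments of the two commands, here's a rough end-to-end sketch. The resource names (`myResourceGroup`, `myLoadBalancer`, `myNicVM1`, `MyNatRule`, `MyIpConfig`) are placeholders, and you should confirm the full parameter list against the linked command reference.

```azurecli
# Create the inbound NAT rule that forwards frontend port 500 to backend port 443.
az network lb inbound-nat-rule create \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name MyNatRule \
    --protocol Tcp \
    --frontend-port 500 \
    --backend-port 443

# Attach the inbound NAT rule to the VM's NIC IP configuration.
az network nic ip-config inbound-nat-rule add \
    --resource-group myResourceGroup \
    --nic-name myNicVM1 \
    --ip-config-name MyIpConfig \
    --inbound-nat-rule MyNatRule \
    --lb-name myLoadBalancer
```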
logic-apps Create Monitoring Tracking Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-monitoring-tracking-queries.md
To find or filter results based on specific properties or values, you can create
## Next steps
-* [AS2 tracking schemas](../logic-apps/logic-apps-track-integration-account-as2-tracking-schemas.md)
-* [X12 tracking schemas](../logic-apps/logic-apps-track-integration-account-x12-tracking-schema.md)
-* [Custom tracking schemas](../logic-apps/logic-apps-track-integration-account-custom-tracking-schema.md)
+* [Tracking schemas for monitoring B2B messages](tracking-schemas-as2-x12-custom.md)
logic-apps Logic Apps Track Integration Account As2 Tracking Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-track-integration-account-as2-tracking-schemas.md
- Title: AS2 tracking schemas for B2B messages
-description: Create tracking schemas to monitor AS2 messages in Azure Logic Apps.
----- Previously updated : 08/20/2022--
-# Create schemas for tracking AS2 messages in Azure Logic Apps
--
-To help you monitor success, errors, and message properties for business-to-business (B2B) transactions, you can use these AS2 tracking schemas in your integration account:
-
-* AS2 message tracking schema
-* AS2 Message Disposition Notification (MDN) tracking schema
-
-## AS2 message tracking schema
-
-```json
-{
- "agreementProperties": {
- "senderPartnerName": "",
- "receiverPartnerName": "",
- "as2To": "",
- "as2From": "",
- "agreementName": ""
- },
- "messageProperties": {
- "direction": "",
- "messageId": "",
- "dispositionType": "",
- "fileName": "",
- "isMessageFailed": "",
- "isMessageSigned": "",
- "isMessageEncrypted": "",
- "isMessageCompressed": "",
- "correlationMessageId": "",
- "incomingHeaders": {},
- "outgoingHeaders": {},
- "isNrrEnabled": "",
- "isMdnExpected": "",
- "mdnType": ""
- }
-}
-```
-
-| Property | Required | Type | Description |
-|-|-||-|
-| senderPartnerName | No | String | AS2 message sender's partner name |
-| receiverPartnerName | No | String | AS2 message receiver's partner name |
-| as2To | Yes | String | AS2 message receiverΓÇÖs name from the headers of the AS2 message |
-| as2From | Yes | String | AS2 message senderΓÇÖs name from the headers of the AS2 message |
-| agreementName | No | String | Name of the AS2 agreement to which the messages are resolved |
-| direction | Yes | String | Direction of the message flow, which is either `receive` or `send` |
-| messageId | No | String | AS2 message ID from the headers of the AS2 message |
-| dispositionType | No | String | Message Disposition Notification (MDN) disposition type value |
-| fileName | No | String | File name from the header of the AS2 message |
-| isMessageFailed | Yes | Boolean | Whether the AS2 message failed |
-| isMessageSigned | Yes | Boolean | Whether the AS2 message was signed |
-| isMessageEncrypted | Yes | Boolean | Whether the AS2 message was encrypted |
-| isMessageCompressed | Yes | Boolean | Whether the AS2 message was compressed |
-| correlationMessageId | No | String | AS2 message ID, to correlate messages with MDNs |
-| incomingHeaders | No | Dictionary of JToken | Incoming AS2 message header details |
-| outgoingHeaders | No | Dictionary of JToken | Outgoing AS2 message header details |
-| isNrrEnabled | Yes | Boolean | Whether to use default value if the value isn't known |
-| isMdnExpected | Yes | Boolean | Whether to use the default value if the value isn't known |
-| mdnType | Yes | Enum | Allowed values: `NotConfigured`, `Sync`, and `Async` |
-|||||
-
-## AS2 MDN tracking schema
-
-```json
-{
- "agreementProperties": {
- "senderPartnerName": "",
- "receiverPartnerName": "",
- "as2To": "",
- "as2From": "",
- "agreementName": ""
- },
- "messageProperties": {
- "direction": "",
- "messageId": "",
- "originalMessageId": "",
- "dispositionType": "",
- "isMessageFailed": "",
- "isMessageSigned": "",
- "isNrrEnabled": "",
- "statusCode": "",
- "micVerificationStatus": "",
- "correlationMessageId": "",
- "incomingHeaders": {
- },
- "outgoingHeaders": {
- }
- }
-}
-```
-
-| Property | Required | Type | Description |
-|-|-||-|
-| senderPartnerName | No | String | AS2 message sender's partner name |
-| receiverPartnerName | No | String | AS2 message receiver's partner name |
-| as2To | Yes | String | Partner name who receives the AS2 message |
-| as2From | Yes | String | Partner name who sends the AS2 message |
-| agreementName | No | String | Name of the AS2 agreement to which the messages are resolved |
-| direction | Yes | String | Direction of the message flow, which is either `receive` or `send` |
-| messageId | No | String | AS2 message ID |
-| originalMessageId | No | String | AS2 original message ID |
-| dispositionType | No | String | MDN disposition type value |
-| isMessageFailed | Yes | Boolean | Whether the AS2 message failed |
-| isMessageSigned | Yes | Boolean | Whether the AS2 message was signed |
-| isNrrEnabled | Yes | Boolean | Whether to use the default value if the value isn't known |
-| statusCode | Yes | Enum | Allowed values: `Accepted`, `Rejected`, and `AcceptedWithErrors` |
-| micVerificationStatus | Yes | Enum | Allowed values:`NotApplicable`, `Succeeded`, and `Failed` |
-| correlationMessageId | No | String | Correlation ID, which is the ID for the original message that has the MDN configured |
-| incomingHeaders | No | Dictionary of JToken | Incoming message header details |
-| outgoingHeaders | No | Dictionary of JToken | Outgoing message header details |
-|||||
-
-## B2B protocol tracking schemas
-
-For information about B2B protocol tracking schemas, see:
-
-* [X12 tracking schemas](logic-apps-track-integration-account-x12-tracking-schema.md)
-* [B2B custom tracking schemas](logic-apps-track-integration-account-custom-tracking-schema.md)
-
-## Next steps
-
-* [Monitor B2B messages with Azure Monitor logs](../logic-apps/monitor-b2b-messages-log-analytics.md)
logic-apps Logic Apps Track Integration Account Custom Tracking Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-track-integration-account-custom-tracking-schema.md
- Title: Custom tracking schemas for B2B messages
-description: Create custom tracking schemas to monitor B2B messages in Azure Logic Apps.
----- Previously updated : 08/20/2022--
-# Create custom tracking schemas that monitor end-to-end workflows in Azure Logic A
-
-Azure Logic Apps has built-in tracking that you can enable for parts of your workflow. However, you can set up custom tracking that logs events from the beginning to the end of workflows, for example, workflows that include a logic app, BizTalk Server, SQL Server, or any other layer. This article provides custom code that you can use in the layers outside of your logic app.
-
-## Custom tracking schema
-
-```json
-{
- "sourceType": "",
- "source": {
- "workflow": {
- "systemId": ""
- },
- "runInstance": {
- "runId": ""
- },
- "operation": {
- "operationName": "",
- "repeatItemScopeName": "",
- "repeatItemIndex": ,
- "trackingId": "",
- "correlationId": "",
- "clientRequestId": ""
- }
- },
- "events": [
- {
- "eventLevel": "",
- "eventTime": "",
- "recordType": "",
- "record": {}
- }
- ]
-}
-```
-
-| Property | Required | Type | Description |
-|-|-||-|
-| sourceType | Yes | String | Type of the run source with these permitted values: `Microsoft.Logic/workflows`, `custom` |
-| source | Yes | String or JToken | If the source type is `Microsoft.Logic/workflows`, the source information needs to follow this schema. If the source type is `custom`, the schema is a JToken. |
-| systemId | Yes | String | Logic app system ID |
-| runId | Yes | String | Logic app run ID |
-| operationName | Yes | String | Name of the operation, for example, action or trigger |
-| repeatItemScopeName | Yes | String | Repeat item name if the action is inside a `foreach`or `until` loop |
-| repeatItemIndex | Yes | Integer | Indicates that the action is inside a `foreach` or `until` loop and is the repeated item index number. |
-| trackingId | No | String | Tracking ID to correlate the messages |
-| correlationId | No | String | Correlation ID to correlate the messages |
-| clientRequestId | No | String | Client can populate this property to correlate messages |
-| eventLevel | Yes | String | Level of the event |
-| eventTime | Yes | DateTime | Time of the event in UTC format: *YYYY-MM-DDTHH:MM:SS.00000Z* |
-| recordType | Yes | String | Type of the track record with this permitted value only: `custom` |
-| record | Yes | JToken | Custom record type with JToken format only |
-|||||
-
-## B2B protocol tracking schemas
-
-For information about B2B protocol tracking schemas, see:
-
-* [AS2 tracking schemas](../logic-apps/logic-apps-track-integration-account-as2-tracking-schemas.md)
-* [X12 tracking schemas](logic-apps-track-integration-account-x12-tracking-schema.md)
-
-## Next steps
-
-* Learn more about [monitoring B2B messages with Azure Monitor logs](../logic-apps/monitor-b2b-messages-log-analytics.md)
logic-apps Tracking Schemas As2 X12 Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/tracking-schemas-as2-x12-custom.md
+
+ Title: B2B message monitoring using tracking schemas
+description: Create tracking schemas to monitor B2B messages such as AS2, X12, or custom in Azure Logic Apps.
+
+ms.suite: integration
++ Last updated : 08/10/2023++
+# Tracking schemas for monitoring B2B messages in Azure Logic Apps
++
+Azure Logic Apps includes built-in tracking that you can enable for parts of your workflow. To help you monitor the successful delivery or receipt, errors, and properties for business-to-business (B2B) messages, you can create and use AS2, X12, and custom tracking schemas in your integration account. This reference guide describes the syntax and attributes for these tracking schemas.
+
+## AS2
+
+- [AS2 message tracking schema](#as2-message)
+- [AS2 Message Disposition Notification (MDN) tracking schema](#as2-mdn)
+
+<a name="as2-message"></a>
+
+### AS2 message tracking schema
+
+The following syntax describes the tracking schema for an AS2 message:
+
+```json
+{
+ "agreementProperties": {
+ "senderPartnerName": "",
+ "receiverPartnerName": "",
+ "as2To": "",
+ "as2From": "",
+ "agreementName": ""
+ },
+ "messageProperties": {
+ "direction": "",
+ "messageId": "",
+ "dispositionType": "",
+ "fileName": "",
+ "isMessageFailed": "",
+ "isMessageSigned": "",
+ "isMessageEncrypted": "",
+ "isMessageCompressed": "",
+ "correlationMessageId": "",
+ "incomingHeaders": {},
+ "outgoingHeaders": {},
+ "isNrrEnabled": "",
+ "isMdnExpected": "",
+ "mdnType": ""
+ }
+}
+```
+
+The following table describes the attributes in a tracking schema for an AS2 message:
+
+| Property | Required | Type | Description |
+|-|-||-|
+| senderPartnerName | No | String | AS2 message sender's partner name |
+| receiverPartnerName | No | String | AS2 message receiver's partner name |
+| as2To | Yes | String | AS2 message receiver's name from the headers of the AS2 message |
+| as2From | Yes | String | AS2 message sender's name from the headers of the AS2 message |
+| agreementName | No | String | Name of the AS2 agreement to which the messages are resolved |
+| direction | Yes | String | Direction of the message flow, which is either `receive` or `send` |
+| messageId | No | String | AS2 message ID from the headers of the AS2 message |
+| dispositionType | No | String | Message Disposition Notification (MDN) disposition type value |
+| fileName | No | String | File name from the header of the AS2 message |
+| isMessageFailed | Yes | Boolean | Whether the AS2 message failed |
+| isMessageSigned | Yes | Boolean | Whether the AS2 message was signed |
+| isMessageEncrypted | Yes | Boolean | Whether the AS2 message was encrypted |
+| isMessageCompressed | Yes | Boolean | Whether the AS2 message was compressed |
+| correlationMessageId | No | String | AS2 message ID, to correlate messages with MDNs |
+| incomingHeaders | No | Dictionary of JToken | Incoming AS2 message header details |
+| outgoingHeaders | No | Dictionary of JToken | Outgoing AS2 message header details |
+| isNrrEnabled | Yes | Boolean | Whether to use the default value if the value isn't known |
+| isMdnExpected | Yes | Boolean | Whether to use the default value if the value isn't known |
+| mdnType | Yes | Enum | Allowed values: `NotConfigured`, `Sync`, and `Async` |
+
+<a name="as2-mdn"></a>
+
+### AS2 MDN tracking schema
+
+The following syntax describes the tracking schema for an AS2 MDN message:
+
+```json
+{
+ "agreementProperties": {
+ "senderPartnerName": "",
+ "receiverPartnerName": "",
+ "as2To": "",
+ "as2From": "",
+ "agreementName": ""
+ },
+ "messageProperties": {
+ "direction": "",
+ "messageId": "",
+ "originalMessageId": "",
+ "dispositionType": "",
+ "isMessageFailed": "",
+ "isMessageSigned": "",
+ "isNrrEnabled": "",
+ "statusCode": "",
+ "micVerificationStatus": "",
+ "correlationMessageId": "",
+ "incomingHeaders": {
+ },
+ "outgoingHeaders": {
+ }
+ }
+}
+```
+
+The following table describes the attributes in a tracking schema for an AS2 MDN message:
+
+| Property | Required | Type | Description |
+|-|-||-|
+| senderPartnerName | No | String | AS2 message sender's partner name |
+| receiverPartnerName | No | String | AS2 message receiver's partner name |
+| as2To | Yes | String | Partner name who receives the AS2 message |
+| as2From | Yes | String | Partner name who sends the AS2 message |
+| agreementName | No | String | Name of the AS2 agreement to which the messages are resolved |
+| direction | Yes | String | Direction of the message flow, which is either `receive` or `send` |
+| messageId | No | String | AS2 message ID |
+| originalMessageId | No | String | AS2 original message ID |
+| dispositionType | No | String | MDN disposition type value |
+| isMessageFailed | Yes | Boolean | Whether the AS2 message failed |
+| isMessageSigned | Yes | Boolean | Whether the AS2 message was signed |
+| isNrrEnabled | Yes | Boolean | Whether to use the default value if the value isn't known |
+| statusCode | Yes | Enum | Allowed values: `Accepted`, `Rejected`, and `AcceptedWithErrors` |
+| micVerificationStatus | Yes | Enum | Allowed values: `NotApplicable`, `Succeeded`, and `Failed` |
+| correlationMessageId | No | String | Correlation ID, which is the ID for the original message that has the MDN configured |
+| incomingHeaders | No | Dictionary of JToken | Incoming message header details |
+| outgoingHeaders | No | Dictionary of JToken | Outgoing message header details |
+
+## X12
+
+- [X12 transaction set tracking schema](#x12-transaction-set)
+- [X12 transaction set acknowledgment tracking schema](#x12-transaction-set-acknowledgment)
+- [X12 interchange tracking schema](#x12-interchange)
+- [X12 interchange acknowledgment tracking schema](#x12-interchange-acknowledgment)
+- [X12 functional group tracking schema](#x12-functional-group)
+- [X12 functional group acknowledgment tracking schema](#x12-functional-group-acknowledgment)
+
+<a name="x12-transaction-set"></a>
+
+### X12 transaction set tracking schema
+
+The following syntax describes the tracking schema for an X12 transaction set:
+
+```json
+{
+ "agreementProperties": {
+ "senderPartnerName": "",
+ "receiverPartnerName": "",
+ "senderQualifier": "",
+ "senderIdentifier": "",
+ "receiverQualifier": "",
+ "receiverIdentifier": "",
+ "agreementName": ""
+ },
+ "messageProperties": {
+ "direction": "",
+ "interchangeControlNumber": "",
+ "functionalGroupControlNumber": "",
+ "transactionSetControlNumber": "",
+ "CorrelationMessageId": "",
+ "messageType": "",
+ "isMessageFailed": "",
+ "isTechnicalAcknowledgmentExpected": "",
+ "isFunctionalAcknowledgmentExpected": "",
+ "needAk2LoopForValidMessages": "",
+ "segmentsCount": ""
+ }
+}
+```
+
+The following table describes the attributes in a tracking schema for an X12 transaction set:
+
+| Property | Required | Type | Description |
+|-|-||-|
+| senderPartnerName | No | String | X12 message sender's partner name |
+| receiverPartnerName | No | String | X12 message receiver's partner name |
+| senderQualifier | Yes | String | Send partner qualifier |
+| senderIdentifier | Yes | String | Send partner identifier |
+| receiverQualifier | Yes | String | Receive partner qualifier |
+| receiverIdentifier | Yes | String | Receive partner identifier |
+| agreementName | No | String | Name of the X12 agreement to which the messages are resolved |
+| direction | Yes | Enum | Direction of the message flow, which is either `receive` or `send` |
+| interchangeControlNumber | No | String | Interchange control number |
+| functionalGroupControlNumber | No | String | Functional control number |
+| transactionSetControlNumber | No | String | Transaction set control number |
+| CorrelationMessageId | No | String | Correlation message ID, which is a combination of {AgreementName}{*GroupControlNumber*}{TransactionSetControlNumber} |
+| messageType | No | String | Transaction set or document type |
+| isMessageFailed | Yes | Boolean | Whether the X12 message failed |
+| isTechnicalAcknowledgmentExpected | Yes | Boolean | Whether the technical acknowledgment is configured in the X12 agreement |
+| isFunctionalAcknowledgmentExpected | Yes | Boolean | Whether the functional acknowledgment is configured in the X12 agreement |
+| needAk2LoopForValidMessages | Yes | Boolean | Whether the AK2 loop is required for a valid message |
+| segmentsCount | No | Integer | Number of segments in the X12 transaction set |
+
+<a name="x12-transaction-set-acknowledgment"></a>
+
+### X12 transaction set acknowledgment tracking schema
+
+The following syntax describes the tracking schema for an X12 transaction set acknowledgment:
+
+```json
+{
+ "agreementProperties": {
+ "senderPartnerName": "",
+ "receiverPartnerName": "",
+ "senderQualifier": "",
+ "senderIdentifier": "",
+ "receiverQualifier": "",
+ "receiverIdentifier": "",
+ "agreementName": ""
+ },
+ "messageProperties": {
+ "direction": "",
+ "interchangeControlNumber": "",
+ "functionalGroupControlNumber": "",
+ "isaSegment": "",
+ "gsSegment": "",
+ "respondingfunctionalGroupControlNumber": "",
+ "respondingFunctionalGroupId": "",
+ "respondingtransactionSetControlNumber": "",
+ "respondingTransactionSetId": "",
+ "statusCode": "",
+ "processingStatus": "",
+ "CorrelationMessageId": "",
+ "isMessageFailed": "",
+ "ak2Segment": "",
+ "ak3Segment": "",
+ "ak5Segment": ""
+ }
+}
+```
+
+The following table describes the attributes in a tracking schema for an X12 transaction set acknowledgment:
+
+| Property | Required | Type | Description |
+|-|-||-|
+| senderPartnerName | No | String | X12 message sender's partner name |
+| receiverPartnerName | No | String | X12 message receiver's partner name |
+| senderQualifier | Yes | String | Send partner qualifier |
+| senderIdentifier | Yes | String | Send partner identifier |
+| receiverQualifier | Yes | String | Receive partner qualifier |
+| receiverIdentifier | Yes | String | Receive partner identifier |
+| agreementName | No | String | Name of the X12 agreement to which the messages are resolved |
+| direction | Yes | Enum | Direction of the message flow, which is either `receive` or `send` |
+| interchangeControlNumber | No | String | Interchange control number of the functional acknowledgment. The value populates only for the send side where functional acknowledgment is received for the messages sent to partner. |
+| functionalGroupControlNumber | No | String | Functional group control number of the functional acknowledgment. The value populates only for the send side where functional acknowledgment is received for the messages sent to partner |
+| isaSegment | No | String | ISA segment of the message. The value populates only for the send side where functional acknowledgment is received for the messages sent to partner |
+| gsSegment | No | String | GS segment of the message. The value populates only for the send side where functional acknowledgment is received for the messages sent to partner |
+| respondingfunctionalGroupControlNumber | No | String | The responding interchange control number |
+| respondingFunctionalGroupId | No | String | The responding functional group ID, which maps to AK101 in the acknowledgment |
+| respondingtransactionSetControlNumber | No | String | The responding transaction set control number |
+| respondingTransactionSetId | No | String | The responding transaction set ID, which maps to AK201 in the acknowledgment |
+| statusCode | Yes | Enum | Transaction set acknowledgment status code with these permitted values: `Accepted`, `Rejected`, and `AcceptedWithErrors` |
+| processingStatus | Yes | Enum | Processing status of the acknowledgment with these permitted values: `Received`, `Generated`, and `Sent` |
+| CorrelationMessageId | No | String | Correlation message ID, which is a combination of {AgreementName}{*GroupControlNumber*}{TransactionSetControlNumber} |
+| isMessageFailed | Yes | Boolean | Whether the X12 message failed |
+| ak2Segment | No | String | Acknowledgment for a transaction set within the received functional group |
+| ak3Segment | No | String | Reports errors in a data segment |
+| ak5Segment | No | String | Reports whether the transaction set identified in the AK2 segment is accepted or rejected, and why |
+
+<a name="x12-interchange"></a>
+
+### X12 interchange tracking schema
+
+The following syntax describes the tracking schema for an X12 interchange:
+
+```json
+{
+ "agreementProperties": {
+ "senderPartnerName": "",
+ "receiverPartnerName": "",
+ "senderQualifier": "",
+ "senderIdentifier": "",
+ "receiverQualifier": "",
+ "receiverIdentifier": "",
+ "agreementName": ""
+ },
+ "messageProperties": {
+ "direction": "",
+ "interchangeControlNumber": "",
+ "isaSegment": "",
+ "isTechnicalAcknowledgmentExpected": "",
+ "isMessageFailed": "",
+ "isa09": "",
+ "isa10": "",
+ "isa11": "",
+ "isa12": "",
+ "isa14": "",
+ "isa15": "",
+ "isa16": ""
+ }
+}
+```
+
+The following table describes the attributes in a tracking schema for an X12 interchange:
+
+| Property | Required | Type | Description |
+|-|-||-|
+| senderPartnerName | No | String | X12 message sender's partner name |
+| receiverPartnerName | No | String | X12 message receiver's partner name |
+| senderQualifier | Yes | String | Send partner qualifier |
+| senderIdentifier | Yes | String | Send partner identifier |
+| receiverQualifier | Yes | String | Receive partner qualifier |
+| receiverIdentifier | Yes | String | Receive partner identifier |
+| agreementName | No | String | Name of the X12 agreement to which the messages are resolved |
+| direction | Yes | Enum | Direction of the message flow, which is either `receive` or `send` |
+| interchangeControlNumber | No | String | Interchange control number |
+| isaSegment | No | String | Message ISA segment |
+| isTechnicalAcknowledgmentExpected | Yes | Boolean | Whether the technical acknowledgment is configured in the X12 agreement |
+| isMessageFailed | Yes | Boolean | Whether the X12 message failed |
+| isa09 | No | String | X12 document interchange date |
+| isa10 | No | String | X12 document interchange time |
+| isa11 | No | String | X12 interchange control standards identifier |
+| isa12 | No | String | X12 interchange control version number |
+| isa14 | No | String | X12 acknowledgment is requested |
+| isa15 | No | String | Indicator for test or production |
+| isa16 | No | String | Element separator |
+
+<a name="x12-interchange-acknowledgment"></a>
+
+### X12 interchange acknowledgment tracking schema
+
+The following syntax describes the tracking schema for an X12 interchange acknowledgment:
+
+```json
+{
+ "agreementProperties": {
+ "senderPartnerName": "",
+ "receiverPartnerName": "",
+ "senderQualifier": "",
+ "senderIdentifier": "",
+ "receiverQualifier": "",
+ "receiverIdentifier": "",
+ "agreementName": ""
+ },
+ "messageProperties": {
+ "direction": "",
+ "interchangeControlNumber": "",
+ "isaSegment": "",
+ "respondingInterchangeControlNumber": "",
+ "isMessageFailed": "",
+ "statusCode": "",
+ "processingStatus": "",
+ "ta102": "",
+ "ta103": "",
+ "ta105": ""
+ }
+}
+```
+
+The following table describes the attributes in a tracking schema for an X12 interchange acknowledgment:
+
+| Property | Required | Type | Description |
+|-|-||-|
+| senderPartnerName | No | String | X12 message sender's partner name |
+| receiverPartnerName | No | String | X12 message receiver's partner name |
+| senderQualifier | Yes | String | Send partner qualifier |
+| senderIdentifier | Yes | String | Send partner identifier |
+| receiverQualifier | Yes | String | Receive partner qualifier |
+| receiverIdentifier | Yes | String | Receive partner identifier |
+| agreementName | No | String | Name of the X12 agreement to which the messages are resolved |
+| direction | Yes | Enum | Direction of the message flow, which is either `receive` or `send` |
+| interchangeControlNumber | No | String | Interchange control number of the technical acknowledgment that's received from partners |
+| isaSegment | No | String | ISA segment for the technical acknowledgment that's received from partners |
+| respondingInterchangeControlNumber | No | String | Interchange control number for the technical acknowledgment that's received from partners |
+| isMessageFailed | Yes | Boolean | Whether the X12 message failed |
+| statusCode | Yes | Enum | Interchange acknowledgment status code with these permitted values: `Accepted`, `Rejected`, and `AcceptedWithErrors` |
+| processingStatus | Yes | Enum | Acknowledgment status with these permitted values: `Received`, `Generated`, and `Sent` |
+| ta102 | No | String | Interchange date |
+| ta103 | No | String | Interchange time |
+| ta105 | No | String | Interchange note code |
+
+<a name="x12-functional-group"></a>
+
+### X12 functional group tracking schema
+
+The following syntax describes the tracking schema for an X12 functional group:
+
+```json
+{
+ "agreementProperties": {
+ "senderPartnerName": "",
+ "receiverPartnerName": "",
+ "senderQualifier": "",
+ "senderIdentifier": "",
+ "receiverQualifier": "",
+ "receiverIdentifier": "",
+ "agreementName": ""
+ },
+ "messageProperties": {
+ "direction": "",
+ "interchangeControlNumber": "",
+ "functionalGroupControlNumber": "",
+ "gsSegment": "",
+ "isTechnicalAcknowledgmentExpected": "",
+ "isFunctionalAcknowledgmentExpected": "",
+ "isMessageFailed": "",
+ "gs01": "",
+ "gs02": "",
+ "gs03": "",
+ "gs04": "",
+ "gs05": "",
+ "gs07": "",
+ "gs08": ""
+ }
+}
+```
+
+The following table describes the attributes in a tracking schema for an X12 functional group:
+
+| Property | Required | Type | Description |
+|-|-||-|
+| senderPartnerName | No | String | X12 message sender's partner name |
+| receiverPartnerName | No | String | X12 message receiver's partner name |
+| senderQualifier | Yes | String | Send partner qualifier |
+| senderIdentifier | Yes | String | Send partner identifier |
+| receiverQualifier | Yes | String | Receive partner qualifier |
+| receiverIdentifier | Yes | String | Receive partner identifier |
+| agreementName | No | String | The name of the X12 agreement to which the messages are resolved |
+| direction | Yes | Enum | Direction of the message flow, either receive or send |
+| interchangeControlNumber | No | String | Interchange control number |
+| functionalGroupControlNumber | No | String | Functional control number |
+| gsSegment | No | String | Message GS segment |
+| isTechnicalAcknowledgmentExpected | Yes | Boolean | Whether the technical acknowledgment is configured in the X12 agreement |
+| isFunctionalAcknowledgmentExpected | Yes | Boolean | Whether the functional acknowledgment is configured in the X12 agreement |
+| isMessageFailed | Yes | Boolean | Whether the X12 message failed |
+| gs01 | No | String | Functional identifier code |
+| gs02 | No | String | Application sender's code |
+| gs03 | No | String | Application receiver's code |
+| gs04 | No | String | Functional group date |
+| gs05 | No | String | Functional group time |
+| gs07 | No | String | Responsible agency code |
+| gs08 | No | String | Identifier code for the version, release, or industry |
+
+<a name="x12-functional-group-acknowledgment"></a>
+
+### X12 functional group acknowledgment tracking schema
+
+The following syntax describes the tracking schema for an X12 functional group acknowledgment:
+
+```json
+{
+ "agreementProperties": {
+ "senderPartnerName": "",
+ "receiverPartnerName": "",
+ "senderQualifier": "",
+ "senderIdentifier": "",
+ "receiverQualifier": "",
+ "receiverIdentifier": "",
+ "agreementName": ""
+ },
+ "messageProperties": {
+ "direction": "",
+ "interchangeControlNumber": "",
+ "functionalGroupControlNumber": "",
+ "isaSegment": "",
+ "gsSegment": "",
+ "respondingfunctionalGroupControlNumber": "",
+ "respondingFunctionalGroupId": "",
+ "isMessageFailed": "",
+ "statusCode": "",
+ "processingStatus": "",
+ "ak903": "",
+ "ak904": "",
+ "ak9Segment": ""
+ }
+}
+```
+
+The following table describes the attributes in a tracking schema for an X12 functional group acknowledgment:
+
+| Property | Required | Type | Description |
+|-|-||-|
+| senderPartnerName | No | String | X12 message sender's partner name |
+| receiverPartnerName | No | String | X12 message receiver's partner name |
+| senderQualifier | Yes | String | Send partner qualifier |
+| senderIdentifier | Yes | String | Send partner identifier |
+| receiverQualifier | Yes | String | Receive partner qualifier |
+| receiverIdentifier | Yes | String | Receive partner identifier |
+| agreementName | No | String | Name of the X12 agreement to which the messages are resolved |
+| direction | Yes | Enum | Direction of the message flow, which is either `receive` or `send` |
+| interchangeControlNumber | No | String | Interchange control number, which populates for the send side when a technical acknowledgment is received from partners |
+| functionalGroupControlNumber | No | String | Functional group control number of the technical acknowledgment, which populates for the send side when a technical acknowledgment is received from partners |
+| isaSegment | No | String | Same as interchange control number, but populated only in specific cases |
+| gsSegment | No | String | Same as functional group control number, but populated only in specific cases |
+| respondingfunctionalGroupControlNumber | No | String | Control number of the original functional group |
+| respondingFunctionalGroupId | No | String | Maps to AK101 in the acknowledgment functional group ID |
+| isMessageFailed | Yes | Boolean | Whether the X12 message failed |
+| statusCode | Yes | Enum | Acknowledgment status code with these permitted values: `Accepted`, `Rejected`, and `AcceptedWithErrors` |
+| processingStatus | Yes | Enum | Processing status of the acknowledgment with these permitted values: `Received`, `Generated`, and `Sent` |
+| ak903 | No | String | Number of transaction sets received |
+| ak904 | No | String | Number of transaction sets accepted in the identified functional group |
+| ak9Segment | No | String | Whether the functional group identified in the AK1 segment is accepted or rejected, and why |
+
+<a name="custom"></a>
+
+## Custom
+
+You can set up custom tracking that logs events from the start to the end of your logic app workflow. For example, you can log events from layers that include your workflow, SQL Server, BizTalk Server, or any other layer. The following section provides custom tracking schema code that you can use in the layers outside your workflow.
+
+```json
+{
+ "sourceType": "",
+ "source": {
+ "workflow": {
+ "systemId": ""
+ },
+ "runInstance": {
+ "runId": ""
+ },
+ "operation": {
+ "operationName": "",
+ "repeatItemScopeName": "",
+ "repeatItemIndex": ,
+ "trackingId": "",
+ "correlationId": "",
+ "clientRequestId": ""
+ }
+ },
+ "events": [
+ {
+ "eventLevel": "",
+ "eventTime": "",
+ "recordType": "",
+ "record": {}
+ }
+ ]
+}
+```
+
+The following table describes the attributes in a custom tracking schema:
+
+| Property | Required | Type | Description |
+|-|-||-|
+| sourceType | Yes | String | Type of the run source with these permitted values: `Microsoft.Logic/workflows`, `custom` |
+| source | Yes | String or JToken | If the source type is `Microsoft.Logic/workflows`, the source information needs to follow this schema. If the source type is `custom`, the schema is a JToken. |
+| systemId | Yes | String | Logic app system ID |
+| runId | Yes | String | Logic app run ID |
+| operationName | Yes | String | Name of the operation, for example, action or trigger |
+| repeatItemScopeName | Yes | String | Repeat item name if the action is inside a `foreach` or `until` loop |
+| repeatItemIndex | Yes | Integer | Indicates that the action is inside a `foreach` or `until` loop and is the repeated item index number. |
+| trackingId | No | String | Tracking ID to correlate the messages |
+| correlationId | No | String | Correlation ID to correlate the messages |
+| clientRequestId | No | String | Client can populate this property to correlate messages |
+| eventLevel | Yes | String | Level of the event |
+| eventTime | Yes | DateTime | Time of the event in UTC format: *YYYY-MM-DDTHH:MM:SS.00000Z* |
+| recordType | Yes | String | Type of the track record with this permitted value only: `custom` |
+| record | Yes | JToken | Custom record type with JToken format only |
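+
+To make the shape concrete, the following is a hypothetical filled-in event that follows the schema above; every value is illustrative only.
+
+```json
+{
+  "sourceType": "Microsoft.Logic/workflows",
+  "source": {
+    "workflow": {
+      "systemId": "<workflow-system-id>"
+    },
+    "runInstance": {
+      "runId": "08585691964766033410"
+    },
+    "operation": {
+      "operationName": "Send_order_to_partner",
+      "repeatItemScopeName": "For_each_order",
+      "repeatItemIndex": 2,
+      "trackingId": "order-12345",
+      "correlationId": "7c9a2f1e-0000-0000-0000-000000000000",
+      "clientRequestId": "b8f4c2d6-0000-0000-0000-000000000000"
+    }
+  },
+  "events": [
+    {
+      "eventLevel": "Informational",
+      "eventTime": "2023-08-10T22:13:09.00000Z",
+      "recordType": "custom",
+      "record": {
+        "status": "OrderReceived"
+      }
+    }
+  ]
+}
+```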
+
+## Next steps
+
+* [Monitor B2B messages with Azure Monitor logs](../logic-apps/monitor-b2b-messages-log-analytics.md)
machine-learning Azure Machine Learning Ci Image Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-ci-image-release-notes.md
Azure Machine Learning checks and validates any machine learning packages that m
Main updates provided with each image version are described in the following sections.
+## June 30, 2023
+Version: `23.06.30`
+
+Main changes:
+
+- `Azure Machine Learning SDK` to version `1.51.0`
+- Purged vulnerable packages
+- Fixed `libtinfo` error
+- Fixed 'conda command not found' error
+
+Main environment specific updates:
+
+- `tensorflow` updated to `2.11.1` in `azureml_py38_PT_TF`
+- `azure-keyvault-keys` updated to `4.8.0` in `azureml_py38`
+ ## April 7, 2023 Version: `23.04.07`
machine-learning Concept Enterprise Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-enterprise-security.md
You don't have to pick one or the other. For example, you can use a managed virt
Azure Machine Learning uses a variety of compute resources and data stores on the Azure platform. To learn more about how each of these supports data encryption at rest and in transit, see [Data encryption with Azure Machine Learning](concept-data-encryption.md).
-## Data exfiltration prevention (preview)
+## Data exfiltration prevention
Azure Machine Learning has several inbound and outbound network dependencies. Some of these dependencies can expose a data exfiltration risk by malicious agents within your organization. These risks are associated with the outbound requirements to Azure Storage, Azure Front Door, and Azure Monitor. For recommendations on mitigating this risk, see the [Azure Machine Learning data exfiltration prevention](how-to-prevent-data-loss-exfiltration.md) article.
machine-learning How To Collect Production Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-collect-production-data.md
First, you'll need to add custom logging code to your scoring script (`score.py`
> [!NOTE] > Currently, only pandas DataFrames can be logged with the `collect()` API. If the data is not in a DataFrame when passed to `collect()`, it will not be logged to storage and an error will be reported.
-The following code is an example of a full scoring script (`score.py`) that uses the custom logging Python SDK:
+The following code is an example of a full scoring script (`score.py`) that uses the custom logging Python SDK. In this example, a third `Collector` called `inputs_outputs_collector` logs a joined DataFrame of the `model_inputs` and the `model_outputs`. This joined DataFrame enables additional monitoring signals, such as feature attribution drift. If you don't need those monitoring signals, you can remove this `Collector`.
```python import pandas as pd
import json
from azureml.ai.monitoring import Collector def init():
- global inputs_collector, outputs_collector
+ global inputs_collector, outputs_collector, inputs_outputs_collector
# instantiate collectors with appropriate names, make sure align with deployment spec inputs_collector = Collector(name='model_inputs')
data_collector:
enabled: 'True' model_outputs: enabled: 'True'
+ model_inputs_outputs:
+ enabled: 'True'
```
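
If it helps to see the overall shape without the full article script, here's a minimal sketch of how the three collectors might be wired together in `init()` and `run()`. The payload handling and the model call are placeholders, and the sketch assumes the `collect()` call returns a correlation context that later calls accept; check the complete example in the article for the exact API usage.

```python
import pandas as pd
from azureml.ai.monitoring import Collector

def init():
    global inputs_collector, outputs_collector, inputs_outputs_collector
    # Collector names must match the collections enabled in the deployment YAML's data_collector section.
    inputs_collector = Collector(name="model_inputs")
    outputs_collector = Collector(name="model_outputs")
    inputs_outputs_collector = Collector(name="model_inputs_outputs")

def run(data):
    input_df = pd.DataFrame(data)                 # placeholder: convert the request payload to a DataFrame
    context = inputs_collector.collect(input_df)  # assumed: collect() returns a correlation context

    output_df = input_df.copy()                   # placeholder for the real model call
    outputs_collector.collect(output_df, context)

    # The joined inputs/outputs DataFrame enables signals such as feature attribution drift.
    inputs_outputs_collector.collect(input_df.join(output_df, lsuffix="_input"), context)
    return output_df.to_dict()
```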
-The following code is an example of a comprehensive deployment YAML for a managed online endpoint deployment. You should update the deployment YAML according to your scenario.
+The following code is an example of a comprehensive deployment YAML for a managed online endpoint deployment. You should update the deployment YAML according to your scenario. For more examples of how to format your deployment YAML for inference data logging, see the [data collector examples in the azureml-examples repository](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/data-collector).
```yml $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
data_collector:
enabled: 'True' model_outputs: enabled: 'True'
+ model_inputs_outputs:
+ enabled: 'True'
``` Optionally, you can adjust the following additional parameters for your `data_collector`:
After enabling data collection, production inference data will be logged to your
To learn how to monitor the performance of your models with the collected production inference data, see the following articles:
+- [What is Azure Machine Learning model monitoring?](concept-model-monitoring.md)
+- [Monitor performance of models deployed to production](how-to-monitor-model-performance.md)
- [What are Azure Machine Learning endpoints?](concept-endpoints.md)
machine-learning How To Devops Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-devops-machine-learning.md
The task has four inputs: `Service Connection`, `Azure Resource Group Name`, `Az
# We are saving the name of the Azure ML job submitted in the previous step to a variable and it will be used as an input to the AzureML Job Wait task azureml_job_name_from_submit_job: $[ dependencies.SubmitAzureMLJob.outputs['submit_azureml_job_task.AZUREML_JOB_NAME'] ] steps:
- - task: AzureMLJobWaitTask@0
+ - task: AzureMLJobWaitTask@1
inputs: serviceConnection: $(service-connection) resourceGroupName: $(resource-group)
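
Pieced together from the fragments above, the step might look roughly like the following pipeline YAML. Only `serviceConnection` and `resourceGroupName` appear in this excerpt; the remaining input names are truncated here, so confirm them against the task reference before using this sketch.

```yml
steps:
  - task: AzureMLJobWaitTask@1
    inputs:
      serviceConnection: $(service-connection)
      resourceGroupName: $(resource-group)
      # Add the workspace and job name inputs listed above; their exact key
      # names are truncated in this excerpt, so check the task reference.
```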
machine-learning How To Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-export-delete-data.md
Previously updated : 02/09/2023 Last updated : 08/11/2023 monikerRange: 'azureml-api-2 || azureml-api-1' - # Export or delete your Machine Learning service workspace data
-In Azure Machine Learning, you can export or delete your workspace data using either the portal graphical interface or the Python SDK. This article describes both options.
+In Azure Machine Learning, you can export or delete your workspace data with either the portal graphical interface or the Python SDK. This article describes both options.
[!INCLUDE [GDPR-related guidance](../../includes/gdpr-dsr-and-stp-note.md)]
In Azure Machine Learning, you can export or delete your workspace data using ei
## Control your workspace data
-In-product data stored by Azure Machine Learning is available for export and deletion. You can export and delete data with Azure Machine Learning studio, the CLI, and the SDK. Additionally, you can access telemetry data through the Azure Privacy portal.
+The in-product data that Azure Machine Learning stores is available for export and deletion. You can export and delete data with Azure Machine Learning studio, the CLI, and the SDK. Additionally, you can access telemetry data through the Azure Privacy portal.
In Azure Machine Learning, personal data consists of user information in job history documents.
+An Azure workspace relies on a **resource group** to hold the related resources for an Azure solution. When you create a workspace, you can use an existing resource group or create a new one. To learn more about Azure resource groups, see [Manage Azure resource groups](../azure-resource-manager/management/manage-resource-groups-portal.md).
+ ## Delete high-level resources using the portal When you create a workspace, Azure creates several resources within the resource group:
When you create a workspace, Azure creates several resources within the resource
- An Applications Insights instance - A key vault
-To delete these resources, selecting them from the list and choose **Delete**:
+To delete these resources, select them from the list, and choose **Delete**:
> [!IMPORTANT] > If the resource is configured for soft delete, the data won't actually delete unless you optionally select to delete the resource permanently. For more information, see the following articles:
To delete these resources, selecting them from the list and choose **Delete**:
> * [Azure log analytics workspace](../azure-monitor/logs/delete-workspace.md). > * [Azure Key Vault soft-delete](../key-vault/general/soft-delete-overview.md). +
+A confirmation dialog box opens, where you can confirm your choices.
-Job history documents, which may contain personal user information, are stored in the storage account in blob storage, in `/azureml` subfolders. You can download and delete the data from the portal.
+Job history documents might contain personal user information. These documents are stored in the storage account in blob storage, in `/azureml` subfolders. You can download and delete the data from the portal.
## Export and delete machine learning resources using Azure Machine Learning studio
-Azure Machine Learning studio provides a unified view of your machine learning resources - for example, notebooks, data assets, models, and jobs. Azure Machine Learning studio emphasizes preservation of a record of your data and experiments. You can delete computational resources such as pipelines and compute resources with the browser. For these resources, navigate to the resource in question and choose **Delete**.
+Azure Machine Learning studio provides a unified view of your machine learning resources - for example, notebooks, data assets, models, and jobs. Azure Machine Learning studio emphasizes preservation of a record of your data and experiments. You can delete computational resources - pipelines and compute resources - right in the browser. For these resources, navigate to the resource in question, and choose **Delete**.
You can unregister data assets and archive jobs, but these operations don't delete the data. To entirely remove the data, you must delete data assets and job data at the storage level. Storage-level deletion happens in the portal, as described earlier. Azure Machine Learning studio can handle individual deletions; deleting a job deletes that job's data.
Azure Machine Learning studio can handle training artifact downloads from experi
To download a registered model, navigate to the **Model** and choose **Download**. :::moniker range="azureml-api-1" ## Export and delete resources using the Python SDK
The following machine learning resources can be deleted using the Python SDK:
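As a rough illustration of the kinds of deletions the v1 SDK supports, the sketch below removes a model registration, a data asset registration, a compute target, and finally the workspace itself. The resource names (`my-model`, `my-dataset`, `cpu-cluster`) are placeholders, not names from this article.

```python
# A minimal sketch using the v1 SDK (azureml-core); replace the placeholder names.
from azureml.core import Workspace, Model, Dataset
from azureml.core.compute import ComputeTarget

ws = Workspace.from_config()  # reads config.json for the workspace

# Delete a registered model.
Model(ws, name="my-model").delete()

# Unregister all versions of a data asset (the underlying files stay in storage).
Dataset.get_by_name(ws, name="my-dataset").unregister_all_versions()

# Delete a compute target.
ComputeTarget(workspace=ws, name="cpu-cluster").delete()

# Delete the entire workspace, optionally removing its dependent resources.
ws.delete(delete_dependent_resources=True, no_wait=False)
```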
## Next steps
-Learn more about [Managing a workspace](how-to-manage-workspace.md).
+Learn more about [Managing a workspace](how-to-manage-workspace.md).
machine-learning How To Label Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-label-data.md
Previously updated : 02/10/2023 Last updated : 08/11/2023
After your project administrator creates an Azure Machine Learning [image data l
## Prerequisites * A [Microsoft account](https://account.microsoft.com/account), or an Azure Active Directory account, for the organization and project.
-* Contributor-level access to the workspace containing the labeling project.
+* Contributor-level access to the workspace that contains the labeling project.
## Sign in to the studio
After your project administrator creates an Azure Machine Learning [image data l
1. Select the subscription and the workspace containing the labeling project. Your project administrator has this information.
-1. You may see multiple sections on the left, depending on your access level. If you do, select **Data labeling** on the left-hand side to find the project.
+1. You may notice multiple sections on the left, depending on your access level. If you do, select **Data labeling** on the left-hand side to find the project.
## Understand the labeling task In the data labeling project table, select the **Label data** link for your project.
-You see instructions, specific to your project. They explain the type of data involved, how you should make your decisions, and other relevant information. Read the information, and select **Tasks** at the top of the page. You can also select **Start labeling** at the bottom of the page.
+You'll see instructions specific to your project. They explain the type of data involved, how you should make your decisions, and other relevant information. Read the information, and select **Tasks** at the top of the page. You can also select **Start labeling** at the bottom of the page.
## Selecting a label
Machine learning algorithms may be triggered during your labeling. If your proje
* Images
- * After some amount of data is labeled, you might see **Tasks clustered** at the top of your screen, next to the project name. Images are grouped together to present similar images on the same page. If so, switch to one of the multiple image views to take advantage of the grouping.
+ * After some amount of data is labeled, you might notice **Tasks clustered** at the top of your screen, next to the project name. Images are grouped together to present similar images on the same page. If you notice this, switch to one of the multiple image views to take advantage of the grouping.
- * Later on, you might see **Tasks prelabeled** next to the project name. Items appear with a suggested label produced by a machine learning classification model. No machine learning model has 100% accuracy. While we only use data for which the model is confident, these data values might still have incorrect pre-labels. When you see labels, correct any wrong labels before you submit the page.
+ * Later on, you might notice **Tasks prelabeled** next to the project name. Items appear with a suggested label produced by a machine learning classification model. No machine learning model has 100% accuracy. While we only use data for which the model has confidence, these data values might still have incorrect prelabels. When you notice labels, correct any wrong labels before you submit the page.
- * For object identification models, you may see bounding boxes and labels already present. Correct all mistakes with them before you submit the page.
+ * For object identification models, you may notice bounding boxes and labels already present. Correct all mistakes with them before you submit the page.
- * For segmentation models, you may see polygons and labels already present. Correct all mistakes with them before you submit the page.
+ * For segmentation models, you may notice polygons and labels already present. Correct all mistakes with them before you submit the page.
* Text * You may eventually see **Tasks prelabeled** next to the project name. Items appear with a suggested label that a machine learning classification model produces. No machine learning model has 100% accuracy. While we only use data for which the model is confident, these data values might still be incorrectly prelabeled. When you see labels, correct any wrong labels before submitting the page.
-Early in a labeling project, the machine learning model may only have enough accuracy to pre-label a small image subset. Once these images are labeled, the labeling project will return to manual labeling to gather more data for the next model training round. Over time, the model will become more confident about a higher proportion of images. Later in the project, its confidence results in more pre-label tasks.
+Early in a labeling project, the machine learning model may only have enough accuracy to prelabel a small image subset. Once these images are labeled, the labeling project will return to manual labeling to gather more data for the next model training round. Over time, the model will become more confident about a higher proportion of images. Later in the project, its confidence results in more prelabel tasks.
-When there are no more pre-labeled tasks, you stop confirming or correcting labels, and go back to manual item tagging.
+When there are no more prelabeled tasks, you stop confirming or correcting labels, and go back to manual item tagging.
-## <a name="image-tasks"></a> Image tasks
+## Image tasks
For image-classification tasks, you can choose to view multiple images simultaneously. Use the icons above the image area to select the layout.
While you label the medical images with the same tools as any other images, you
Assign a single tag to the entire image for an "Image Classification Multi-Class" project type. To review the directions at any time, go to the **Instructions** page, and select **View detailed instructions**.
-If you realize that you made a mistake after you assign a tag to an image, you can fix it. Select the "**X**" on the label displayed below the image, to clear the tag. You can also select the image and choose another class. The newly selected value replaces the previously applied tag.
+If you realize that you made a mistake after you assign a tag to an image, you can fix it. Select the "**X**" on the label displayed below the image to clear the tag. You can also select the image and choose another class. The newly selected value replaces the previously applied tag.
## Tag images for multi-label classification
Azure will only enable the **Submit** button after you apply at least one tag to
## Tag images and specify bounding boxes for object detection
-If your project is of type "Object Identification (Bounding Boxes)," you specify one or more bounding boxes in the image, and apply a tag to each box. Images can have multiple bounding boxes, each with a single tag. Use **View detailed instructions** to determine if your project uses multiple bounding boxes.
+If your project is of type "Object Identification (Bounding Boxes)," specify one or more bounding boxes in the image, and apply a tag to each box. Images can have multiple bounding boxes, each with a single tag. Use **View detailed instructions** to determine if your project uses multiple bounding boxes.
1. Select a tag for the bounding box you plan to create. 1. Select the **Rectangular box** tool ![Rectangular box tool](./media/how-to-label-data/rectangular-box-tool.png), or select "R."
By default, you can edit existing bounding boxes. The **Lock/unlock regions** to
Use the **Regions manipulation** tool ![This is the regions manipulation tool icon - four arrows pointing outward from the center, up, right, down, and left.](./media/how-to-label-data/regions-tool.png), or "M", to adjust an existing bounding box. Drag the edges or corners to adjust the shape. Select in the interior if you want to drag the whole bounding box. If you can't edit a region, you probably toggled the **Lock/unlock regions** tool.
-Use the **Template-based box** tool ![Template-box tool](./media/how-to-label-data/template-box-tool.png), or "T", to create multiple bounding boxes of the same size. If the image has no bounding boxes, and you activate template-based boxes, the tool produces 50-by-50-pixel boxes. If you create a bounding box, and then activate template-based boxes, the size of any new bounding boxes matches the size of the last box that you created. Template-based boxes can be resized after placement. Resizing a template-based box only resizes that particular box.
+Use the **Template-based box** tool ![Template-box tool](./media/how-to-label-data/template-box-tool.png), or "T", to create multiple bounding boxes of the same size. If the image has no bounding boxes, and you activate template-based boxes, the tool produces 50-by-50-pixel boxes. If you create a bounding box, and then activate template-based boxes, the size of any new bounding boxes matches the size of the last box that you created. You can resize template-based boxes after placement. Resizing a template-based box only resizes that particular box.
To delete *all* bounding boxes in the current image, select the **Delete all regions** tool ![Delete regions tool](./media/how-to-label-data/delete-regions-tool.png).
After you create the polygons for an image, select **Submit** to save your work,
## Label text
-When tagging text, use the toolbar to:
+When you tag text, use the toolbar to:
* Increase or decrease the text size * Change the font * Skip labeling this item and move to the next item
-If you realize that you made a mistake after you assign a tag, you can fix it. Select the "**X**" on the label that's displayed below the text to clear the tag.
+If you notice that you made a mistake after you assign a tag, you can fix it. Select the "**X**" on the label that's displayed below the text to clear the tag.
There are three text project types:
Once you tag all the items in an entry, select **Submit** to move to the next en
When you submit a page of tagged data, Azure assigns new unlabeled data to you from a work queue. If there's no more unlabeled data available, a new message says so, along with a link to the portal home page.
-When you finish labeling, select your image inside a circle in the upper-right corner of the studio, and then select **sign-out**. If you don't sign out, eventually Azure will "time you out" and assign your data to another labeler.
+When you finish labeling, select your profile image in the upper-right corner of the studio, and then select **sign-out**. If you don't sign out, Azure times you out and assigns your data to another labeler.
## Next steps
-* Learn to [train image classification models in Azure](./tutorial-train-deploy-notebook.md)
+* Learn to [train image classification models in Azure](./tutorial-train-deploy-notebook.md)
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
You manage the Azure Machine Learning compute quota on your subscription separat
## Request quota increases
-To raise the limit or VM quota above the default limit, [open an online customer support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest/) at no charge.
-
-You can't raise limits above the maximum values shown in the preceding tables. If there's no maximum limit, you can't adjust the limit for the resource.
-
-When you're requesting a quota increase, select the service that you have in mind. For example, select Machine Learning Service, Container Instances, or Storage. For Azure Machine Learning endpoint, you can select the **Request Quota** button while viewing the quota in the preceding steps.
-
-1. Scroll to **Machine Learning Service: Virtual Machine Quota**.
-
- :::image type="content" source="./media/how-to-manage-quotas/virtual-machine-quota.png" lightbox="./media/how-to-manage-quotas/virtual-machine-quota.png" alt-text="Screenshot of the VM quota details.":::
-
-2. Under **Additional Details** specify the request details with the number of additional vCPUs required to run your Machine Learning Endpoint.
-
- :::image type="content" source="./media/how-to-manage-quotas/vm-quota-request-additional-info.png" lightbox="./media/how-to-manage-quotas/vm-quota-request-additional-info.png" alt-text="Screenshot of the VM quota additional details.":::
-
-> [!NOTE]
-> [Free trial subscriptions](https://azure.microsoft.com/offers/ms-azr-0044p) are not eligible for limit or quota increases. If you have a free trial subscription, you can upgrade to a [pay-as-you-go](https://azure.microsoft.com/offers/ms-azr-0003p/) subscription. For more information, see [Upgrade Azure free trial to pay-as-you-go](../cost-management-billing/manage/upgrade-azure-subscription.md) and [Azure free account FAQ](https://azure.microsoft.com/free/free-account-faq).
+To raise the limit for the Azure Machine Learning VM quota above the default limit, you can request a quota increase from the **Usage + quotas** view shown earlier, or submit a quota increase request from Azure Machine Learning studio.
### Endpoint quota increases
-When requesting the quota increase, provide the following information:
+To raise endpoint quota, [open an online customer support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest/). When requesting a quota increase, provide the following information:
-1. When opening the support request, select __Machine Learning Service: Endpoint Limits__ as the __Quota type__.
+1. When opening the support request, select __Service and subscription limits (quotas)__ as the __Issue type__.
+2. Select the subscription of your choice.
+3. Select __Machine Learning Service: Endpoint Limits__ as the __Quota type__.
1. On the __Additional details__ tab, select __Enter details__ and then provide the quota you'd like to increase and the new value, the reason for the quota increase request, and __location(s)__ where you need the quota increase. Finally, select __Save and continue__ to continue. :::image type="content" source="./media/how-to-manage-quotas/quota-details.png" lightbox="./media/how-to-manage-quotas/quota-details.png" alt-text="Screenshot of the endpoint quota details form.":::
machine-learning How To Monitor Model Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-model-performance.md
created_monitor = poller.result()
-## Set up model monitoring for models deployed outside of Azure Machine Learning
+## Set up model monitoring by bringing your own production data to Azure Machine Learning
-You can also set up model monitoring for models deployed to Azure Machine Learning batch endpoints or deployed outside of Azure Machine Learning. To monitor these models, you must meet the following requirements:
+You can also set up model monitoring for models deployed to Azure Machine Learning batch endpoints or deployed outside of Azure Machine Learning. If you have production data but no deployment, you can use the data to perform continuous model monitoring. To monitor these models, you must meet the following requirements:
* You have a way to collect production inference data from models deployed in production. * You can register the collected production inference data as an Azure Machine Learning data asset, and ensure continuous updates of the data.
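To illustrate the second requirement, the following minimal sketch registers collected production inference data as a `uri_folder` data asset with the Python SDK v2 (`azure-ai-ml`). The subscription, resource group, workspace, and storage path values are placeholders; the asset name matches the `azureml:myproduction_inference_data:1` example used later in this article.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Register the folder that holds the collected production inference data.
production_data = Data(
    name="myproduction_inference_data",
    version="1",
    type=AssetTypes.URI_FOLDER,
    path="azureml://datastores/workspaceblobstore/paths/production-inference/",
    description="Collected production inference data for model monitoring.",
)
ml_client.data.create_or_update(production_data)
```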
You can also set up model monitoring for models deployed to Azure Machine Learni
| input | input_data | uri_folder | The collected production inference data, which is registered as Azure Machine Learning data asset. | azureml:myproduction_inference_data:1 | | output | preprocessed_data | mltable | A tabular dataset, which matches a subset of baseline data schema. | | - # [Azure CLI](#tab/azure-cli) Once you've satisfied the previous requirements, you can set up model monitoring with the following CLI command and YAML definition:
created_monitor = poller.result()
The studio currently doesn't support monitoring for models that are deployed outside of Azure Machine Learning. See the Azure CLI or Python tabs instead. +
+## Set up model monitoring with custom signals and metrics
+
+With Azure Machine Learning model monitoring, you can define your own custom signal and implement any metric of your choice to monitor your model. You can register this signal as an Azure Machine Learning component. When your Azure Machine Learning model monitoring job runs on the specified schedule, it computes the metrics you have defined within your custom signal, just as it does for the prebuilt signals (data drift, prediction drift, data quality, and feature attribution drift). To get started with defining your own custom signal, you must meet the following requirement:
+
+* You must define your custom signal and register it as an Azure Machine Learning component. The Azure Machine Learning component must have these input and output signatures:
+
+### Component input signature
+
+The component input should include an `mltable` with the processed data from the preprocessing component, and any number of literals, each representing an implemented metric as part of the custom signal component. For example, if you implement the metric `std_deviation`, you need a `std_deviation_threshold` input. Generally, there should be one input per metric, named `{metric_name}_threshold`.
+
+ | signature name | type | description | example value |
+ |||||
+ | production_data | mltable | A tabular dataset, which matches a subset of baseline data schema. | |
+ | std_deviation_threshold | literal, string | Respective threshold for the implemented metric. | 2 |
+
+### Component output signature
+
+The component output DataFrame should contain four columns: `group`, `metric_name`, `metric_value`, and `threshold_value`:
+
+ | signature name | type | description | example value |
+ |||||
+ | group | literal, string | Top level logical grouping to be applied to this custom metric. | TRANSACTIONAMOUNT |
+ | metric_name | literal, string | The name of the custom metric. | std_deviation |
+ | metric_value | mltable | The value of the custom metric. | 44,896.082 |
+ | threshold_value | | The threshold for the custom metric. | 2 |
+
+Here is an example output from a custom signal component computing the metric, `std_deviation`:
+
+ | group | metric_value | metric_name | threshold_value |
+ |||||
+ | TRANSACTIONAMOUNT | 44,896.082 | std_deviation | 2 |
+ | LOCALHOUR | 3.983 | std_deviation | 2 |
+ | TRANSACTIONAMOUNTUSD | 54,004.902 | std_deviation | 2 |
+ | DIGITALITEMCOUNT | 7.238 | std_deviation | 2 |
+ | PHYSICALITEMCOUNT | 5.509 | std_deviation | 2 |
+
+An example custom signal component definition and metric computation code can be found in our GitHub repo at [https://github.com/Azure/azureml-examples/tree/main/cli/monitoring/components/custom_signal](https://github.com/Azure/azureml-examples/tree/main/cli/monitoring/components/custom_signal).
+
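To make the input and output signatures concrete, here's a minimal, hypothetical sketch of the metric computation such a component might run. It assumes the production data arrives as an `mltable` folder, that the input and output argument names (`--production_data`, `--std_deviation_threshold`, `--signal_metrics`) match your component definition, and that the component writes its result as a parquet file to the output folder; the column names follow the output signature above.

```python
import argparse
import mltable
import pandas as pd

parser = argparse.ArgumentParser()
parser.add_argument("--production_data", type=str)        # mltable folder input
parser.add_argument("--std_deviation_threshold", type=str)
parser.add_argument("--signal_metrics", type=str)         # output folder
args = parser.parse_args()

# Load the preprocessed production data.
df = mltable.load(args.production_data).to_pandas_dataframe()

# Compute the custom metric (standard deviation) per numeric column.
rows = []
for column in df.select_dtypes(include="number").columns:
    rows.append(
        {
            "group": column.upper(),
            "metric_name": "std_deviation",
            "metric_value": float(df[column].std()),
            "threshold_value": float(args.std_deviation_threshold),
        }
    )

# Persist the result; the registered component definition exposes this as an mltable output.
pd.DataFrame(rows).to_parquet(f"{args.signal_metrics}/signal_metrics.parquet")
```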
+# [Azure CLI](#tab/azure-cli)
+
+Once you've satisfied the previous requirements, you can set up model monitoring with the following CLI command and YAML definition:
+
+```azurecli
+az ml schedule create -f ./custom-monitoring.yaml
+```
+
+The following YAML contains the definition for model monitoring with a custom signal. It is assumed that you have already created and registered your component with the custom signal definition to Azure Machine Learning. In this example, the `component_id` of the registered custom signal component is `azureml:my_custom_signal:1.0.0`:
+
+```yaml
+# custom-monitoring.yaml
+$schema: http://azureml/sdk-2-0/Schedule.json
+name: my-custom-signal
+trigger:
+ type: recurrence
+ frequency: day # can be minute, hour, day, week, month
+ interval: 7 # every 7 days
+create_monitor:
+ compute:
+ instance_type: "standard_e8s_v3"
+ runtime_version: "3.2"
+ monitoring_signals:
+ customSignal:
+ type: custom
+ data_window_size: 360
+ component_id: azureml:my_custom_signal:1.0.0
+ input_datasets:
+ production_data:
+ input_dataset:
+ type: uri_folder
+ path: azureml:custom_without_drift:1
+ dataset_context: test
+ pre_processing_component: azureml:custom_preprocessor:1.0.0
+ metric_thresholds:
+ - metric_name: std_dev
+ threshold: 2
+ alert_notification:
+ emails:
+ - abc@example.com
+```
+
+# [Python](#tab/python)
+
+The Python SDK currently doesn't support monitoring for custom signals. See the Azure CLI tab instead.
+
+# [Studio](#tab/azure-studio)
+
+The studio currently doesn't support monitoring for custom signals. See the Azure CLI tab instead.
+++ ## Next steps - [Data collection from models in production (preview)](concept-data-collection.md)
machine-learning How To Submit Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-submit-spark-jobs.md
To create a job, a standalone Spark job can be defined as a YAML specification f
- `runtime_version` - defines the Spark runtime version. The following Spark runtime versions are currently supported: - `3.1` - `3.2`
+ - `3.3`
> [!IMPORTANT]
- >
- > End of life announcement (EOLA) for Azure Synapse Runtime for Apache Spark 3.1 was made on January 26, 2023. In accordance, Apache Spark 3.1 will not be supported after July 31, 2023. We recommend that you use Apache Spark 3.2.
+ > Azure Synapse Runtime for Apache Spark: Announcements
+ > * Azure Synapse Runtime for Apache Spark 3.1:
+ > * End of Life (EOLA) Announcement Date: January 26, 2023
+ > * End of Support Date: July 31, 2023. After this date, the runtime will be disabled.
+ > * Azure Synapse Runtime for Apache Spark 3.2:
+ > * EOLA Announcement Date: July 8, 2023
+ > * End of Support Date: July 8, 2024. After this date, the runtime will be disabled.
+ > * For continued support and optimal performance, we advise migrating to Apache Spark 3.3.
An example is shown here: ```yaml resources: instance_type: standard_e8s_v3
- runtime_version: "3.2"
+ runtime_version: "3.3"
``` - `compute` - this property defines the name of an attached Synapse Spark pool, as shown in this example: ```yaml
identity:
resources: instance_type: standard_e4s_v3
- runtime_version: "3.2"
+ runtime_version: "3.3"
``` > [!NOTE]
To create a standalone Spark job, use the `azure.ai.ml.spark` function, with the
- `runtime_version` - a key that defines the Spark runtime version. The following Spark runtime versions are currently supported: - `3.1.0` - `3.2.0`
+ - `3.3.0`
> [!IMPORTANT]
- >
- > End of life announcement (EOLA) for Azure Synapse Runtime for Apache Spark 3.1 was made on January 26, 2023. In accordance, Apache Spark 3.1 will not be supported after July 31, 2023. We recommend that you use Apache Spark 3.2.
+ > Azure Synapse Runtime for Apache Spark: Announcements
+ > * Azure Synapse Runtime for Apache Spark 3.1:
+ > * End of Life (EOLA) Announcement Date: January 26, 2023
+ > * End of Support Date: July 31, 2023. After this date, the runtime will be disabled.
+ > * Azure Synapse Runtime for Apache Spark 3.2:
+ > * EOLA Announcement Date: July 8, 2023
+ > * End of Support Date: July 8, 2024. After this date, the runtime will be disabled.
+ > * For continued support and optimal performance, we advise migrating to Apache Spark 3.3.
+ - `compute` - the name of an attached Synapse Spark pool. - `inputs` - the inputs for the Spark job. This parameter should pass a dictionary with mappings of the input data bindings used in the job. This dictionary has these values: - a dictionary key defines the input name
spark_job = spark(
executor_instances=2, resources={ "instance_type": "Standard_E8S_V3",
- "runtime_version": "3.2.0",
+ "runtime_version": "3.3.0",
}, inputs={ "titanic_data": Input(
To submit a standalone Spark job using the Azure Machine Learning studio UI:
2. If you selected **Spark serverless**: 1. Select **Virtual machine size**. 2. Select **Spark runtime version**.
- > [!IMPORTANT]
- >
- > End of life announcement (EOLA) for Azure Synapse Runtime for Apache Spark 3.1 was made on January 26, 2023. In accordance, Apache Spark 3.1 will not be supported after July 31, 2023. We recommend that you use Apache Spark 3.2.
+ > [!IMPORTANT]
+ > Azure Synapse Runtime for Apache Spark: Announcements
+ > * Azure Synapse Runtime for Apache Spark 3.1:
+ > * End of Life (EOLA) Announcement Date: January 26, 2023
+ > * End of Support Date: July 31, 2023. After this date, the runtime will be disabled.
+ > * Azure Synapse Runtime for Apache Spark 3.2:
+ > * EOLA Announcement Date: July 8, 2023
+ > * End of Support Date: July 8, 2024. After this date, the runtime will be disabled.
+ > * For continued support and optimal performance, we advise migrating to Apache Spark 3.3.
3. If you selected **Attached compute**: 1. Select an attached Synapse Spark pool from the **Select Azure Machine Learning attached compute** menu. 4. Select **Next**.
def spark_pipeline(spark_input_data):
spark_step.identity = ManagedIdentityConfiguration() spark_step.resources = { "instance_type": "Standard_E8S_V3",
- "runtime_version": "3.2.0",
+ "runtime_version": "3.3.0",
} pipeline = spark_pipeline(
machine-learning How To Use Openai Models In Azure Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-openai-models-in-azure-ml.md
> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-In this article, you learn how to discover, finetune and deploy Azure Open AI models at scale, using Azure Machine Learning.
+In this article, you learn how to discover, fine-tune, and deploy Azure OpenAI models at scale by using Azure Machine Learning.
## Prerequisites-- [You must have access](../ai-services/openai/overview.md#how-do-i-get-access-to-azure-openai) to the Azure Open AI service
+- [You must have access](../ai-services/openai/overview.md#how-do-i-get-access-to-azure-openai) to the Azure OpenAI Service
- You must be in an Azure OpenAI service [supported region](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability) ## What is OpenAI Models in Azure Machine Learning?
In recent years, advancements in AI have led to the rise of large foundation mod
- Deploying Azure OpenAI Models with Azure Machine Learning to the Azure OpenAI service ## Access Azure OpenAI models in Azure Machine Learning
-The model catalog (preview) in Azure Machine Learning studio is your starting point to explore various collections of foundation models. The Azure Open AI models collection is a collection of models, exclusively available on Azure. These models enable customers to access prompt engineering, finetuning, evaluation, and deployment capabilities for large language models available in Azure OpenAI Service. You can view the complete list of supported OpenAI models in the [model catalog](https://ml.azure.com/model/catalog), under the `Azure OpenAI Service` collection.
+The model catalog (preview) in Azure Machine Learning studio is your starting point to explore various collections of foundation models. The Azure OpenAI models collection is a collection of models exclusively available on Azure. These models enable customers to access prompt engineering, fine-tuning, evaluation, and deployment capabilities for large language models available in Azure OpenAI Service. You can view the complete list of supported OpenAI models in the [model catalog](https://ml.azure.com/model/catalog), under the `Azure OpenAI Service` collection.
> [!TIP] >Supported OpenAI models are published to the AzureML Model Catalog. View a complete list of [Azure OpenAI models](../ai-services/openai/concepts/models.md).
You might receive any of the following errors when you try to deploy an Azure Op
- **Fix**: Azure OpenAI failed to create. This is due to quota issues; make sure you have enough quota for the deployment. - **Failed to fetch Azure OpenAI deployments**
- - **Fix**: Unable to create the resource. Due to one of, the following reasons. You aren't in correct region, or you have exceeded the maximum limit of three Azure Open AI resources. You need to delete an existing Azure OpenAI resource or you need to make sure you created a workspace in one of the [supported regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability).
+ - **Fix**: Unable to create the resource, due to one of the following reasons: you aren't in a supported region, or you have exceeded the maximum limit of three Azure OpenAI resources. Delete an existing Azure OpenAI resource, or make sure you created a workspace in one of the [supported regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability).
- **Failed to get Azure OpenAI resource**
- - **Fix**: Unable to create the resource. Due to one of, the following reasons. You aren't in correct region, or you have exceeded the maximum limit of three Azure Open AI resources. You need to delete an existing Azure OpenAI resource or you need to make sure you created a workspace in one of the [supported regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability).
+ - **Fix**: Unable to create the resource, due to one of the following reasons: you aren't in a supported region, or you have exceeded the maximum limit of three Azure OpenAI resources. Delete an existing Azure OpenAI resource, or make sure you created a workspace in one of the [supported regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability).
- **Failed to get Azure OpenAI resource**
- - **Fix**: Unable to create the resource. Due to one of, the following reasons. You aren't in correct region, or you have exceeded the maximum limit of three Azure Open AI resources. You need to delete an existing Azure OpenAI resource or you need to make sure you created a workspace in one of the [supported regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability).
+ - **Fix**: Unable to create the resource, due to one of the following reasons: you aren't in a supported region, or you have exceeded the maximum limit of three Azure OpenAI resources. Delete an existing Azure OpenAI resource, or make sure you created a workspace in one of the [supported regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability).
- **Model Not Deployable** - **Fix**: This usually happens while trying to deploy a GPT-4 model. Due to high demand you need to [apply for access to use GPT-4 models](/azure/ai-services/openai/concepts/models#gpt-4-models).
machine-learning Reference Yaml Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-monitor.md
As the data used to train the model evolves in production, the distribution of t
| `target_dataset.dataset` | Object | **Optional**. Description of production data to be analyzed for monitoring signal. | | | | `target_dataset.dataset.input_dataset` | Object | **Optional**. Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification. | | | | `target_dataset.dataset.dataset_context` | String | The context of data; it refers to model production data and could be model inputs or model outputs | `model_inputs` | |
-| `target_dataset.dataset.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `target_dataset.dataset.input_dataset.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-for-models-deployed-outside-of-azure-machine-learning). | | |
+| `target_dataset.dataset.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `target_dataset.dataset.input_dataset.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-own-production-data-to-azure-machine-learning). | | |
| `target_dataset.data_window_size` | Integer |**Optional**. Data window size in days. This is the production data window to be computed for data drift. | By default the data window size is the last monitoring period. | | | `baseline_dataset` | Object | **Optional**. Recent past production data is used as comparison baseline data if this isn't specified. Recommendation is to use training data as comparison baseline. | | | | `baseline_dataset.input_dataset` | Object | Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification. | | | | `baseline_dataset.dataset_context` | String | The context of data, it refers to the context that dataset was used before | `model_inputs`, `training`, `test`, `validation` | |
-| `baseline_dataset.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is **required** if `baseline_dataset.input_dataset.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-for-models-deployed-outside-of-azure-machine-learning). | | |
+| `baseline_dataset.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is **required** if `baseline_dataset.input_dataset.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-own-production-data-to-azure-machine-learning). | | |
| `features` | Object | **Optional**. Target features to be monitored for data drift. Some models might have hundreds or thousands of features, it's always recommended to specify interested features for monitoring. | One of following values: list of feature names, `features.top_n_feature_importance`, or `all_features` | Default `features.top_n_feature_importance = 10` if `baseline_dataset.dataset_context` is `training`, otherwise, default is `all_features` | | `data_segment` | Object | **Optional**. Description of specific data segment to be monitored for data drift. | | | | `data_segment.feature_name` | String | The name of feature used to filter for data segment. | | |
Prediction drift tracks changes in the distribution of a model's prediction outp
| `target_dataset.dataset` | Object | **Optional**. Description of production data to be analyzed for monitoring signal. | | | | `target_dataset.dataset.input_dataset` | Object | **Optional**. Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification.| | | | `target_dataset.dataset.dataset_context` | String | The context of data; it refers to model production data and could be model inputs or model outputs | `model_outputs` | |
-| `target_dataset.dataset.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `target_dataset.dataset.input_dataset.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-for-models-deployed-outside-of-azure-machine-learning). | | |
+| `target_dataset.dataset.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `target_dataset.dataset.input_dataset.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-own-production-data-to-azure-machine-learning). | | |
| `target_dataset.data_window_size` | Integer | **Optional**. Data window size in days. This is the production data window to be computed for prediction drift. | By default the data window size is the last monitoring period.| | | `baseline_dataset` | Object | **Optional**. Recent past production data is used as comparison baseline data if this isn't specified. Recommendation is to use training data as comparison baseline. | | | | `baseline_dataset.input_dataset` | Object | Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification. | | | | `baseline_dataset.dataset_context` | String | The context of data, it refers to the context that dataset come from. | `model_inputs`, `model_outputs`, `test`, `validation` | | | `baseline_dataset.target_column_name` | String | The name of target column. | | |
-| `baseline_dataset.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `baseline_dataset.input_dataset.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-for-models-deployed-outside-of-azure-machine-learning). | | |
+| `baseline_dataset.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `baseline_dataset.input_dataset.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-own-production-data-to-azure-machine-learning). | | |
| `alert_notification` | Boolean | Turn on/off alert notification for the monitoring signal. `True` or `False` | | | | `metric_thresholds` | Object | List of metrics and thresholds properties for the monitoring signal. When threshold is exceeded and `alert_notification` is on, user will receive alert notification. | | By default, the object contains `numerical` metric `population_stability_index` with threshold of `0.02` and `categorical` metric `normalized_wasserstein_distance` with threshold of `0.02`| |`metric_thresholds.applicable_feature_type` | String | Feature type that the metric will be applied to. | `numerical` or `categorical`| |
Data quality signal tracks data quality issues in production by comparing to tra
| `target_dataset.dataset` | Object | **Optional**. Description of production data to be analyzed for monitoring signal. | | | | `target_dataset.dataset.input_dataset` | Object | **Optional**. Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification.| | | | `target_dataset.dataset.dataset_context` | String | The context of data; it refers to model production data and could be model inputs or model outputs | `model_inputs`, `model_outputs` | |
-| `target_dataset.dataset.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `target_dataset.dataset.input_dataset.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-for-models-deployed-outside-of-azure-machine-learning). | | |
+| `target_dataset.dataset.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `target_dataset.dataset.input_dataset.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-own-production-data-to-azure-machine-learning). | | |
| `target_dataset.data_window_size` | Integer | **Optional**. Data window size in days. This is the production data window to be computed for data quality issues. | By default the data window size is the last monitoring period.| | | `baseline_dataset` | Object | **Optional**. Recent past production data is used as comparison baseline data if this isn't specified. Recommendation is to use training data as comparison baseline. | | | | `baseline_dataset.input_dataset` | Object | Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification. | | | | `baseline_dataset.dataset_context` | String | The context of data, it refers to the context that dataset was used before | `model_inputs`, `model_outputs`, `training`, `test`, `validation` | |
-| `baseline_dataset.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `baseline_dataset.input_dataset.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-for-models-deployed-outside-of-azure-machine-learning). | | |
+| `baseline_dataset.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `baseline_dataset.input_dataset.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-own-production-data-to-azure-machine-learning). | | |
| `features` | Object | **Optional**. Target features to be monitored for data quality. Some models might have hundreds or thousands of features. It's always recommended to specify interested features for monitoring. | One of following values: list of feature names, `features.top_n_feature_importance`, or `all_features` | Default to `features.top_n_feature_importance = 10` if `baseline_dataset.dataset_context` is `training`, otherwise default is `all_features` | | `alert_notification` | Boolean | Turn on/off alert notification for the monitoring signal. `True` or `False` | | | | `metric_thresholds` | Object | List of metrics and thresholds properties for the monitoring signal. When threshold is exceeded and `alert_notification` is on, user will receive alert notification. | |By default, the object contains following `numerical` and ` categorical` metrics: `null_value_rate`, `data_type_error_rate`, and `out_of_bounds_rate` |
The feature attribution of a model may change over time due to changes in the di
| `target_dataset.dataset` | Object | **Optional**. Description of production data to be analyzed for monitoring signal. | | | | `target_dataset.dataset.input_dataset` | Object | **Optional**. Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification.| | | | `target_dataset.dataset.dataset_context` | String | The context of data. It refers to production model inputs data. | `model_inputs` | |
-| `target_dataset.dataset.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `target_dataset.dataset.input_dataset.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-for-models-deployed-outside-of-azure-machine-learning). | | |
+| `target_dataset.dataset.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `target_dataset.dataset.input_dataset.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-own-production-data-to-azure-machine-learning). | | |
| `target_dataset.lookback_period_days` | Integer |Lookback window to include extra data in current monitoring run, this is useful if you want model monitoring to run more frequently but the production data within monitoring period isn't enough or skewed. | | | | `baseline_dataset` | Object | **Required**. It must be `training` data. | | | | `baseline_dataset.input_dataset` | Object | Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification. | | | | `baseline_dataset.dataset_context` | String | The context of data, it refers to the context that dataset was used before. | `training` | |
-| `baseline_dataset.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `baseline_dataset.input_dataset.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-for-models-deployed-outside-of-azure-machine-learning). | | |
+| `baseline_dataset.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `baseline_dataset.input_dataset.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-own-production-data-to-azure-machine-learning). | | |
| `alert_notification` | Boolean | Turn on/off alert notification for the monitoring signal. `True` or `False` | | | | `metric_thresholds` | Object | List of metrics and thresholds properties for the monitoring signal. When threshold is exceeded and `alert_notification` is on, user will receive alert notification. | | By default, the object contains `normalized_discounted_cumulative_gain` metric with threshold of `0.02`| |`metric_thresholds.applicable_feature_type` | String | Feature type that the metric will be applied to. | `all_feature_types` | `all feature_types` |
machine-learning Tutorial Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-deploy-model.md
The steps you'll take are:
[!INCLUDE [notebook set kernel](includes/prereq-set-kernel.md)]
+> [!NOTE]
+>- Serverless Spark Compute doesn't have `Python 3.10 - SDK v2` installed by default. We recommend that you create a compute instance and select it before you proceed with the tutorial.
+ <!-- nbstart https://raw.githubusercontent.com/Azure/azureml-examples/main/tutorials/get-started-notebooks/deploy-model.ipynb -->
machine-learning How To Save Write Experiment Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-save-write-experiment-files.md
For this reason, we recommend:
### Storage limits of experiment snapshots
-For experiments, Azure Machine Learning automatically makes an experiment snapshot of your code based on the directory you suggest when you configure the job. This has a total limit of 300 MB and/or 2000 files. If you exceed this limit, you'll see the following error:
+For experiments, Azure Machine Learning automatically makes an experiment snapshot of your code based on the directory you suggest when you configure the job. For a pipeline, the directory is configured for each step.
+
+The snapshot has a total limit of 300 MB and 2000 files. If you exceed either limit, you'll see the following error:
```Python While attempting to take snapshot of .
mariadb Concepts Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-audit-logs.md
Title: Audit logs - Azure Database for MariaDB description: Describes the audit logs available in Azure Database for MariaDB, and the available parameters for enabling logging levels. --++ Last updated 06/24/2022
mariadb Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-azure-advisor-recommendations.md
Title: Azure Advisor for MariaDB description: Learn about Azure Advisor recommendations for MariaDB. --++ Last updated 06/24/2022
mariadb Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-backup.md
Title: Backup and restore - Azure Database for MariaDB description: Learn about automatic backups and restoring your Azure Database for MariaDB server. --++ Last updated 06/24/2022
mariadb Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-business-continuity.md
Title: Business continuity - Azure Database for MariaDB description: Learn about business continuity (point-in-time restore, data center outage, geo-restore) when using Azure Database for MariaDB service. --++ Last updated 06/24/2022
mariadb Concepts Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-compatibility.md
Title: Drivers and tools compatibility - Azure Database for MariaDB description: This article describes the MariaDB drivers and management tools that are compatible with Azure Database for MariaDB. --++ Last updated 06/24/2022
mariadb Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-connectivity-architecture.md
Title: Connectivity architecture - Azure Database for MariaDB description: Describes the connectivity architecture for your Azure Database for MariaDB server.--++ Last updated 06/24/2022
mariadb Concepts Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-connectivity.md
Title: Transient connectivity errors - Azure Database for MariaDB
description: Learn how to handle transient connectivity errors for Azure Database for MariaDB. keywords: mysql connection,connection string,connectivity issues,transient error,connection error --++ Last updated 06/24/2022
mariadb Concepts Data Access Security Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-data-access-security-vnet.md
Title: VNet service endpoints - Azure Database for MariaDB description: 'Describes how VNet service endpoints work for your Azure Database for MariaDB server.' --++ Last updated 06/24/2022
mariadb Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-firewall-rules.md
Title: Firewall rules - Azure Database for MariaDB description: Learn about using firewall rules to enable connections to your Azure Database for MariaDB server. --++ Last updated 06/24/2022
mariadb Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-limits.md
Title: Limitations - Azure Database for MariaDB description: This article describes limitations in Azure Database for MariaDB, such as number of connection and storage engine options. --++ Last updated 06/24/2022
mariadb Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-monitoring.md
Title: Monitoring - Azure Database for MariaDB description: This article describes the metrics for monitoring and alerting for Azure Database for MariaDB, including CPU, storage, and connection statistics. --++ Last updated 06/24/2022
mariadb Concepts Planned Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-planned-maintenance-notification.md
Title: Planned maintenance notification - Azure Database for MariaDB description: This article describes the Planned maintenance notification feature in Azure Database for MariaDB --++ Last updated 06/24/2022
mariadb Concepts Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-pricing-tiers.md
Title: Pricing tiers - Azure Database for MariaDB description: Learn about the various pricing tiers for Azure Database for MariaDB including compute generations, storage types, storage size, vCores, memory, and backup retention periods. --++ Last updated 06/24/2022
mariadb Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-query-performance-insight.md
Title: Query Performance Insight - Azure Database for MariaDB description: This article describes the Query Performance Insight feature in Azure Database for MariaDB --++ Last updated 06/24/2022
mariadb Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-read-replicas.md
Title: Read replicas - Azure Database for MariaDB description: 'Learn about read replicas in Azure Database for MariaDB: choosing regions, creating replicas, connecting to replicas, monitoring replication, and stopping replication.' --++ Last updated 06/24/2022
mariadb Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-security.md
Title: Security - Azure Database for MariaDB description: An overview of the security features in Azure Database for MariaDB. --++ Last updated 06/24/2022
mariadb Concepts Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-server-logs.md
Title: Slow query logs - Azure Database for MariaDB description: Describes the logs available in Azure Database for MariaDB, and the available parameters for enabling different logging levels. --++ Last updated 06/24/2022
mariadb Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-server-parameters.md
Title: Server parameters - Azure Database for MariaDB description: This topic provides guidelines for configuring server parameters in Azure Database for MariaDB. --++ Last updated 06/24/2022
mariadb Concepts Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-servers.md
Title: Servers - Azure Database for MariaDB description: This topic provides considerations and guidelines for working with Azure Database for MariaDB servers. --++ Last updated 06/24/2022
mariadb Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-supported-versions.md
Title: Supported versions - Azure Database for MariaDB description: Learn which versions of the MariaDB server are supported in the Azure Database for MariaDB service. --++ Last updated 06/24/2022
mariadb Connect Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/connect-workbench.md
Title: 'Quickstart: Connect MySQL Workbench - Azure Database for MariaDB' description: This quickstart provides the steps to use MySQL Workbench to connect to and query data from Azure Database for MariaDB. --++ Last updated 06/24/2022
mariadb Howto Alert Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-alert-metric.md
Title: Configure metric alerts - Azure portal - Azure Database for MariaDB description: This article describes how to configure and access metric alerts for Azure Database for MariaDB from the Azure portal. --++ Last updated 06/24/2022
mariadb Howto Auto Grow Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-auto-grow-storage-cli.md
Title: Auto grow storage - Azure CLI - Azure Database for MariaDB description: This article describes how you can enable auto grow storage using the Azure CLI in Azure Database for MariaDB. --++ Last updated 06/24/2022
mariadb Howto Auto Grow Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-auto-grow-storage-portal.md
Title: Auto grow storage - Azure portal - Azure Database for MariaDB description: This article describes how you can enable auto grow storage for Azure Database for MariaDB using Azure portal --++ Last updated 06/24/2022
mariadb Howto Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-auto-grow-storage-powershell.md
Title: Auto grow storage - Azure PowerShell - Azure Database for MariaDB description: This article describes how you can enable auto grow storage using PowerShell in Azure Database for MariaDB. --++ Last updated 06/24/2022
mariadb Howto Configure Audit Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-audit-logs-cli.md
Title: Access audit logs - Azure CLI - Azure Database for MariaDB description: This article describes how to configure and access the audit logs in Azure Database for MariaDB from the Azure CLI. --++ Last updated 06/24/2022
mariadb Howto Configure Audit Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-audit-logs-portal.md
Title: Access audit logs - Azure portal - Azure Database for MariaDB description: This article describes how to configure and access the audit logs in Azure Database for MariaDB from the Azure portal. --++ Last updated 06/24/2022
mariadb Howto Configure Server Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-server-logs-cli.md
Title: Access slow query logs - Azure CLI - Azure Database for MariaDB description: This article describes how to access the slow logs in Azure Database for MariaDB by using the Azure CLI command-line utility. --++ ms.devlang: azurecli
mariadb Howto Configure Server Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-server-logs-portal.md
Title: Access slow query logs - Azure portal - Azure Database for MariaDB description: This article describes how to configure and access the slow query logs in Azure Database for MariaDB from the Azure portal. --++ Last updated 06/24/2022
mariadb Howto Configure Server Parameters Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-server-parameters-cli.md
Title: Configure server parameters - Azure CLI - Azure Database for MariaDB description: This article describes how to configure the service parameters in Azure Database for MariaDB using the Azure CLI command line utility. --++ ms.devlang: azurecli
mariadb Howto Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-server-parameters-using-powershell.md
Title: Configure Azure Database for MariaDB - Azure PowerShell description: This article describes how to configure the service parameters in Azure Database for MariaDB using PowerShell. --++ ms.devlang: azurepowershell Last updated 06/24/2022
mariadb Howto Configure Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-ssl.md
Title: Configure SSL - Azure Database for MariaDB description: Instructions for how to properly configure Azure Database for MariaDB and associated applications to correctly use SSL connections --++ Last updated 04/19/2023 ms.devlang: csharp, golang, java, php, python, ruby
mariadb Howto Connection String Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-connection-string-powershell.md
Title: Generate a connection string with PowerShell - Azure Database for MariaDB description: This article provides an Azure PowerShell example to generate a connection string for connecting to Azure Database for MariaDB. --++ Last updated 06/24/2022
mariadb Howto Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-connection-string.md
Title: Connection strings - Azure Database for MariaDB description: This document lists the currently supported connection strings for applications to connect with Azure Database for MariaDB, including ADO.NET (C#), JDBC, Node.js, ODBC, PHP, Python, and Ruby. --++ Last updated 06/24/2022
mariadb Howto Create Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-create-manage-server-portal.md
Title: Manage server - Azure portal - Azure Database for MariaDB description: Learn how to manage an Azure Database for MariaDB server from the Azure portal. --++ Last updated 06/24/2022
mariadb Howto Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-create-users.md
Title: Create users - Azure Database for MariaDB description: This article describes how you can create new user accounts to interact with an Azure Database for MariaDB server. --++ Last updated 06/24/2022
mariadb Howto Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-data-in-replication.md
Title: Configure data-in Replication - Azure Database for MariaDB
description: This article describes how to set up Data-in Replication in Azure Database for MariaDB. --++ Last updated 04/19/2023
mariadb Howto Manage Firewall Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-manage-firewall-cli.md
Title: Manage firewall rules - Azure CLI - Azure Database for MariaDB description: This article describes how to create and manage Azure Database for MariaDB firewall rules using Azure CLI command-line. --++ ms.devlang: azurecli
mariadb Howto Manage Firewall Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-manage-firewall-portal.md
Title: Manage firewall rules - Azure portal - Azure Database for MariaDB description: Create and manage Azure Database for MariaDB firewall rules using the Azure portal --++ Last updated 06/24/2022
mariadb Howto Manage Vnet Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-manage-vnet-cli.md
Title: Manage VNet endpoints - Azure CLI - Azure Database for MariaDB description: This article describes how to create and manage Azure Database for MariaDB VNet service endpoints and rules using Azure CLI command line. --++ ms.devlang: azurecli
mariadb Howto Manage Vnet Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-manage-vnet-portal.md
Title: Manage VNet endpoints - Azure portal - Azure Database for MariaDB description: Create and manage Azure Database for MariaDB VNet service endpoints and rules using the Azure portal --++ Last updated 06/24/2022
mariadb Howto Migrate Dump Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-migrate-dump-restore.md
Title: Migrate with dump and restore - Azure Database for MariaDB description: This article explains two common ways to back up and restore databases in your Azure database for MariaDB by using tools such as mysqldump, MySQL Workbench, and phpMyAdmin. --++
mariadb Howto Move Regions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-move-regions-portal.md
Title: Move Azure regions - Azure portal - Azure Database for MariaDB description: Move an Azure Database for MariaDB server from one Azure region to another using a read replica and the Azure portal. --++ Last updated 06/24/2022
mariadb Howto Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-read-replicas-cli.md
Title: Manage read replicas - Azure CLI, REST API - Azure Database for MariaDB description: This article describes how to set up and manage read replicas in Azure Database for MariaDB using the Azure CLI and REST API. --++ Last updated 06/24/2022
mariadb Howto Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-read-replicas-portal.md
Title: Manage read replicas - Azure portal - Azure Database for MariaDB description: This article describes how to set up and manage read replicas in Azure Database for MariaDB using the portal --++ Last updated 06/24/2022
mariadb Howto Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-read-replicas-powershell.md
Title: Manage Azure Database for MariaDB read replicas description: Learn how to set up and manage read replicas in Azure Database for MariaDB using PowerShell in the General Purpose or Memory Optimized pricing tiers. --++ Last updated 06/24/2022
mariadb Howto Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-redirection.md
Title: Connect with redirection - Azure Database for MariaDB
description: This article describes how you can configure your application to connect to Azure Database for MariaDB with redirection. --++ Last updated 04/19/2023
mariadb Howto Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restart-server-cli.md
Title: Restart server - Azure CLI - Azure Database for MariaDB description: This article describes how you can restart an Azure Database for MariaDB server using the Azure CLI. --++ Last updated 06/24/2022
mariadb Howto Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restart-server-portal.md
Title: Restart server - Azure portal - Azure Database for MariaDB description: This article describes how you can restart an Azure Database for MariaDB server using the Azure Portal. --++ Last updated 06/24/2022
mariadb Howto Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restart-server-powershell.md
Title: Restart Azure Database for MariaDB server - Azure PowerShell description: Learn how you can restart an Azure Database for MariaDB server using PowerShell. The time required for a restart depends on the MariaDB recovery process. --++ Last updated 06/24/2022
mariadb Howto Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restore-dropped-server.md
Title: Restore a deleted Azure Database for MariaDB server description: This article describes how to restore a deleted server in Azure Database for MariaDB using the Azure portal. --++ Last updated 06/24/2022
mariadb Howto Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restore-server-cli.md
Title: Backup and restore - Azure CLI - Azure Database for MariaDB description: Learn how to backup and restore a server in Azure Database for MariaDB by using the Azure CLI. --++ ms.devlang: azurecli
mariadb Howto Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restore-server-portal.md
Title: Backup and restore - Azure portal - Azure Database for MariaDB description: This article describes how to restore a server in Azure Database for MariaDB using the Azure portal. --++ Last updated 06/24/2022
mariadb Howto Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restore-server-powershell.md
Title: Backup and restore - Azure PowerShell - Azure Database for MariaDB description: Learn how to backup and restore a server in Azure Database for MariaDB by using Azure PowerShell. --++ ms.devlang: azurepowershell Last updated 06/24/2022
mariadb Howto Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-server-parameters.md
Title: Configure server parameters - Azure portal - Azure Database for MariaDB description: This article describes how to configure MariaDB server parameters in Azure Database for MariaDB using the Azure portal. --++ Last updated 06/24/2022
mariadb Howto Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-troubleshoot-common-connection-issues.md
Title: Troubleshoot connection issues - Azure Database for MariaDB description: Learn how to troubleshoot connection issues to Azure Database for MariaDB, including transient errors requiring retries, firewall issues, and outages. --++ Last updated 06/24/2022
mariadb Howto Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-troubleshoot-query-performance.md
Title: Troubleshoot query performance - Azure Database for MariaDB description: Learn how to use EXPLAIN to troubleshoot query performance in Azure Database for MariaDB. --++ Last updated 06/24/2022
mariadb Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/overview.md
Title: Overview - Azure Database for MariaDB description: Learn about the Azure Database for MariaDB service, a relational database service in the Microsoft cloud based on the MariaDB community edition. --++ Last updated 06/24/2022
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md
Title: Built-in policy definitions for Azure Database for MariaDB
description: Lists Azure Policy built-in policy definitions for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing your Azure resources. --++ Last updated 08/08/2023
mariadb Quickstart Create Mariadb Server Database Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-arm-template.md
Title: 'Quickstart: Create an Azure Database for MariaDB - ARM template' description: In this Quickstart article, learn how to create an Azure Database for MariaDB server by using an Azure Resource Manager template. --++ Last updated 06/24/2022
mariadb Quickstart Create Mariadb Server Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-using-azure-cli.md
Title: 'Quickstart: Create a server - Azure CLI - Azure Database for MariaDB' description: This quickstart describes how to use the Azure CLI to create an Azure Database for MariaDB server in an Azure resource group. --++ ms.devlang: azurecli Last updated 06/24/2022
mariadb Quickstart Create Mariadb Server Database Using Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-using-azure-portal.md
Title: 'Quickstart: Create a server - Azure portal - Azure Database for MariaDB' description: This article shows you how to use the Azure portal to quickly create a sample Azure Database for MariaDB server in about five minutes.--++ Last updated 06/24/2022
mariadb Quickstart Create Mariadb Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-using-azure-powershell.md
Title: 'Quickstart: Create a server - Azure PowerShell - Azure Database for MariaDB' description: This quickstart describes how to use PowerShell to create an Azure Database for MariaDB server in an Azure resource group.--++ Last updated 06/24/2022
mariadb Reference Stored Procedures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/reference-stored-procedures.md
Title: Management stored procedures - Azure Database for MariaDB description: Learn which stored procedures in Azure Database for MariaDB are useful to help you configure data-in replication, set the timezone, and kill queries. --++ Last updated 06/24/2022
mariadb Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/sample-scripts-azure-cli.md
Title: Azure CLI samples - Azure Database for MariaDB | Microsoft Docs description: This article lists the Azure CLI code samples available for interacting with Azure Database for MariaDB. --++ ms.devlang: azurecli
mariadb Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-change-server-configuration.md
Title: CLI script - Change server parameters - Azure Database for MariaDB description: This sample CLI script lists all available server configurations and updates of an Azure Database for MariaDB. --++ ms.devlang: azurecli
mariadb Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-create-server-and-firewall-rule.md
Title: CLI script - Create server - Azure Database for MariaDB description: This sample CLI script creates an Azure Database for MariaDB server and configures a server-level firewall rule. --++ ms.devlang: azurecli
mariadb Sample Create Server With Vnet Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-create-server-with-vnet-rule.md
Title: CLI script - Create server with vNet rule - Azure Database for MariaDB description: This sample CLI script creates an Azure Database for MariaDB server with a service endpoint on a virtual network and configures a vNet rule. --++ ms.devlang: azurecli
mariadb Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-point-in-time-restore.md
Title: CLI script - Restore server - Azure Database for MariaDB description: This sample Azure CLI script shows how to restore an Azure Database for MariaDB server and its databases to a previous point in time. --++ ms.devlang: azurecli
mariadb Sample Scale Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-scale-server.md
Title: CLI script - Scale server - Azure Database for MariaDB description: This sample CLI script scales Azure Database for MariaDB server to a different performance level after querying the metrics. --++ ms.devlang: azurecli
mariadb Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-server-logs.md
Title: CLI script - Download slow query logs - Azure Database for MariaDB description: This sample Azure CLI script shows how to enable and download the slow query logs of an Azure Database for MariaDB server. --++ ms.devlang: azurecli
mariadb Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/security-controls-policy.md
description: Lists Azure Policy Regulatory Compliance controls available for Azu
Last updated 08/03/2023 --++ # Azure Policy Regulatory Compliance controls for Azure Database for MariaDB
mariadb Tutorial Design Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/tutorial-design-database-cli.md
Title: 'Tutorial: Design an Azure Database for MariaDB - Azure CLI' description: This tutorial explains how to create and manage Azure Database for MariaDB server and database using Azure CLI from the command line. --++ ms.devlang: azurecli Last updated 06/24/2022
mariadb Tutorial Design Database Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/tutorial-design-database-using-portal.md
Title: 'Tutorial: Design an Azure Database for MariaDB - Azure portal' description: This tutorial explains how to create and manage an Azure Database for MariaDB server and database by using the Azure portal. --++ Last updated 06/24/2022
mariadb Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/tutorial-design-database-using-powershell.md
Title: 'Tutorial: Design a server - Azure PowerShell - Azure Database for MariaDB' description: This tutorial explains how to create and manage Azure Database for MariaDB server and database using PowerShell. --++ ms.devlang: azurepowershell Last updated 06/24/2022
mysql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-azure-ad-authentication.md
Only the administrator based on an Azure AD account can create the first Azure A
Methods of authentication for accessing the flexible server include:
- MySQL authentication only - This is the default option. Only the native MySQL authentication with a MySQL sign-in and password can be used to access the flexible server.
-- Only Azure AD authentication - MySQL native authentication is disabled, and users are able to authenticate using only their Azure AD user and token. To enable this mode, the server parameter **aad_auth_only** is set to _enabled_.
-- Authentication with MySQL and Azure AD - Both native MySQL authentication and Azure AD authentication are supported. To enable this mode, the server parameter **aad_auth_only** is set to _disabled_.
+- Only Azure AD authentication - MySQL native authentication is disabled, and users are able to authenticate using only their Azure AD user and token. To enable this mode, the server parameter **aad_auth_only** is set to _**ON**_.
+
+- Authentication with MySQL and Azure AD - Both native MySQL authentication and Azure AD authentication are supported. To enable this mode, the server parameter **aad_auth_only** is set to _**OFF**_.
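To make the distinction concrete, the following Azure CLI sketch shows how the **aad_auth_only** server parameter could be flipped between the two modes. It is illustrative only; the resource group and server names are placeholders, not values from the original article.

```azurecli
# Minimal sketch; myResourceGroup and mydemoserver are placeholder names.

# Enforce Azure AD authentication only (native MySQL logins are disabled):
az mysql flexible-server parameter set \
    --resource-group myResourceGroup \
    --server-name mydemoserver \
    --name aad_auth_only \
    --value ON

# Allow both native MySQL and Azure AD authentication:
az mysql flexible-server parameter set \
    --resource-group myResourceGroup \
    --server-name mydemoserver \
    --name aad_auth_only \
    --value OFF
```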
## Permissions
Once you authenticate against the Active Directory, you retrieve a token. This t
## Next steps
- To learn how to configure Azure AD with Azure Database for MySQL, see [Set up Azure Active Directory authentication for Azure Database for MySQL - Flexible Server](how-to-azure-ad.md)
mysql Connect Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-workbench.md
Title: 'Quickstart: Connect - MySQL Workbench - Azure Database for MySQL - Flexible Server' description: This Quickstart provides the steps to use MySQL Workbench to connect and query data from Azure Database for MySQL - Flexible Server.--++
mysql How To Manage Virtual Network Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-manage-virtual-network-cli.md
Title: Manage virtual networks - Azure CLI - Azure Database for MySQL - Flexible Server description: Create and manage virtual networks for Azure Database for MySQL - Flexible Server using the Azure CLI--++
mysql How To Manage Virtual Network Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-manage-virtual-network-portal.md
Title: Manage virtual networks - Azure portal - Azure Database for MySQL - Flexible Server description: Create and manage virtual networks for Azure Database for MySQL - Flexible Server using the Azure portal--++
mysql How To Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-create-users.md
Title: How to create users for Azure Database for MySQL description: This article describes how to create new user accounts to interact with an Azure Database for MySQL server.--++ Last updated 03/29/2023
mysql Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-change-server-configuration.md
Title: CLI script - Change server parameters - Azure Database for MySQL description: This sample CLI script lists all available server configurations and updates the value of innodb_lock_wait_timeout.--++ ms.devlang: azurecli
mysql Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-create-server-and-firewall-rule.md
Title: CLI script - Create server - Azure Database for MySQL description: This sample CLI script creates an Azure Database for MySQL server and configures a server-level firewall rule.--++ ms.devlang: azurecli
mysql Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-point-in-time-restore.md
Title: CLI script - Restore server - Azure Database for MySQL description: This sample Azure CLI script shows how to restore an Azure Database for MySQL server and its databases to a previous point in time.--++ ms.devlang: azurecli
mysql Sample Scale Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-scale-server.md
Title: CLI script - Scale server - Azure Database for MySQL description: This sample CLI script scales Azure Database for MySQL server to a different performance level after querying the metrics.--++ ms.devlang: azurecli
mysql Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-server-logs.md
Title: CLI script - Download slow query logs - Azure Database for MySQL description: This sample Azure CLI script shows how to enable and download the server logs of an Azure Database for MySQL server.--++ ms.devlang: azurecli
mysql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-aks.md
description: Learn about connecting Azure Kubernetes Service with Azure Database
--++ Last updated 06/20/2022
mysql Concepts Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-audit-logs.md
description: Describes the audit logs available in Azure Database for MySQL, and
--++ Last updated 06/20/2022
mysql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-azure-ad-authentication.md
description: Learn about the concepts of Azure Active Directory for authenticati
--++ Last updated 06/20/2022
mysql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-azure-advisor-recommendations.md
Title: Azure Advisor for MySQL
description: Learn about Azure Advisor recommendations for MySQL. --++ Last updated 06/20/2022
mysql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-backup.md
description: Learn about automatic backups and restoring your Azure Database for
--++ Last updated 06/20/2022
mysql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-business-continuity.md
description: Learn about business continuity (point-in-time restore, data center
--++ Last updated 06/20/2022
mysql Concepts Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-compatibility.md
description: This article describes the MySQL drivers and management tools that
--++ Last updated 06/20/2022
mysql Concepts Connect To A Gateway Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connect-to-a-gateway-node.md
Title: Azure Database for MySQL managing updates and upgrades description: Learn which versions of the MySQL server are supported in the Azure Database for MySQL service.--++
mysql Concepts Connection Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connection-libraries.md
description: This article lists each library or driver that client programs can
--++ Last updated 06/20/2022
mysql Concepts Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity.md
keywords: mysql connection,connection string,connectivity issues,transient error
--++ Last updated 06/20/2022
mysql Concepts Data Access And Security Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-access-and-security-vnet.md
description: 'Describes how VNet service endpoints work for your Azure Database
--++ Last updated 06/20/2022
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-in-replication.md
description: Learn about using Data-in Replication to synchronize from an extern
--++ Last updated 06/20/2022
mysql Concepts Database Application Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-database-application-development.md
description: Introduces design considerations that a developer should follow whe
--++ Last updated 06/20/2022
mysql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-firewall-rules.md
description: Learn about using firewall rules to enable connections to your Azur
--++ Last updated 06/20/2022
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-high-availability.md
description: This article provides information on high availability in Azure Dat
--++ Last updated 06/20/2022
mysql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-limits.md
description: This article describes limitations in Azure Database for MySQL, suc
--++ Last updated 06/20/2022
mysql Concepts Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-performance-recommendations.md
description: This article describes the Performance Recommendation feature in Az
--++ Last updated 06/20/2022
mysql Concepts Planned Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-planned-maintenance-notification.md
description: This article describes the Planned maintenance notification feature
--++ Last updated 06/20/2022
mysql Concepts Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-pricing-tiers.md
description: Learn about the various service tiers for Azure Database for MySQL
--++ Last updated 06/20/2022
mysql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-query-performance-insight.md
description: This article describes the Query Performance Insight feature in Azu
--++ Last updated 06/20/2022
mysql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-query-store.md
description: Learn about the Query Store feature in Azure Database for MySQL to
--++ Last updated 06/20/2022
mysql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-read-replicas.md
description: 'Learn about read replicas in Azure Database for MySQL: choosing re
--++ Last updated 06/20/2022
mysql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-security.md
description: An overview of the security features in Azure Database for MySQL.
--++ Last updated 06/20/2022
mysql Concepts Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-server-logs.md
description: Describes the slow query logs available in Azure Database for MySQL
--++ Last updated 06/20/2022
mysql Concepts Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-servers.md
description: This topic provides considerations and guidelines for working with
--++ Last updated 06/20/2022
mysql Connect Cpp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-cpp.md
ms.devlang: cpp--++ Last updated 06/20/2022
mysql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-csharp.md
ms.devlang: csharp--++ Last updated 06/20/2022
mysql Connect Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-go.md
description: This quickstart provides several Go code samples you can use to con
--++ ms.devlang: golang Last updated 05/03/2023
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-nodejs.md
ms.devlang: javascript--++ Last updated 05/03/2023
mysql Connect Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-php.md
description: This quickstart provides several PHP code samples you can use to co
--++ Last updated 06/20/2022
mysql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-python.md
Title: 'Quickstart: Connect using Python - Azure Database for MySQL'
description: This quickstart provides several Python code samples you can use to connect and query data from Azure Database for MySQL. --++ ms.devlang: python
mysql Connect Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-ruby.md
Title: 'Quickstart: Connect using Ruby - Azure Database for MySQL'
description: This quickstart provides several Ruby code samples you can use to connect and query data from Azure Database for MySQL. --++ ms.devlang: ruby
mysql How To Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-alert-on-metric.md
Title: Configure metric alerts - Azure portal - Azure Database for MySQL
description: This article describes how to configure and access metric alerts for Azure Database for MySQL from the Azure portal. --++ Last updated 06/20/2022
mysql How To Auto Grow Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-cli.md
description: This article describes how you can enable auto grow storage using t
--++ Last updated 06/20/2022
mysql How To Auto Grow Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-portal.md
Title: Auto grow storage - Azure portal - Azure Database for MySQL
description: This article describes how you can enable auto grow storage for Azure Database for MySQL using Azure portal --++ Last updated 06/20/2022
mysql How To Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-powershell.md
description: This article describes how you can enable auto grow storage using P
--++ Last updated 06/20/2022
mysql How To Configure Audit Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-audit-logs-cli.md
Title: Access audit logs - Azure CLI - Azure Database for MySQL
description: This article describes how to configure and access the audit logs in Azure Database for MySQL from the Azure CLI. --++ Last updated 06/20/2022
mysql How To Configure Audit Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-audit-logs-portal.md
Title: Access audit logs - Azure portal - Azure Database for MySQL
description: This article describes how to configure and access the audit logs in Azure Database for MySQL from the Azure portal. --++ Last updated 06/20/2022
mysql How To Configure Server Logs In Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-logs-in-cli.md
description: This article describes how to access the slow query logs in Azure D
--++ ms.devlang: azurecli Last updated 06/20/2022
mysql How To Configure Server Parameters Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-parameters-using-cli.md
Title: Configure server parameters - Azure CLI - Azure Database for MySQL
description: This article describes how to configure the service parameters in Azure Database for MySQL using the Azure CLI command line utility. --++ ms.devlang: azurecli Last updated 06/20/2022
mysql How To Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-parameters-using-powershell.md
Title: Configure server parameters - Azure PowerShell - Azure Database for MySQL
description: This article describes how to configure the service parameters in Azure Database for MySQL using PowerShell. --++ ms.devlang: azurepowershell Last updated 06/20/2022
mysql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-sign-in-azure-ad-authentication.md
description: Learn about how to set up Azure Active Directory (Azure AD) for aut
--++ Last updated 06/20/2022
mysql How To Configure Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-ssl.md
Title: Configure SSL - Azure Database for MySQL
description: Instructions for how to properly configure Azure Database for MySQL and associated applications to correctly use SSL connections --++ ms.devlang: csharp, golang, java, javascript, php, python, ruby
mysql How To Connect Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connect-webapp.md
Title: Connect to Azure App Service - Azure Database for MySQL
description: Instructions for how to properly connect an existing Azure App Service to Azure Database for MySQL --++ Last updated 06/20/2022
mysql How To Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connect-with-managed-identity.md
description: Learn about how to connect and authenticate using Managed Identity
--++ Last updated 05/03/2023
mysql How To Connection String Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connection-string-powershell.md
Title: Generate a connection string with PowerShell - Azure Database for MySQL
description: This article provides an Azure PowerShell example to generate a connection string for connecting to Azure Database for MySQL. --++ Last updated 06/20/2022
mysql How To Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connection-string.md
Title: Connection strings - Azure Database for MySQL
description: This document lists the currently supported connection strings for applications to connect with Azure Database for MySQL, including ADO.NET (C#), JDBC, Node.js, ODBC, PHP, Python, and Ruby. --++ Last updated 06/20/2022
mysql How To Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-in-replication.md
description: This article describes how to set up Data-in Replication for Azure
--++ Last updated 05/03/2023
mysql How To Manage Firewall Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-firewall-using-cli.md
Title: Manage firewall rules - Azure CLI - Azure Database for MySQL
description: This article describes how to create and manage Azure Database for MySQL firewall rules using Azure CLI command-line. --++ ms.devlang: azurecli Last updated 06/20/2022
mysql How To Manage Firewall Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-firewall-using-portal.md
Title: Manage firewall rules - Azure portal - Azure Database for MySQL
description: Create and manage Azure Database for MySQL firewall rules using the Azure portal --++ Last updated 06/20/2022
mysql How To Manage Single Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-single-server-cli.md
description: Learn how to manage an Azure Database for MySQL server from the Azu
--++ Last updated 06/20/2022
mysql How To Migrate Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-migrate-online.md
Title: Minimal-downtime migration - Azure Database for MySQL
description: This article describes how to perform a minimal-downtime migration of a MySQL database to Azure Database for MySQL. --++ Last updated 06/20/2022
mysql How To Migrate Rds Mysql Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-migrate-rds-mysql-workbench.md
Title: Migrate Amazon RDS for MySQL to Azure Database for MySQL using MySQL Workbench description: This article describes how to migrate Amazon RDS for MySQL to Azure Database for MySQL by using the MySQL Workbench Migration Wizard.--++
mysql How To Move Regions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-move-regions-portal.md
Title: Move Azure regions - Azure portal - Azure Database for MySQL
description: Move an Azure Database for MySQL server from one Azure region to another using a read replica and the Azure portal. --++ Last updated 06/20/2022
mysql How To Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-read-replicas-cli.md
Title: Manage read replicas - Azure CLI, REST API - Azure Database for MySQL
description: Learn how to set up and manage read replicas in Azure Database for MySQL using the Azure CLI or REST API. --++ Last updated 06/20/2022
mysql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-read-replicas-portal.md
Title: Manage read replicas - Azure portal - Azure Database for MySQL
description: Learn how to set up and manage read replicas in Azure Database for MySQL using the Azure portal. --++ Last updated 06/20/2022
mysql How To Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-read-replicas-powershell.md
Title: Manage read replicas - Azure PowerShell - Azure Database for MySQL
description: Learn how to set up and manage read replicas in Azure Database for MySQL using PowerShell. --++ Last updated 06/20/2022
mysql How To Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-redirection.md
Title: Connect with redirection - Azure Database for MySQL description: This article describes how you can configure your application to connect to Azure Database for MySQL with redirection.--++ Last updated 05/03/2023
mysql How To Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-cli.md
Title: Restart server - Azure CLI - Azure Database for MySQL
description: This article describes how you can restart an Azure Database for MySQL server using the Azure CLI. --++ Last updated 06/20/2022
mysql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-portal.md
Title: Restart server - Azure portal - Azure Database for MySQL
description: This article describes how you can restart an Azure Database for MySQL server using the Azure portal. --++ Last updated 06/20/2022
mysql How To Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-powershell.md
Title: Restart server - Azure PowerShell - Azure Database for MySQL
description: This article describes how you can restart an Azure Database for MySQL server using PowerShell. --++ Last updated 06/20/2022
mysql How To Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-dropped-server.md
Title: Restore a deleted Azure Database for MySQL server
description: This article describes how to restore a deleted server in Azure Database for MySQL using the Azure portal. --++ Last updated 06/20/2022
mysql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-cli.md
Title: Backup and restore - Azure CLI - Azure Database for MySQL
description: Learn how to backup and restore a server in Azure Database for MySQL by using the Azure CLI. --++ ms.devlang: azurecli Last updated 06/20/2022
mysql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-portal.md
Title: Backup and restore - Azure portal - Azure Database for MySQL
description: This article describes how to restore a server in Azure Database for MySQL using the Azure portal. --++ Last updated 06/20/2022
mysql How To Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-powershell.md
Title: Backup and restore - Azure PowerShell - Azure Database for MySQL
description: Learn how to backup and restore a server in Azure Database for MySQL by using Azure PowerShell. --++ ms.devlang: azurepowershell Last updated 06/20/2022
mysql How To Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-server-parameters.md
Title: Configure server parameters - Azure portal - Azure Database for MySQL
description: This article describes how to configure MySQL server parameters in Azure Database for MySQL using the Azure portal. --++ Last updated 06/20/2022
mysql How To Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-common-connection-issues.md
description: Learn how to troubleshoot connection issues to Azure Database for M
keywords: mysql connection,connection string,connectivity issues,transient error,connection error --++ Last updated 06/20/2022
mysql How To Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-common-errors.md
Title: Troubleshoot common errors - Azure Database for MySQL description: Learn how to troubleshoot common migration errors encountered by users new to the Azure Database for MySQL service-++ - Last updated 06/20/2022
mysql How To Troubleshoot Connectivity Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-connectivity-issues.md
Title: Troubleshoot connectivity issues in Azure Database for MySQL
description: Learn how to troubleshoot connectivity issues in Azure Database for MySQL. --++ Last updated 07/22/2022
mysql How To Troubleshoot High Cpu Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-high-cpu-utilization.md
Title: Troubleshoot high CPU utilization in Azure Database for MySQL
description: Learn how to troubleshoot high CPU utilization in Azure Database for MySQL. --++ Last updated 06/20/2022
mysql How To Troubleshoot Low Memory Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-low-memory-issues.md
Title: Troubleshoot low memory issues in Azure Database for MySQL
description: Learn how to troubleshoot low memory issues in Azure Database for MySQL. --++ Last updated 06/20/2022
mysql How To Troubleshoot Query Performance New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-query-performance-new.md
Title: Troubleshoot query performance in Azure Database for MySQL
description: Learn how to troubleshoot query performance in Azure Database for MySQL. --++ Last updated 06/20/2022
mysql How To Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-query-performance.md
Title: Profile query performance - Azure Database for MySQL
description: Learn how to profile query performance in Azure Database for MySQL by using EXPLAIN. --++ Last updated 06/20/2022
mysql How To Troubleshoot Sys Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-sys-schema.md
Title: Use the sys_schema - Azure Database for MySQL
description: Learn how to use the sys_schema to find performance issues and maintain databases in Azure Database for MySQL. --++ Last updated 06/20/2022
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/overview.md
Title: Overview - Azure Database for MySQL description: Learn about the Azure Database for MySQL service, a relational database service in the Microsoft cloud based on the MySQL Community Edition.-++ - Last updated 06/20/2022
mysql Quickstart Create Mysql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-arm-template.md
Title: 'Quickstart: Create an Azure Database for MySQL - ARM template'
description: In this Quickstart, learn how to create an Azure Database for MySQL server with virtual network integration, by using an Azure Resource Manager template. --++ Last updated 06/20/2022
mysql Quickstart Create Mysql Server Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-cli.md
Title: 'Quickstart: Create a server - Azure CLI - Azure Database for MySQL'
description: This quickstart describes how to use the Azure CLI to create an Azure Database for MySQL server in an Azure resource group. --++ ms.devlang: azurecli Last updated 06/20/2022
mysql Quickstart Create Mysql Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-powershell.md
Title: 'Quickstart: Create a server - Azure PowerShell - Azure Database for MySQ
description: This quickstart describes how to use PowerShell to create an Azure Database for MySQL server in an Azure resource group. --++ ms.devlang: azurepowershell Last updated 06/20/2022
mysql Quickstart Create Server Up Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-server-up-azure-cli.md
Title: 'Quickstart: Create Azure Database for MySQL using az mysql up'
description: Quickstart guide to create Azure Database for MySQL server using Azure CLI (command line interface) up command. --++ ms.devlang: azurecli Last updated 06/20/2022
mysql Reference Stored Procedures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/reference-stored-procedures.md
description: Learn which stored procedures in Azure Database for MySQL are usefu
--++ Last updated 06/20/2022
mysql Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/sample-scripts-azure-cli.md
ms.devlang: azurecli--++ Last updated 06/20/2022
mysql Sample Scripts Java Connection Pooling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/sample-scripts-java-connection-pooling.md
Title: Java samples to illustrate connection pooling description: This article lists Java samples to illustrate connection pooling.--++
mysql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/security-controls-policy.md
description: Lists Azure Policy Regulatory Compliance controls available for Azu
--++ Last updated 08/03/2023
mysql Single Server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/single-server-overview.md
Title: Overview - Azure Database for MySQL single server description: Learn about the Azure Database for MySQL single server, a relational database service in the Microsoft cloud based on the MySQL Community Edition.-++ - Last updated 06/20/2022
mysql Tutorial Design Database Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-cli.md
description: This tutorial explains how to create and manage Azure Database for
--++ ms.devlang: azurecli Last updated 06/20/2022
mysql Tutorial Design Database Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-portal.md
description: This tutorial explains how to create and manage Azure Database for
--++ Last updated 06/20/2022
mysql Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-powershell.md
description: This tutorial explains how to create and manage Azure Database for
--++ ms.devlang: azurepowershell Last updated 06/20/2022
mysql Tutorial Provision Mysql Server Using Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-provision-mysql-server-using-azure-resource-manager-templates.md
Title: 'Tutorial: Create Azure Database for MySQL - Azure Resource Manager templ
description: This tutorial explains how to provision and automate Azure Database for MySQL server deployments using Azure Resource Manager template. --++ Last updated 06/20/2022
mysql Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/videos.md
Title: Azure Database for MySQL Videos description: This page lists video content relevant for learning Azure Database for MySQL, MicrosoftΓÇÖs managed MySQL offering in Azure.--++
network-watcher Network Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-overview.md
Previously updated : 08/03/2023 Last updated : 08/10/2023
The **Alert** box on the right side of the page provides a view of all alerts ge
### Resource view
-The resource view helps you visualize how a resource is configured. The resource view is currently available for Azure Application Gateway, Azure Virtual WAN, and Azure Load Balancer. For example, to access the resource view of an application gateway, select the topology icon next to the application gateway name in the metrics grid view:
+The resource view helps you visualize how a resource is configured. For example, to access the resource view of an application gateway, select the topology icon next to the application gateway name in the metrics grid view:
:::image type="content" source="./media/network-insights-overview/access-resource-view.png" alt-text="Screenshot shows how to access the resource view of an application gateway in Azure Monitor network insights." lightbox="./media/network-insights-overview/access-resource-view.png":::
network-watcher Network Insights Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-topology.md
Previously updated : 08/08/2023 Last updated : 08/10/2023
Topology provides a visualization of the entire network for understanding networ
## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.-- An account with the necessary [RBAC permissions](required-rbac-permissions.md) to utilize the Network watcher capabilities.
+- An account with the necessary [RBAC permissions](required-rbac-permissions.md) to utilize the Network Watcher capabilities.
## Supported resource types
The following are the resource types supported by topology:
To view a topology, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com) with an account that has the necessary [permissions](required-rbac-permissions.md).
-2. Select **More services**.
-3. In the **All services** screen, enter **Monitor** in the **Filter services** search box and select it from the search result.
-4. Under **Insights**, select **Networks**.
-5. In the **Networks** screen that appears, select **Topology**.
-6. Select **Scope** to define the scope of the Topology.
-7. In the **Select scope** pane, select the list of **Subscriptions**, **Resource groups**, and **Locations** of the resources for which you want to view the topology. Select **Save**.
+
+1. In the search box at the top of the portal, enter ***Monitor***. Select **Monitor** from the search results.
+
+1. Under **Insights**, select **Networks**.
+
+1. In **Networks**, select **Topology**.
+
+1. Select **Scope** to define the scope of the Topology.
+
+1. In the **Select scope** pane, select the list of **Subscriptions**, **Resource groups**, and **Locations** of the resources for which you want to view the topology. Select **Save**.
:::image type="content" source="./media/network-insights-topology/topology-scope-inline.png" alt-text="Screenshot of selecting the scope of the topology." lightbox="./media/network-insights-topology/topology-scope-expanded.png":::

 The duration to render the topology may vary depending on the number of subscriptions selected.
-8. Select the [**Resource type**](#supported-resource-types) that you want to include in the topology and select **Apply**.
+
+1. Select the **Resource type** that you want to include in the topology and select **Apply**. See [supported resource types](#supported-resource-types).
The topology containing the resources for the specified scope and resource type appears.
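For readers who want similar data outside the portal, Network Watcher also exposes a topology query through the Azure CLI. The following is a minimal, illustrative sketch using a placeholder resource group name; it returns the classic Network Watcher topology as JSON rather than the Azure Monitor insights visualization shown above.

```azurecli
# Illustrative only; myResourceGroup is a placeholder name.
# Returns the Network Watcher topology (resources and their relationships)
# for the resource group as JSON.
az network watcher show-topology --resource-group myResourceGroup
```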
Each edge of the topology represents an association between each of the resource
## Add regions
-You can add regions that aren't part of the existing topology. The number of regions that aren't part of the existing topology are displayed.
-To add a region, follow these steps:
+You can add regions that aren't part of the existing topology; the number of such regions is displayed. To add a region, follow these steps:
1. Hover on **Regions** under **Azure Regions**.
-2. From the list of **Hidden Resources**, select the regions to be added and select **Add to View**.
+
+2. From the list of **Hidden Resources**, select the regions that you want to add and select **Add to View**.
:::image type="content" source="./media/network-insights-topology/add-resources-inline.png" alt-text="Screenshot of the add resources and regions pane." lightbox="./media/network-insights-topology/add-resources-expanded.png":::
Drilling down into Azure resources such as Application Gateways and Firewalls di
## Integration with diagnostic tools
-When you drill down to a VM within the topology, the summary pane contains the **Insights + Diagnostics** section from where you can find the next hop.
+When you drill down to a VM within the topology, you can see details about the VM in the summary tab.
++
+Follow these steps to find the next hop:
+
+1. Select **Insights + Diagnostics** tab, and then select **Next Hop**.
+
+ :::image type="content" source="./media/network-insights-topology/resource-insights-diagnostics.png" alt-text="Screenshot of the Insights and Diagnostics tab of a virtual machine in the Topology page." lightbox="./media/network-insights-topology/resource-insights-diagnostics.png":::
- :::image type="content" source="./media/network-insights-topology/resource-summary-inline.png" alt-text="Screenshot of the summary and insights of each resource." lightbox="./media/network-insights-topology/resource-summary-expanded.png":::
+1. Enter the destination IP address and then select **Check Next Hop**.
-Follow these steps to find the next hop.
+ :::image type="content" source="./media/network-insights-topology/next-hop-check.png" alt-text="Screenshot of using Next hop check from within the Insights and Diagnostics tab of a virtual machine in the Topology page." lightbox="./media/network-insights-topology/next-hop-check.png":::
-1. Click **Next hop** and enter the destination IP address.
-2. Select **Check Next Hop**. The [Next hop](network-watcher-next-hop-overview.md) checks if the destination IP address is reachable from the source VM.
+1. The Next hop capability of Network Watcher checks if the destination IP address is reachable from the source VM. The result shows the Next hop type and route table used to route traffic from the VM. For more information, see [Next hop](network-watcher-next-hop-overview.md).
- :::image type="content" source="./media/network-insights-topology/next-hop-inline.png" alt-text="Screenshot of the next hop option in the summary and insights tab." lightbox="./media/network-insights-topology/next-hop-expanded.png":::
+ :::image type="content" source="./media/network-insights-topology/next-hop-result.png" alt-text="Screenshot of the next hop option in the summary and insights tab." lightbox="./media/network-insights-topology/next-hop-result.png":::
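If you prefer the command line, a comparable next hop check can be run with the Azure CLI. The following is a minimal sketch; the resource group, VM name, and IP addresses are hypothetical:

```bash
# Minimal sketch; names and addresses are hypothetical.
az network watcher show-next-hop \
  --resource-group myResourceGroup \
  --vm myVM \
  --source-ip 10.0.0.4 \
  --dest-ip 10.0.1.10
```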
## Next steps
-[Learn more](./connection-monitor-overview.md) about connectivity related metrics.
+To learn more about connectivity related metrics, see [Connection monitor](./connection-monitor-overview.md).
networking Azure Network Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/azure-network-latency.md
The latency measurements are collected from Azure cloud regions worldwide, and c
The monthly Percentile P50 round trip times between Azure regions for a 30-day window are shown in the following tabs. The latency is measured in milliseconds (ms).
-The current dataset was taken on *July 21st, 2023*, and it covers the 30-day period from *June 21st, 2023* to *July 21st, 2023*.
+The current dataset was taken on *July 21, 2023*, and it covers the 30-day period from *June 21, 2023* to *July 21, 2023*.
+
+For readability, each table is split into tabs for groups of Azure regions. The tabs are organized by regions, and then by source region in the first column of each table. For example, the *East US* tab also shows the latency from all source regions to the two *East US* regions: *East US* and *East US 2*.
> [!IMPORTANT]
-> Monthly latency numbers across Azure regions do not change on a regular basis. You can expect an update of these tables every 6 to 9 months. Not all public Azure regions are listed in the tables below. When new regions come online, we will update this document as soon as latency data is available.
+> Monthly latency numbers across Azure regions do not change on a regular basis. You can expect an update of these tables every 6 to 9 months. Not all public Azure regions are listed in the following tables. When new regions come online, we will update this document as soon as latency data is available.
> > You can perform VM-to-VM latency between regions using [test Virtual Machines](../virtual-network/virtual-network-test-latency.md) in your Azure subscription. #### [North America / South America](#tab/Americas)
-Listing of Americas regions including US, Canada, and Brazil.
+Latency tables for Americas regions including US, Canada, and Brazil.
+
+Use the following tabs to view latency statistics for each region.
#### [Europe](#tab/Europe)
-Listing of European regions.
+Latency tables for European regions.
-#### [Asia / Pacific](#tab/AsiaPacific)
+Use the following tabs to view latency statistics for each region.
-Listing of Asia / Pacific regions including Japan, Korea, India, and Australia.
+#### [Australia / Asia / Pacific](#tab/APAC)
+Latency tables for Australia, Asia, and Pacific regions, including Australia, Japan, Korea, and India.
+
+Use the following tabs to view latency statistics for each region.
#### [Middle East / Africa](#tab/MiddleEast)
-Listing of Middle East / Africa regions including UAE, South Africa, and Qatar.
+Latency tables for Middle East / Africa regions including UAE, South Africa, and Qatar.
+
+Use the following tabs to view latency statistics for each region.
#### [West US](#tab/WestUS/Americas)
-|Source|West US|West US 2|West US 3|
+|Source region |West US|West US 2|West US 3|
||||| |Australia Central|144|164|158| |Australia Central 2|144|164|158|
Listing of Middle East / Africa regions including UAE, South Africa, and Qatar.
#### [Central US](#tab/CentralUS/Americas)
-|Source|North Central US|Central US|South Central US|West Central US|
+|Source region|North Central US|Central US|South Central US|West Central US|
|||||| |Australia Central|193|180|175|167| |Australia Central 2|193|181|176|167|
Listing of Middle East / Africa regions including UAE, South Africa, and Qatar.
#### [East US](#tab/EastUS/Americas)
-|Source|East US|East US 2|
+|Source region|East US|East US 2|
|||| |Australia Central|213|208| |Australia Central 2|213|209|
Listing of Middle East / Africa regions including UAE, South Africa, and Qatar.
#### [Canada / Brazil](#tab/Canada/Americas)
-|Source|Brazil</br>South|Canada</br>Central|Canada</br>East|
+|Source region|Brazil</br>South|Canada</br>Central|Canada</br>East|
||||| |Australia Central|323|204|212| |Australia Central 2|323|204|212|
Listing of Middle East / Africa regions including UAE, South Africa, and Qatar.
|West US 2|182|64|73| |West US 3|162|66|73|
-#### [Australia](#tab/Australia/AsiaPacific)
+#### [Australia](#tab/Australia/APAC)
| Source | Australia</br>Central | Australia</br>Central 2 | Australia</br>East | Australia</br>Southeast | |--|-||-||
Listing of Middle East / Africa regions including UAE, South Africa, and Qatar.
| West US 2 | 164 | 164 | 160 | 172 | | West US 3 | 158 | 158 | 156 | 167 |
-#### [Japan](#tab/Japan/AsiaPacific)
+#### [Japan](#tab/Japan/APAC)
-|Source|Japan East|Japan West|
+|Source region|Japan East|Japan West|
|||| |Australia Central|127|134| |Australia Central 2|127|135|
Listing of Middle East / Africa regions including UAE, South Africa, and Qatar.
#### [Western Europe](#tab/WesternEurope/Europe)
-|Source|France Central|France South|West Europe|
+|Source region|France Central|France South|West Europe|
||||| |Australia Central|238|227|245| |Australia Central 2|238|227|245|
Listing of Middle East / Africa regions including UAE, South Africa, and Qatar.
#### [Central Europe](#tab/CentralEurope/Europe)
-|Source|Germany North|Germany West Central|Switzerland North|Switzerland West|
+|Source region|Germany North|Germany West Central|Switzerland North|Switzerland West|
|||||| |Australia Central|248|242|237|234| |Australia Central 2|248|242|237|234|
Listing of Middle East / Africa regions including UAE, South Africa, and Qatar.
#### [Norway / Sweden](#tab/NorwaySweden/Europe)
-|Source|Norway East|Norway West|Sweden Central|
+|Source region|Norway East|Norway West|Sweden Central|
||||| |Australia Central|262|258|265| |Australia Central 2|262|258|266|
Listing of Middle East / Africa regions including UAE, South Africa, and Qatar.
#### [UK / North Europe](#tab/UKNorthEurope/Europe)
-|Source|UK South|UK West|North Europe|
+|Source region|UK South|UK West|North Europe|
||||| |Australia Central|243|245|251| |Australia Central 2|243|245|251|
Listing of Middle East / Africa regions including UAE, South Africa, and Qatar.
|West US 3|124|127|115|
-#### [Korea](#tab/Korea/AsiaPacific)
+#### [Korea](#tab/Korea/APAC)
-|Source|Korea Central|Korea South|
+|Source region|Korea Central|Korea South|
|||| |Australia Central|152|144| |Australia Central 2|152|144|
Listing of Middle East / Africa regions including UAE, South Africa, and Qatar.
|West US 3|135|124|
-#### [India](#tab/India/AsiaPacific)
+#### [India](#tab/India/APAC)
-|Source|Central India|West India|South India|
+|Source region|Central India|West India|South India|
||||| |Australia Central|145|145|126| |Australia Central 2|144|145|126|
Listing of Middle East / Africa regions including UAE, South Africa, and Qatar.
|West US 2|210|211|195| |West US 3|232|233|217|
-#### [Asia](#tab/Asia/AsiaPacific)
+#### [Asia](#tab/Asia/APAC)
-|Source|East Asia|Southeast Asia|
+|Source region|East Asia|Southeast Asia|
|||| |Australia Central|125|94| |Australia Central 2|125|94|
Listing of Middle East / Africa regions including UAE, South Africa, and Qatar.
#### [UAE / Qatar](#tab/uae-qatar/MiddleEast)
-|Source|Qatar Central|UAE Central|UAE North|
+|Source region|Qatar Central|UAE Central|UAE North|
||||| |Australia Central|191|170|170| |Australia Central 2|191|170|171|
Listing of Middle East / Africa regions including UAE, South Africa, and Qatar.
### [South Africa](#tab/southafrica/MiddleEast)
-|Source|South Africa North|South Africa West|
+|Source region|South Africa North|South Africa West|
|||| |Australia Central|384|399| |Australia Central 2|384|399|
notification-hubs Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/android-sdk.md
- Title: Send push notifications to Android using Azure Notification Hubs and Firebase SDK version 1.0.0-preview1 description: In this tutorial, you learn how to use Azure Notification Hubs and Google Firebase Cloud Messaging to send push notifications to Android devices (version 1.0.0-preview1).
This tutorial shows how to use Azure Notification Hubs and the updated version of the Firebase Cloud Messaging (FCM) SDK (version 1.0.0-preview1) to send push notifications to an Android application. In this tutorial, you create a blank Android app that receives push notifications using Firebase Cloud Messaging (FCM).
+> [!NOTE]
+> For information about Firebase Cloud Messaging deprecation and migration steps, see [Google Firebase Cloud Messaging migration](notification-hubs-gcm-to-fcm.md).
You can download the completed code for this tutorial from [GitHub](https://github.com/Azure/azure-notificationhubs-android/tree/v1-preview/notification-hubs-test-app-refresh).
The following is a list of some other tutorials for sending notifications:
- Azure Notification Hubs Java SDK: See [How to use Notification Hubs from Java](notification-hubs-java-push-notification-tutorial.md) for sending notifications from Java. This has been tested in Eclipse for Android Development. - PHP: [How to use Notification Hubs from PHP](notification-hubs-php-push-notification-tutorial.md).+
notification-hubs Configure Google Firebase Cloud Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/configure-google-firebase-cloud-messaging.md
ms.lastreviewed: 03/25/2019
This article shows you how to configure Google Firebase Cloud Messaging (FCM) settings for an Azure notification hub using the Azure portal.
+> [!NOTE]
+> For information about Firebase Cloud Messaging deprecation and migration steps, see [Google Firebase Cloud Messaging migration](notification-hubs-gcm-to-fcm.md).
## Prerequisites
The following procedure describes the steps to configure Google Firebase Cloud M
## Next steps For a tutorial with step-by-step instructions for sending notifications to Android devices by using Azure Notification Hubs and Google Firebase Cloud Messaging, see [Send push notifications to Android devices by using Notification Hubs and Google FCM](notification-hubs-android-push-notification-google-fcm-get-started.md).+
notification-hubs Configure Notification Hub Portal Pns Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/configure-notification-hub-portal-pns-settings.md
Azure Notification Hubs provides a push engine that's easy to use and that scale
In this quickstart, you'll use the platform notification system (PNS) settings in Notification Hubs to set up push notifications on multiple platforms. The quickstart shows you the steps to take in the Azure portal. [Google Firebase Cloud Messaging](?tabs=azure-cli#google-firebase-cloud-messaging-fcm) includes instructions for using the Azure CLI.
+> [!NOTE]
+> For information about Firebase Cloud Messaging deprecation and migration steps, see [Google Firebase Cloud Messaging migration](notification-hubs-gcm-to-fcm.md).
If you haven't already created a notification hub, create one now. For more information, see [Create an Azure notification hub in the Azure portal](create-notification-hub-portal.md) or [Create an Azure notification hub using the Azure CLI](create-notification-hub-azure-cli.md).
To learn more about how to push notifications to various platforms, see these tu
* [Send notifications to a UWP app running on a Windows device](notification-hubs-windows-store-dotnet-get-started-wns-push-notification.md) * [Send notifications to a Windows Phone 8 app by using MPNS](notification-hubs-windows-mobile-push-notifications-mpns.md) * [Send notifications by using Notification Hubs and Baidu cloud push](notification-hubs-baidu-china-android-notifications-get-started.md)+
notification-hubs Notification Hubs Gcm To Fcm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-gcm-to-fcm.md
ms.lastreviewed: 04/10/2019
[!INCLUDE [notification-hubs-firebase-deprecation](../../includes/notification-hubs-firebase-deprecation.md)]
-## Current state
-When Google announced its migration from Google Cloud Messaging (GCM) to Firebase Cloud Messaging (FCM), push services like ours had to adjust how we sent notifications to Android devices to accommodate the change.
-
-We updated our service backend, then published updates to our API and SDKs as needed. With our implementation, we made the decision to maintain compatibility with existing GCM notification schemas to minimize customer impact. This means that we currently send notifications to Android devices using FCM in FCM Legacy Mode. Ultimately, we want to add true support for FCM, including the new features and payload format. That is a longer-term change and the current migration is focused on maintaining compatibility with existing applications and SDKs. You can use either the GCM or FCM SDKs in your app (along with our SDK) and we make sure the notification is sent correctly.
-
-Some customers recently received an email from Google warning about apps using a GCM endpoint for notifications. This was just a warning, and nothing is broken; your app's Android notifications are still sent to Google for processing and Google still processes them. Some customers who specified the GCM endpoint explicitly in their service configuration were still using the deprecated endpoint. We had already identified this gap and were working on fixing the issue when Google sent the email.
-
-We replaced that deprecated endpoint and the fix is deployed.
-
-## Going forward
-
-Google's FCM FAQ says you don't have to do anything. In the [FCM FAQ](https://developers.google.com/cloud-messaging/faq), Google said "client SDKs and GCM tokens will continue to work indefinitely. However, you won't be able to target the latest version of Google Play Services in your Android app unless you migrate to FCM."
-
-If your app uses the GCM library, go ahead and follow Google's instructions to upgrade to the FCM library in your app. Our SDK is compatible with either, so you won't have to update anything in your app on our side (as long as you're up to date with our SDK version).
-
-## Questions and answers
-
-Here's some answers to common questions we've heard from customers:
-
-**Q:** What do I need to do to be compatible by the cutoff date (Google's current cutoff date is May 29th and may change)?
-
-**A:** Nothing. We will maintain compatibility with existing GCM notification schema. Your GCM key will continue to work as normal as will any GCM SDKs and libraries used by your application.
-
-If/when you decide to upgrade to the FCM SDKs and libraries to take advantage of new features, your GCM key will still work. You may switch to using an FCM key if you wish, but ensure you are adding Firebase to your existing GCM project when creating the new Firebase project. This will guarantee backward compatibility with your customers that are running older versions of the app that still use GCM SDKs and libraries.
-
-If you are creating a new FCM project and not attaching to the existing GCM project, once you update Notification Hubs with the new FCM secret you will lose the ability to push notifications to your current app installations, since the new FCM key has no link to the old GCM project.
-
-**Q:** Why am I getting this email about old GCM endpoints being used? What do I have to do?
-
-**A:** Nothing. We have been migrating to the new endpoints and will be finished soon, so no change is necessary. Nothing is broken, our one missed endpoint simply caused warning messages from Google.
-
-**Q:** How can I transition to the new FCM SDKs and libraries without breaking existing users?
-
-A: Upgrade at any time. Google has not yet announced any deprecation of existing GCM SDKs and libraries. To ensure you don't break push notifications to your existing users, make sure when you create the new Firebase project you are associating with your existing GCM project. This will ensure new Firebase secrets will work for users running the older versions of your app with GCM SDKs and libraries, as well as new users of your app with FCM SDKs and libraries.
-
-**Q:** When can I use new FCM features and schemas for my notifications?
-
-**A:** Once we publish an update to our API and SDKs, stay tuned; we expect to have something for you in the coming months.
notification-hubs Notification Hubs Push Notification Fixer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-push-notification-fixer.md
If you inadvertently upload different types of certificates to the same hub, you
### FCM configuration
+> [!NOTE]
+> For information about Firebase Cloud Messaging deprecation and migration steps, see [Google Firebase Cloud Messaging migration](notification-hubs-gcm-to-fcm.md).
1. Ensure that the *server key* you obtained from Firebase matches the server key you registered in the Azure portal.
notification-hubs Notification Hubs Push Notification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-push-notification-overview.md
Azure Notification Hubs provide an easy-to-use and scaled-out push engine that e
- Notify users of enterprise events such as new messages and work items. - Send codes for multi-factor authentication.
+> [!NOTE]
+> For information about Firebase Cloud Messaging deprecation and migration steps, see [Google Firebase Cloud Messaging migration](notification-hubs-gcm-to-fcm.md).
## What are push notifications?
Notification Hubs is your ready-to-use push engine with the following advantages
- Scheduled push: You can schedule notifications to be sent anytime. - Direct push: You can skip registering devices with the Notification Hubs service and directly batch push to a list of device handles. - Personalized push: Device push variables help you send device-specific personalized push notifications with customized key-value pairs.-- **Rich telemetry**
- - General push, device, error, and operation telemetry are available both in the Azure portal and programmatically.
- - Per-message telemetry tracks each push from your initial request call to the Notification Hubs service successfully sending the pushes.
- - Platform Notification System feedback communicates all feedback from PNSes to assist in debugging.
- **Scalability** - Send fast messages to millions of devices without re-architecting or device sharding. - **Security**
Notification Hubs is your ready-to-use push engine with the following advantages
Get started with creating and using a notification hub by following the [Tutorial: Push notifications to mobile applications](notification-hubs-android-push-notification-google-fcm-get-started.md). [0]: ./media/notification-hubs-overview/registration-diagram.png+ [1]: ./media/notification-hubs-overview/notification-hub-diagram.png [How customers are using Notification Hubs]: https://azure.microsoft.com/services/notification-hubs+ [Notification Hubs tutorials and guides]: ./index.yml+ [iOS]: ./notification-hubs-push-notification-fixer.md+ [Android]: ./notification-hubs-android-push-notification-google-gcm-get-started.md+ [Windows Universal]: ./notification-hubs-windows-store-dotnet-get-started-wns-push-notification.md+ [Windows Phone]: ./notification-hubs-windows-mobile-push-notifications-mpns.md+ [Kindle]: ./notification-hubs-android-push-notification-google-fcm-get-started.md+ [Xamarin.iOS]: ./xamarin-notification-hubs-ios-push-notification-apns-get-started.md+ [Xamarin.Android]: ./xamarin-notification-hubs-push-notifications-android-gcm.md+ [Microsoft.WindowsAzure.Messaging.NotificationHub]: /previous-versions/azure/reference/dn339221(v=azure.100)+ [Microsoft.ServiceBus.Notifications]: /previous-versions/azure/+ [App Service Mobile Apps]: /previous-versions/azure/app-service-mobile/app-service-mobile-value-prop+ [templates]: notification-hubs-templates-cross-platform-push-messages.md+ [Azure portal]: https://portal.azure.com
-[tags]: (https://msdn.microsoft.com/library/azure/dn530749.aspx)
+
+[tags]: (https://msdn.microsoft.com/library/azure/dn530749.aspx)
+
operator-nexus Concepts Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-compute.md
Title: "Azure Operator Nexus: Compute"
-description: Overview of compute resources for Azure Operator Nexus.
+ Title: Azure Operator Nexus compute
+description: Get an overview of compute resources for Azure Operator Nexus.
# Azure Operator Nexus compute
-Azure Operator Nexus is built on some basic constructs like compute servers, storage appliance, and network fabric devices. These compute servers, also referred to as BareMetal Machines (BMMs), represent the physical machines in the rack. They run the CBL-Mariner operating system and provide closed integration support for high-performance workloads.
+Azure Operator Nexus is built on basic constructs like compute servers, storage appliances, and network fabric devices. These compute servers, also called bare-metal machines (BMMs), represent the physical machines on the rack. They run the CBL-Mariner operating system and provide closed integration support for high-performance workloads.
-These BareMetal Machine gets deployed as part of the Azure Operator Nexus automation suite and live as nodes in a Kubernetes cluster to serve various virtualized and containerized workloads in the ecosystem.
+These BMMs are deployed as part of the Azure Operator Nexus automation suite. They exist as nodes in a Kubernetes cluster to serve various virtualized and containerized workloads in the ecosystem.
-Each BareMetal Machine within an Azure Operator Nexus instance is represented as an Azure resource and Operators (end users) get access to perform various operations to manage its lifecycle like any other Azure resource.
+Each BMM in an Azure Operator Nexus instance is represented as an Azure resource. Operators get access to perform various operations to manage the BMM's lifecycle like any other Azure resource.
-## Key capabilities offered in Azure Operator Nexus compute
+## Key capabilities of Azure Operator Nexus compute
-- **NUMA Alignment** Nonuniform memory access (NUMA) alignment is a technique to optimize performance and resource utilization in multi-socket servers. It involves aligning memory and compute resources to reduce latency and improve data access within a server system. The strategic placement of software components and workloads in a NUMA-aware manner, Operators can enhance the performance of network functions, such as virtualized routers and firewalls. This placement leads to improved service delivery and responsiveness in their Telco cloud environments. By default, all the workloads deployed in an Azure Operator Nexus instance are NUMA-aligned.-- **CPU Pinning** CPU pinning is a technique to allocate specific CPU cores to dedicated tasks or workloads, ensuring consistent performance and resource isolation. Pinning critical network functions or real-time applications to specific CPU cores allows Operators to minimize latency and improve predictability in their infrastructure. This approach is useful in scenarios where strict quality-of-service requirements exist, ensuring that these tasks receive dedicated processing power for optimal performance. All of the virtual machines created for Virtual Network Function (VNF) or Containerized Network Function (CNF) workloads on Nexus compute are pinned to specific virtual cores. This pinning provides better performance and avoids CPU stealing.-- **CPU Isolation** CPU isolation provides a clear separation between the CPUs allocated for workloads from the CPUs allocated for control plane and platform activities. CPU isolation prevents interference and limits the performance predictability for critical workloads. By isolating CPU cores or groups of cores, we can mitigate the effect of noisy neighbors. It guarantees the required processing power for latency-sensitive applications. Azure Operator Nexus reserves a small set of CPUs for the host operating system and other platform applications. The remaining CPUs are available for running actual workloads.-- **Huge Page Support** Huge page usage in Telco workloads refers to the utilization of large memory pages, typically 2 MB or 1 GB in size, instead of the standard 4-KB pages. This approach helps reduce memory overhead and improves the overall system performance. It reduces the translation look-aside buffer (TLB) miss rate and improves memory access efficiency. Telco workloads that involve large data sets or intensive memory operations, such as network packet processing can benefit from huge page usage as it enhances memory performance and reduces memory-related bottlenecks. As a result, users see improved throughput and reduced latency. All virtual machines created on Azure Operator Nexus can make use of either 2 MB or 1-GB huge pages depending on the flavor of the virtual machine.-- **Dual Stack Support** Dual stack support refers to the ability of networking equipment and protocols to simultaneously handle both IPv4 and IPv6 traffic. With the depletion of available IPv4 addresses and the growing adoption of IPv6, dual stack support is crucial for seamless transition and coexistence between the two protocols. Telco operators utilize dual stack support to ensure compatibility, interoperability, and future-proofing of their networks, allowing them to accommodate both IPv4 and IPv6 devices and services while gradually transitioning towards full IPv6 deployment. Dual stack support ensures uninterrupted connectivity and smooth service delivery to customers regardless of their network addressing protocols. 
Azure Operator Nexus provides support for both IPV4 and IPV6 configuration across all layers of the stack.-- **Network Interface Cards** Computes in Azure Operator Nexus are designed to meet the requirements for running critical applications that are Telco-grade and can perform fast and efficient data transfer between servers and networks. Workloads can make use of SR-IOV (Single Root I/O Virtualization) that enables the direct assignment of physical I/O resources, such as network interfaces, to virtual machines. This direct assignment bypasses the hypervisor's virtual switch layer. This direct hardware access improves network throughput, reduces latency, and enables more efficient utilization of resources. It makes it an ideal choice for Operators running virtualized and containerized network functions.
+### NUMA alignment
-## BareMetal machine status
+Nonuniform memory access (NUMA) alignment is a technique to optimize performance and resource utilization in multiple-socket servers. It involves aligning memory and compute resources to reduce latency and improve data access within a server system.
-There are multiple properties, which reflects the operational state of BareMetal Machines. Some of these include:
-- Power state-- Ready state-- Cordon status-- Detailed status
+Through the strategic placement of software components and workloads in a NUMA-aware way, Operators can enhance the performance of network functions, such as virtualized routers and firewalls. This placement leads to improved service delivery and responsiveness in their telco cloud environments.
-_`Power state`_ field indicates the state as derived from BareMetal Controller (BMC). The state can be either 'On' or 'Off'.
+By default, all the workloads deployed in an Azure Operator Nexus instance are NUMA aligned.
-The _`Ready State`_ field provides an overall assessment of the BareMetal Machine readiness. It looks at a combination of Detailed Status, Power State and provisioning state of the resource to determine whether the BareMetal Machine is ready or not. When _Ready State_ is 'True', the BareMetal Machine is powered on, the _Detailed Status_ is 'Provisioned' and the node representing the BareMetal Machine has successfully joined the Undercloud Kubernetes cluster. If any of those conditions aren't met, the _Ready State_ is set to 'False'.
+### CPU pinning
-The _`Cordon State`_ reflects the ability to run any workloads on machine. Valid values include 'Cordoned' and 'Uncordoned'. "Cordoned' seizes creation of any new workloads on the machine, whereas "Uncordoned' ensures that workloads can now run on this BareMetal Machine.
+CPU pinning is a technique to allocate specific CPU cores to dedicated tasks or workloads, which helps ensure consistent performance and resource isolation. Pinning critical network functions or real-time applications to specific CPU cores allows operators to minimize latency and improve predictability in their infrastructure. This approach is useful in scenarios where strict quality-of-service requirements exist, because these tasks can receive dedicated processing power for optimal performance.
-The BareMetal Machine _`Detailed Status`_ field reflects the current status of the machine.
+All of the virtual machines created for virtual network function (VNF) or containerized network function (CNF) workloads on Azure Operator Nexus compute are pinned to specific virtual cores. This pinning provides better performance and avoids CPU stealing.
-- Preparing - Preparing for provisioning of the machine-- Provisioning - Provisioning in progress-- **Provisioned** - The OS is provisioned to the machine-- **Available** - Available to participate in the cluster-- **Error** - Unable to provision the machine
+### CPU isolation
-Bold indicates an end state status.
-_Preparing_ and _Provisioning_ are transitory states. _Available_ indicates the machine has successfully provisioned but is currently powered off.
+CPU isolation provides a clear separation between the CPUs allocated for workloads and the CPUs allocated for control plane and platform activities. CPU isolation prevents interference and limits the performance predictability for critical workloads. By isolating CPU cores or groups of cores, operators can mitigate the effect of noisy neighbors. It helps guarantee the required processing power for latency-sensitive applications.
+Azure Operator Nexus reserves a small set of CPUs for the host operating system and other platform applications. The remaining CPUs are available for running actual workloads.
-## BareMetal machine operations
+### Huge page support
-- **Update/Patch BareMetal Machine** Update the bare metal machine resource properties.-- **List/Show BareMetal Machine** Retrieve bare metal machine information.-- **Reimage BareMetal Machine** Reprovision a bare metal machine matching the image version being used across the Cluster.-- **Replace BareMetal Machine** Replace a bare metal machine as part of an effort to service the machine.-- **Restart BareMetal Machine** Reboots a bare metal machine.-- **Power Off BareMetal Machine** Power off a bare metal machine.-- **Start BareMetal Machine** Power on a bare metal machine.-- **Cordon BareMetal Machine** Prevents scheduling of workloads on the specified bare metal machine's Kubernetes node. Optionally allows for evacuation of the workloads from the node.-- **Uncordon BareMetal Machine** Allows scheduling of workloads on the specified bare metal machine's Kubernetes node.-- **BareMetalMachine Validate** Triggers hardware validation of a bare metal machine.-- **BareMetalMachine Run** Allows the customer to run a script specified directly in the input on the targeted bare metal machine.-- **BareMetalMachine Run Data Extract** Allows the customer to run one or more data extractions against a bare metal machine.-- **BareMetalMachine Run Read-only** Allows the customer to run one or more read-only commands against a bare metal machine.
+Huge page usage in telco workloads refers to the utilization of large memory pages, typically 2 MB or 1 GB in size, instead of the standard 4-KB pages. This approach helps reduce memory overhead and improves the overall system performance. It reduces the translation look-aside buffer (TLB) miss rate and improves memory access efficiency.
+
+Telco workloads that involve large data sets or intensive memory operations, such as network packet processing, can benefit from huge page usage because it enhances memory performance and reduces memory-related bottlenecks. As a result, users see improved throughput and reduced latency.
+
+All virtual machines created on Azure Operator Nexus can make use of either 2-MB or 1-GB huge pages, depending on the type of virtual machine.
+
+### Dual-stack support
+
+Dual-stack support refers to the ability of networking equipment and protocols to simultaneously handle both IPv4 and IPv6 traffic. With the depletion of available IPv4 addresses and the growing adoption of IPv6, dual-stack support is crucial for seamless transition and coexistence between the two protocols.
+
+Telco operators use dual-stack support to ensure compatibility, interoperability, and future-proofing of their networks. It allows them to accommodate both IPv4 and IPv6 devices and services while gradually transitioning toward full IPv6 deployment.
+
+Dual-stack support helps ensure uninterrupted connectivity and smooth service delivery to customers regardless of their network addressing protocols. Azure Operator Nexus provides support for both IPv4 and IPv6 configuration across all layers of the stack.
+
+### Network interface cards
+
+Computes in Azure Operator Nexus are designed to meet the requirements for running critical applications that are telco grade. They can perform fast and efficient data transfer between servers and networks.
+
+Workloads can make use of single-root I/O virtualization (SR-IOV). SR-IOV enables the direct assignment of physical I/O resources, such as network interfaces, to virtual machines. This direct assignment bypasses the hypervisor's virtual switch layer.
+
+This direct hardware access improves network throughput, reduces latency, and enables more efficient utilization of resources. It makes SR-IOV an ideal choice for operators running virtualized and containerized network functions.
+
+## BMM status
+
+The following properties reflect the operational state of a BMM:
+
+- `Power State` indicates the state as derived from a bare-metal controller (BMC). The state can be either `On` or `Off`.
+
+- `Ready State` provides an overall assessment of BMM readiness. It looks at a combination of `Detailed Status`, `Power State`, and the provisioning state of the resource to determine whether the BMM is ready or not. When `Ready State` is `True`, the BMM is turned on, `Detailed Status` is `Provisioned`, and the node that represents the BMM has successfully joined the undercloud Kubernetes cluster. If any of those conditions aren't met, `Ready State` is set to `False`.
+
+- `Cordon State` reflects the ability to run any workloads on a machine. Valid values are `Cordoned` and `Uncordoned`. `Cordoned` stops creation of any new workloads on the machine. `Uncordoned` ensures that workloads can now run on this BMM.
+
+- `Detailed Status` reflects the current status of the machine:
+
+ - `Preparing`: The machine is being prepared for provisioning.
+ - `Provisioning`: Provisioning is in progress.
+ - `Provisioned`: The operating system is provisioned to the machine.
+ - `Available`: The machine is available to participate in the cluster. The machine was successfully provisioned but is currently turned off.
+ - `Error`: The machine couldn't be provisioned.
+
+ `Preparing` and `Provisioning` are transitory states. `Provisioned`, `Available`, and `Error` are end-state statuses.
+
+## BMM operations
+
+- **Update/Patch BareMetal Machine**: Update the BMM resource properties.
+- **List/Show BareMetal Machine**: Retrieve BMM information.
+- **Reimage BareMetal Machine**: Reprovision a BMM that matches the image version that's used across the cluster.
+- **Replace BareMetal Machine**: Replace a BMM as part of an effort to service the machine.
+- **Restart BareMetal Machine**: Restart a BMM.
+- **Power Off BareMetal Machine**: Turn off a BMM.
+- **Start BareMetal Machine**: Turn on a BMM.
+- **Cordon BareMetal Machine**: Prevent scheduling of workloads on the specified BMM's Kubernetes node. Optionally, allow for evacuation of the workloads from the node.
+- **Uncordon BareMetal Machine**: Allow scheduling of workloads on the specified BMM's Kubernetes node.
+- **BareMetalMachine Validate**: Trigger hardware validation of a BMM.
+- **BareMetalMachine Run**: Allow the customer to run a script specified directly in the input on the targeted BMM.
+- **BareMetalMachine Run Data Extract**: Allow the customer to run one or more data extractions against a BMM.
+- **BareMetalMachine Run Read-only**: Allow the customer to run one or more read-only commands against a BMM.
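For illustration, several of these operations are exposed through the `networkcloud` Azure CLI extension. The following is a hedged sketch; the resource names are hypothetical, and parameter names may vary by extension version:

```bash
# Hedged sketch; requires the networkcloud CLI extension (az extension add --name networkcloud).
# "myNexusRG" and "myBareMetalMachine" are hypothetical names.
az networkcloud baremetalmachine list --resource-group myNexusRG --output table
az networkcloud baremetalmachine show --name myBareMetalMachine --resource-group myNexusRG
az networkcloud baremetalmachine cordon --name myBareMetalMachine --resource-group myNexusRG --evacuate "True"
az networkcloud baremetalmachine uncordon --name myBareMetalMachine --resource-group myNexusRG
```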
> [!NOTE]
-> * Customers cannot explicitly create or delete BareMetal Machines directly. These machines are only created as the realization of the Cluster lifecycle. Implementation will block any creation or delete requests from any user, and only allow internal/application driven creates or deletes.
+> Customers can't create or delete BMMs directly. These machines are created only as the realization of the cluster lifecycle. Implementation blocks creation or deletion requests from any user, and it allows only internal/application-driven creation or deletion operations.
-## Form-factor specific information
+## Form-factor-specific information
-Azure Operator Nexus offers a group of on-premises cloud solutions catering to both [Near Edge](reference-near-edge-compute.md) and Far-Edge environments. For more information about the compute offerings and the respective configurations, see the following reference links for more details.
+Azure Operator Nexus offers a group of on-premises cloud solutions that cater to both [near-edge](reference-near-edge-compute.md) and far-edge environments.
operator-nexus Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-storage.md
Title: Azure Operator Nexus storage appliance
-description: Overview of storage appliance resources for Azure Operator Nexus.
+description: Get an overview of storage appliance resources for Azure Operator Nexus.
# Azure Operator Nexus storage appliance
-Operator Nexus is built on some basic constructs like compute servers, storage appliance, and network fabric devices. These storage appliances, also referred to as Nexus storage appliances, represent the persistent storage appliance in the rack. In each Nexus storage appliance, there are multiple storage devices, which are aggregated to provide a single storage pool. This storage pool is then carved out into multiple volumes, which are then presented to the compute servers as block storage devices. The compute servers can then use these block storage devices as persistent storage for their workloads. Each Nexus cluster is provisioned with a single storage appliance that is shared across all the tenant workloads.
+Azure Operator Nexus is built on basic constructs like compute servers, storage appliances, and network fabric devices. Azure Operator Nexus storage appliances represent persistent storage appliances on the rack.
-The storage appliance within an Operator Nexus instance is represented as an Azure resource and operators (end users) get access to view its attributes like any other Azure resource.
+Each storage appliance contains multiple storage devices, which are aggregated to provide a single storage pool. This storage pool is then carved out into multiple volumes, which are presented to the compute servers as block storage devices. The compute servers can use these block storage devices as persistent storage for their workloads. Each Azure Operator Nexus cluster is provisioned with a single storage appliance that's shared across all the tenant workloads.
-## Key capabilities offered in Azure Operator Nexus Storage software stack
+The storage appliance in an Azure Operator Nexus instance is represented as an Azure resource. Operators get access to view its attributes like any other Azure resource.
## Kubernetes storage classes
-The Nexus Software Kubernetes stack offers two types of storage, selectable using the Kubernetes StorageClass mechanism.
+The Azure Operator Nexus software Kubernetes stack offers two types of storage. Operators select them through the Kubernetes StorageClass mechanism.
-#### **StorageClass: ΓÇ£nexus-volumeΓÇ¥**
-#### **StorageClass: "nexus-volume"**
-The default storage mechanism, known as "nexus-volume," is the preferred choice for most users. It provides the highest levels of performance and availability. However, it's important to note that volumes can't be simultaneously shared across multiple worker nodes. These volumes can be accessed and managed using the Azure API and Portal through the Volume Resource.
+The default storage mechanism, *nexus-volume*, is the preferred choice for most users. It provides the highest levels of performance and availability. However, volumes can't be simultaneously shared across multiple worker nodes. Operators can access and manage these volumes by using the Azure API and portal, through the volume resource.
-#### **StorageClass: ΓÇ£nexus-sharedΓÇ¥**
-#### **StorageClass: "nexus-shared"**
-In situations where a "shared filesystem" is required, the "nexus-shared" storage class is available. This storage class enables multiple pods to concurrently access and share the same volume, providing a shared storage solution. While the performance and availability of "nexus-shared" are sufficient for most applications, it's recommended that workloads with heavy IO (input/output) requirements utilize the "nexus-volume" option mentioned earlier for optimal performance.
+In situations where a shared file system is required, the *nexus-shared* storage class is available. This storage class provides a shared storage solution by enabling multiple pods to concurrently access and share the same volume.
-## Storage appliance status
+Although the performance and availability of *nexus-shared* are sufficient for most applications, we recommend that workloads with heavy I/O requirements use the *nexus-volume* option for optimal performance.
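To illustrate, a workload selects one of these classes through the `storageClassName` field of a persistent volume claim. The following is a minimal sketch; the claim name and size are hypothetical:

```bash
# Minimal sketch; the claim name and size are hypothetical.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-shared-claim
spec:
  accessModes:
    - ReadWriteMany               # nexus-shared volumes can be shared across pods
  storageClassName: nexus-shared  # use nexus-volume with ReadWriteOnce for the default class
  resources:
    requests:
      storage: 5Gi
EOF
```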
-There are multiple properties, which reflect the operational state of storage appliance. Some of these include:
+## Storage appliance status
-- Status-- Provisioning state-- Capacity total / used-- Remote Vendor Management
+The following properties reflect the operational state of a storage appliance:
-_`Status`_ field indicates the state as derived from the storage appliance. The state can be Available, Error or Provisioning.
+- `Status` indicates the state as derived from the storage appliance. The state can be `Available`, `Error`, or `Provisioning`.
-The _`Provisioning State`_ field provides the current provisioning state of the storage appliance. The provisioning state can be Succeeded, Failed, or InProgress.
+- `Provisioning State` provides the current provisioning state of the storage appliance. The provisioning state can be `Succeeded`, `Failed`, or `InProgress`.
-The _`Capacity`_ field provides the total and used capacity of the storage appliance.
+- `Capacity` provides the total and used capacity of the storage appliance.
-The _`Remote Vendor Management`_ field indicates whether the remote vendor management is enabled or disabled for the storage appliance.
+- `Remote Vendor Management` indicates whether remote vendor management is enabled or disabled for the storage appliance.
## Storage appliance operations-- **List Storage Appliances** List storage appliances in the provided resource group or subscription.-- **Show Storage Appliance** Get properties of the provided storage appliance.-- **Update Storage Appliance** Update properties or provided tags of the provided storage appliance.-- **Enable/Disable Remote Vendor Management for Storage Appliance** Enable/Disable remote vendor management for the provided storage appliance.+
+- **List Storage Appliances**: List storage appliances in the provided resource group or subscription.
+- **Show Storage Appliance**: Get properties of the provided storage appliance.
+- **Update Storage Appliance**: Update properties or tags of the provided storage appliance.
+- **Enable/Disable Remote Vendor Management for Storage Appliance**: Enable or disable remote vendor management for the provided storage appliance.
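For illustration, these operations map to the `networkcloud` Azure CLI extension. The following is a hedged sketch; resource names are hypothetical, and command or parameter names may vary by extension version:

```bash
# Hedged sketch; requires the networkcloud CLI extension, names are hypothetical.
az networkcloud storageappliance list --resource-group myNexusRG --output table
az networkcloud storageappliance show --name myStorageAppliance --resource-group myNexusRG
```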
> [!NOTE]
-> Customers cannot explicitly create or delete storage appliances directly. These resources are only created as the realization of the Cluster lifecycle. Implementation will block any creation or delete requests from any user, and only allow internal/application driven creates or deletes.
+> Customers can't create or delete storage appliances directly. These resources are created only as the realization of the cluster lifecycle. Implementation blocks creation or deletion requests from any user, and it allows only internal/application-driven creation or deletion operations.
operator-nexus Howto Monitor Naks Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-monitor-naks-cluster.md
Container Insights provides end-users functionality to fine-tune the collection
## Extra resources -- Review [workbooks documentation](../azure-monitor/visualize/workbooks-overview.md) and then you may use Operator Nexus telemetry [sample Operator Nexus workbooks](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Operator%20Distributed%20Services).-- Review [Azure Monitor Alerts](../azure-monitor/alerts/alerts-overview.md), how to create [Azure Monitor Alert rules](../azure-monitor/alerts/alerts-create-new-alert-rule.md?tabs=metric), and use [sample Operator Nexus Alert templates](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Operator%20Distributed%20Services).
+- Review [workbooks documentation](../azure-monitor/visualize/workbooks-overview.md), and then use the sample Operator Nexus workbooks with Operator Nexus telemetry.
+- Review [Azure Monitor Alerts](../azure-monitor/alerts/alerts-overview.md), learn how to create [Azure Monitor Alert rules](../azure-monitor/alerts/alerts-create-new-alert-rule.md?tabs=metric), and use the sample Operator Nexus Alert templates.
operator-nexus Howto Run Instance Readiness Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-run-instance-readiness-testing.md
Instance Readiness Testing (IRT) is a framework built to orchestrate real-world
- A Linux environment (Ubuntu suggested) capable of calling Azure APIs - Knowledge of networks to use for the test * Networks to use for the test are specified in a "networks-blueprint.yml" file, see [Input Configuration](#input-configuration).-- curl or wget to download IRT package-
-## Before execution
-
-1. From your Linux environment, download nexus-irt.tar.gz from aka.ms/nexus-irt `curl -Lo nexus-irt.tar.gz aka.ms/nexus-irt`.
-1. Extract the tarball to the local file system: `mkdir -p irt && tar xf nexus-irt.tar.gz --directory ./irt`.
-1. Switch to the new directory `cd irt`.
-1. The `setup.sh` script is provided to aid in the initial set up of an environment.
- * `setup.sh` assumes a nonroot user and attempts to use `sudo`, which installs:
- 1. `jq` version 1.6
- 1. `yq` version 4.33
- 1. `azcopy` version 10
- 1. `az` Azure CLI minimum version not known, stay up to date.
- 1. `elinks` for viewing html files on the command line
- 1. `tree` for viewing directory structures
- 1. `moreutils` utilities for viewing progress from the ACI container
-1. [Optional] Set up a storage account to archive test results over time. For help, see the [instructions](#uploading-results-to-your-own-archive).
-1. Log into Azure, if not already logged in: `az login --use-device`.
- * User should have `Contributor` role
-1. Create an Azure Managed Identity for the container to use.
- * Using the provided script: `MI_RESOURCE_GROUP="<your resource group> MI_NAME="<managed identity name>" SUBSCRIPTION="<subscription>" ./create-managed-identity.sh`
- * Can be created manually via the Azure portal, refer to the script for needed permissions
-1. Create a service principal and security group. The service principal is used as the executor of the test. The group informs the kubernetes cluster of valid users. The service principal must be a part of the security group, so it has the ability to log into the cluster.
- * You can provide your own, or use our provided script, here's an example of how it could be executed; `AAD_GROUP_NAME=external-test-aad-group-8 SERVICE_PRINCIPAL_NAME=external-test-sp-8 ./irt/create-service-principal.sh`.
- * This script prints four key/value pairs for you to include in your input file.
-1. If necessary, create the isolation domains required to execute the tests. They aren't lifecycled as part of this test scenario.
- * **Note:** If deploying isolation domains, your network blueprint must define at least one external network per isolation domain. see `networks-blueprint.example.yml` for help with configuring your network blueprint.
- * `create-l3-isolation-domains.sh` takes one parameter, a path to your networks blueprint file; here's an example of the script being invoked:
- * `create-l3-isolation-domains.sh ./networks-blueprint.yml`
-
-### Input configuration
-
-1. Build your input file. The IRT tarball provides `irt-input.example.yml` as an example. These values **will not work for all instances**, they need to be manually changed and the file also needs to be renamed to `irt-input.yml`.
-1. define the values of networks-blueprint input, an example of this file is given in networks-blueprint.example.yml.
-
-The network blueprint input schema for IRT is defined in the networks-blueprint.example.yml. Currently IRT has the following network requirements. The networks are created as part of the test, provide network details that aren't in use.
-
-1. Three (3) L3 Networks
+- curl to download IRT package
+- The User Access Admin & Contributor roles for the execution subscription
+- The ability to create security groups in your Active Directory tenant
+## Input configuration
+
+Build your input file. The IRT tarball provides `irt-input.example.yml` as an example; follow the [instructions](#download-irt) to download the tarball. These values **will not work for your instances**; they need to be manually changed, and the file should also be renamed to `irt-input.yml`. The example input file is provided as a stub to aid in configuring new input files. Overridable values and their usage are outlined in the example. The **[One Time Setup](#one-time-setup) assists in setting input values by writing key/value pairs to the config file as the scripts execute.**
+
+The network information is provided in either a `networks-blueprint.yml` file, similar to the `networks-blueprint.example.yml` that is provided, or appended to the `irt-input.yml` file. The schema for IRT is defined in `networks-blueprint.example.yml`. The networks are created as part of the test, so provide network details that aren't already in use. Currently, IRT has the following network requirements:
+
+* Three (3) L3 Networks
* Two of them with MTU 1500
- * One of them with MTU 9000 and shouldn't have fabric_asn definition
+ * One of them with MTU 9000 and shouldn't have a fabric_asn attribute
+* One (1) Trunked Network
+* All VLANs should be greater than 500
+
+## One Time Setup
+
+### Download IRT
IRT is distributed via tarball. Download it, extract it, and navigate to the `irt` directory:
+1. From your Linux environment, download nexus-irt.tar.gz from aka.ms/nexus-irt `curl -Lo nexus-irt.tar.gz aka.ms/nexus-irt`
+1. Extract the tarball to the local file system: `mkdir -p irt && tar xf nexus-irt.tar.gz --directory ./irt`
+1. Switch to the new directory `cd irt`
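The same steps, collected as a single copy-paste block:

```bash
# Download and extract the IRT package, then switch into the extracted directory.
curl -Lo nexus-irt.tar.gz aka.ms/nexus-irt
mkdir -p irt && tar xf nexus-irt.tar.gz --directory ./irt
cd irt
```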
++
+### Install dependencies
There are multiple dependencies expected to be available during execution. Review this list:
+
+* `jq` version 1.6 or greater
+* `yq` version 4.33 or greater
+* `azcopy` version 10 or greater
+* `az` Azure CLI minimum version not known, stay up to date.
+* `elinks` - for viewing html files on the command line
+* `tree` - for viewing directory structures
+* `moreutils` - for viewing progress from the ACI container
+
+The `setup.sh` script is provided to aid with installing the listed dependencies. It installs any dependencies that aren't available in PATH. It doesn't upgrade any dependencies that don't meet the minimum required versions.
+
+> [!NOTE]
+> `setup.sh` assumes a nonroot user and attempts to use `sudo`
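A typical invocation from the `irt` directory looks like the following sketch, assuming the script is run without arguments:

```bash
# Sketch: run from the irt directory; setup.sh may prompt for sudo while installing dependencies.
./setup.sh
```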
+
+### All in one setup
+
+`all-in-one-setup.sh` is provided to create all of the Azure resources required to run IRT. This process includes creating a managed identity, a service principal, a security group, isolation domains, and a storage account to archive the test results. These resources can be created by the all-in-one script, or they can be created step by step per the instructions in this document. Each script, whether run individually or via the all-in-one script, writes updates to your `irt-input.yml` file with the key/value pairs needed to utilize the resources you created. Review the `irt-input.example.yml` file for the required inputs for the scripts, regardless of the methodology you pursue. All of the scripts are idempotent and also allow you to use existing resources if desired.
+
+### Step-by-Step setup
+
+> [!NOTE]
+> Only use this section if you're NOT using `all-in-one.sh`
+
+If your workflow is incompatible with `all-in-one.sh`, each resource needed for IRT can be created manually with each supplemental script. Like `all-in-one.sh`, running these scripts writes key/value pairs to your `irt-input.yml` for you to use during your run. These five scripts make up the `all-in-one.sh`.
+
+IRT runs commands against your resources and needs permission to do so. IRT requires a Managed Identity and a Service Principal to execute. It also requires that the service principal is a member of the Azure AD Security Group that is also provided as input.
+
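Because the service principal must already be in that security group, it can be worth verifying the membership before running IRT. The following is one way to check with the Azure CLI; the group and service principal names are placeholders to replace with your own values.

```bash
# Look up the group and service principal object IDs, then check membership (prints true/false)
GROUP_ID=$(az ad group show --group "<your security group name>" --query id -o tsv)
SP_OBJECT_ID=$(az ad sp list --display-name "<your service principal name>" --query "[0].id" -o tsv)
az ad group member check --group "$GROUP_ID" --member-id "$SP_OBJECT_ID" --query value -o tsv
```
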
+#### Create managed identity
+A managed identity with the following role assignments is needed to execute tests. The supplemental script `create-managed-identity.sh` creates a managed identity with these role assignments:
+ * `Contributor` - For creating and manipulating resources
+ * `Storage Blob Data Contributor` - For reading from and writing to the storage blob container
+ * `Log Analytics Reader` - For reading metadata about the LAW
+
+Executing `create-managed-identity.sh` requires the following environment variables to be set:
+ * **MI_RESOURCE_GROUP** - The resource group the Managed Identity is created in. The resource group is created in `eastus` if the resource group provided doesn't yet exist.
+ * **MI_NAME** - The name of the Managed Identity to be created.
+ * **[Optional] SUBSCRIPTION** - Sets the subscription. If not provided, the script uses the az CLI context to look up the subscription.
+
+```bash
+# Example execution of the script
+MI_RESOURCE_GROUP="<your resource group>" MI_NAME="<your managed identity name>" SUBSCRIPTION="<your subscription ID>" ./create-managed-identity.sh
+```
+
+**RESULT:** This script prints a value for `MANAGED_IDENTITY_ID`. This key/value pair should be recorded in `irt-input.yml` for use. See [Input Configuration](#input-configuration).
+
+#### Create service principal and security group
+A service principal with the following role assignments is needed. The supplemental script `create-service-principal.sh` creates a service principal with these role assignments, or adds the role assignments to an existing service principal:
+ * `Contributor` - For creating and manipulating resources
+ * `Storage Blob Data Contributor` - For reading from and writing to the storage blob container
+ * `Azure ARC Kubernetes Admin` - For ARC enrolling the NAKS cluster
+
+Additionally, the script creates the necessary security group, and adds the service principal to the security group. If the security group exists, it adds the service principal to the existing security group.
+
+Executing `create-service-principal.sh` requires the following environment variables to be set:
+ * **SERVICE_PRINCIPAL_NAME** - The name of the service principal, created with the `az ad sp create-for-rbac` command.
+ * **AAD_GROUP_NAME** - The name of the security group.
+
+```bash
+# Example execution of the script
+SERVICE_PRINCIPAL_NAME="<your service principal name>" AAD_GROUP_NAME="<your security group name>" ./create-service-principal.sh
+```
+
+**RESULT:** This script prints values for `AAD_GROUP_ID`, `SP_ID`, `SP_PASSWORD`, and `SP_TENANT`. These key/value pairs should be recorded in `irt-input.yml` for use. See [Input Configuration](#input-configuration).
+
+#### Create isolation domains
+The testing framework doesn't create, destroy, or manipulate isolation domains, so existing isolation domains can be used. Each isolation domain requires at least one external network. The supplemental script `create-l3-isolation-domains.sh` creates the L3 isolation domains if you need them. Internal networks are created, manipulated, and destroyed through the course of testing; they're created using the data provided in the networks blueprint.
+
+Executing `create-l3-isolation-domains.sh` requires one **parameter**, a path to your networks blueprint file:
+
+```bash
+# Example of the script being invoked:
+./create-l3-isolation-domains.sh ./networks-blueprint.yml
+```
+
+#### Create archive storage
+IRT creates an HTML test report after running a test scenario. These reports can optionally be uploaded to a blob storage container. The supplementary script `create-archive-storage.sh` creates a storage container, storage account, and resource group if they don't already exist.
+
+Executing `create-archive-storage.sh` requires the following environment variables to be set:
+ * **RESOURCE_GROUP** - The resource group the storage account is created in. The resource group is created in `eastus` if the resource group provided doesn't yet exist.
+ * **STORAGE_ACCOUNT_NAME** - The name of the Azure storage account to be created.
+ * **STORAGE_CONTAINER_NAME** - The name of the blob storage container to be created.
+ * **[Optional] SUBSCRIPTION** - Sets the subscription. If not provided, the script uses the az CLI context to look up the subscription.
+
+```bash
+# Example execution of the script
+RESOURCE_GROUP="<your resource group>" STORAGE_ACCOUNT_NAME="<your storage account name>" STORAGE_CONTAINER_NAME="<your container name>" ./create-archive-storage.sh
+```
+
+**RESULT:** This script prints a value for `PUBLISH_RESULTS_TO`. This key/value pair should be recorded in `irt-input.yml` for use. See [Input Configuration](#input-configuration).
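
The results upload relies on an SAS token for this container. If you ever need to mint one yourself (for example, to replace an expired token or share the container), one hedged way to do it with the Azure CLI follows; the account name, container name, permissions, and expiry are all placeholders to adapt.

```bash
# Mint a user delegation SAS for the results container (requires a signed-in identity
# with blob data rights on the account); permissions and expiry below are placeholders.
az storage container generate-sas \
  --account-name "<your storage account name>" \
  --name "<your container name>" \
  --permissions racw \
  --expiry "$(date -u -d '+3 days' '+%Y-%m-%dT%H:%MZ')" \
  --auth-mode login --as-user \
  --output tsv
```
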
-1. One (1) Trunked Network
-1. All vlans should be greater than 500
## Execution
-1. Execute: `./irt.sh irt-input.yml`
- * Assumes irt-input.yml is in the same location as irt.sh. If in a different location provides the full file path.
+* Execute. This example assumes `irt-input.yml` is in the same location as `irt.sh`. If your file is located in a different directory, provide the full file path.
+
+```bash
+./irt.sh irt-input.yml
+```
## Results
1. A file named `summary-<cluster_name>-<timestamp>.html` is downloaded at the end of the run and contains the testing results. It can be viewed:
   1. From any browser
   1. Using elinks or lynx to view from the command line; for example:
- 1. `elinks summary-<cluster_name>-<timestamp>..html`
- 1. When an SAS Token is provided for the `PUBLISH_RESULTS_TO` parameter the results are uploaded to the blob container you specified. It can be previewed by navigating to the link presented to you at the end of the IRT run.
-
-### Uploading results to your own archive
-
-1. We offer a supplementary script, `create-archive-storage.sh` to allow you to set up a storage container to store your results. The script generates an SAS Token for a storage container that is valid for three days. The script creates a storage container, storage account, and resource group if they don't already exist.
- 1. The script expects the following environment variables to be defined:
- 1. RESOURCE_GROUP
- 1. SUBSCRIPTION
- 1. STORAGE_ACCOUNT_NAME
- 1. STORAGE_CONTAINER_NAME
-1. Copy the last output from the script, into your IRT YAML input. The output looks like this:
- * `PUBLISH_RESULTS_TO="<sas-token>"`
+ 1. `elinks summary-<cluster_name>-<timestamp>.html`
+ 1. If the `PUBLISH_RESULTS_TO` parameter was provided, the results are uploaded to the blob container you specified. They can be previewed by navigating to the link presented at the end of the IRT run.
operator-nexus List Of Metrics Collected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/list-of-metrics-collected.md
This section provides the list of metrics collected from the different components.
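
For orientation, once these metrics are flowing into Azure Monitor, any metric name from the tables below can be queried with the Azure CLI. The resource ID in this sketch is a placeholder; point it at the resource that emits the metric you're interested in.

```bash
# Query one of the listed metrics over the default (last hour) window at five-minute granularity
az monitor metrics list \
  --resource "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.NetworkCloud/clusters/<cluster>" \
  --metric "ApiserverAuditRequestsRejectedTotal" \
  --interval PT5M \
  --output table
```
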
-**Undercloud Kubernetes**
-- [kubernetes API server](#kubernetes-api-server)
-- [kubernetes Services](#kubernetes-services)
-- [coreDNS](#coredns)
-- [etcd](#etcd)
-- [calico-felix](#calico-felix)
-- [calico-typha](#calico-typha)
-- [containers](#kubernetes-containers)
+**Nexus Cluster**
+- [List of metrics collected in Azure Operator Nexus](#list-of-metrics-collected-in-azure-operator-nexus)
+ - [Nexus Cluster](#nexus-cluster)
+ - [***Kubernetes API server***](#kubernetes-api-server)
+ - [***calico-felix***](#calico-felix)
+ - [***calico-typha***](#calico-typha)
+ - [***Kubernetes Containers***](#kubernetes-containers)
+ - [***Kubernetes Controllers***](#kubernetes-controllers)
+ - [***coreDNS***](#coredns)
+ - [***Kubernetes Daemonset***](#kubernetes-daemonset)
+ - [***Kubernetes Deployment***](#kubernetes-deployment)
+    - [***etcd***](#etcd)
+ - [***Kubernetes Job***](#kubernetes-job)
+ - [***kubelet***](#kubelet)
+ - [***Kubernetes Node***](#kubernetes-node)
+ - [***Kubernetes Pod***](#kubernetes-pod)
+    - [***Kubernetes StatefulSet***](#kubernetes-statefulset)
+ - [***Virtual Machine Orchestrator***](#virtual-machine-orchestrator)
+ - [Baremetal servers](#baremetal-servers)
+ - [***node metrics***](#node-metrics)
+ - [Storage Appliances](#storage-appliances)
+ - [***pure storage***](#pure-storage)
+ - [Network Fabric Metrics](#network-fabric-metrics)
+ - [Network Devices Metrics](#network-devices-metrics)
-**Baremetal servers**
-- [node metrics](#node-metrics)
+## Nexus Cluster
+### ***Kubernetes API server***
-**Virtual Machine orchestrator**
-- [kubevirt](#kubevirt)
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+|ApiserverAuditRequestsRejectedTotal|API Server|API Server Audit Requests Rejected Total|Count|Counter of API server requests rejected due to an error in the audit logging backend|Component,Pod Name|
+|ApiserverClientCertificateExpirationSecondsSum|API Server|API Server Client Certificate Expiration Seconds Sum (Preview)|Seconds|Sum of API server client certificate expiration (seconds)|Component,Pod Name|
+|ApiserverStorageDataKeyGenerationFailuresTotal|API Server|API Server Storage Data Key Generation Failures Total|Count|Total number of operations that failed Data Encryption Key (DEK) generation|Component,Pod Name|
+|ApiserverTlsHandshakeErrorsTotal|API Server|API Server TLS Handshake Errors Total (Preview)|Count|Number of requests dropped with 'TLS handshake' error|Component,Pod Name|
-**Storage Appliance**
-- [pure storage](#pure-storage)
+### ***calico-felix***
-**Network Fabric**
-- [Network Devices Metrics](#network-devices-metrics)
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+|FelixActiveLocalEndpoints|Calico|Felix Active Local Endpoints|Count|Number of active endpoints on this host|Host|
+|FelixClusterNumHostEndpoints|Calico|Felix Cluster Num Host Endpoints|Count|Total number of host endpoints cluster-wide|Host|
+|FelixClusterNumHosts|Calico|Felix Cluster Number of Hosts|Count|Total number of Calico hosts in the cluster|Host|
+|FelixClusterNumWorkloadEndpoints|Calico|Felix Cluster Number of Workload Endpoints|Count|Total number of workload endpoints cluster-wide|Host|
+|FelixIntDataplaneFailures|Calico|Felix Interface Dataplane Failures|Count|Number of times dataplane updates failed and will be retried|Host|
+|FelixIpsetErrors|Calico|Felix Ipset Errors|Count|Number of 'ipset' command failures|Host|
+|FelixIpsetsCalico|Calico|Felix Ipsets Calico|Count|Number of active Calico IP sets|Host|
+|FelixIptablesRestoreErrors|Calico|Felix IP Tables Restore Errors|Count|Number of 'iptables-restore' errors|Host|
+|FelixIptablesSaveErrors|Calico|Felix IP Tables Save Errors|Count|Number of 'iptables-save' errors|Host|
+|FelixResyncState|Calico|Felix Resync State|Unspecified|Current datastore state|Host|
+|FelixResyncsStarted|Calico|Felix Resyncs Started|Count|Number of times Felix has started resyncing with the datastore|Host|
-## Undercloud Kubernetes
-### ***Kubernetes API server***
+### ***calico-typha***
+
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+|TyphaClientLatencySecsCount|Calico|Typha Client Latency Secs|Count|Per-client latency; that is, how far behind the current state each client is.|Pod Name|
+|TyphaConnectionsAccepted|Calico|Typha Connections Accepted|Count|Total number of connections accepted over time|Pod Name|
+|TyphaConnectionsDropped|Calico|Typha Connections Dropped|Count|Total number of connections dropped due to rebalancing|Pod Name|
+|TyphaPingLatencyCount|Calico|Typha Ping Latency|Count|Round-trip ping/pong latency to client. Typha's protocol includes a regular ping/pong keepalive to verify that the connection is still up|Pod Name|
+
+### ***Kubernetes Containers***
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
-|-|:-:|:--:|:-:|-|::|:-:|
-| apiserver_audit_requests_rejected_total | Apiserver | Count | Average | Counter of apiserver requests rejected due to an error in audit logging backend. | Cluster, Node | Yes |
-| apiserver_client_certificate_expiration_seconds_sum | Apiserver | Second | Sum | Distribution of the remaining lifetime on the certificate used to authenticate a request. | Cluster, Node | Yes |
-| apiserver_storage_data_key_generation_failures_total | Apiserver | Count | Average | Total number of failed data encryption key(DEK) generation operations. | Cluster, Node | Yes |
-| apiserver_tls_handshake_errors_total | Apiserver | Count | Average | Number of requests dropped with 'TLS handshake error from' error | Cluster, Node | Yes |
-
-### ***Kubernetes services***
-
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
-|-|:-:|:--:|:-:|-|::|:-:|
-| kube_daemonset_status_current_number_scheduled | Kube Daemonset | Count | Average | Number of Daemonsets scheduled | Cluster | Yes |
-| kube_daemonset_status_desired_number_scheduled | Kube Daemonset | Count | Average | Number of daemoset replicas desired | Cluster | Yes |
-| kube_deployment_status_replicas_ready | Kube Deployment | Count | Average | Number of deployment replicas present | Cluster | Yes |
-| kube_deployment_status_replicas_available | Kube Deployment | Count | Average | Number of deployment replicas available | Cluster | Yes |
-| kube_job_status_active | Kube job - Active | Labels | Average | Number of actively running jobs | Cluster, Job | Yes |
-| kube_job_status_failed | Kube job - Failed | Labels | Average | Number of failed jobs | Cluster, Job | Yes |
-| kube_job_status_succeeded | Kube job - Succeeded | Labels | Average | Number of successful jobs | Cluster, Job | Yes |
-| kube_node_status_allocatable | Node - Allocatable | Labels | Average | The amount of resources allocatable for pods | Cluster, Node, Resource | Yes |
-| kube_node_status_capacity | Node - Capacity | Labels | Average | The total amount of resources available for a node | Cluster, Node, Resource | Yes |
-| kube_node_status_condition | Kubenode status | Labels | Average | The condition of a cluster node | Cluster, Node, Condition, Status | Yes |
-| kube_pod_container_resource_limits | Pod container - Limits | Count | Average | The number of requested limit resource by a container. | Cluster, Node, Resource, Pod | Yes |
-| kube_pod_container_resource_requests | Pod container - Requests | Count | Average | The number of requested request resource by a container. | Cluster, Node, Resource, Pod | Yes |
-| kube_pod_container_state_started | Pod container - state | Second | Average | Start time in unix timestamp for a pod container | Cluster, Node, Container | Yes |
-| kube_pod_container_status_last_terminated_reason | Pod container - state | Labels | Average | Describes the last reason the container was in terminated state | Cluster, Node, Container, Reason | Yes |
-| kube_pod_container_status_ready | Container State | Labels | Average | Describes whether the containers readiness check succeeded | Cluster, Node, Container | Yes |
-| kube_pod_container_status_restarts_total | Container State | Count | Average | The number of container restarts per container | Cluster, Node, Container | Yes |
-| kube_pod_container_status_running | Container State | Labels | Average | Describes whether the container is currently in running state | Cluster, Node, Container | Yes |
-| kube_pod_container_status_terminated | Container State | Labels | Average | Describes whether the container is currently in terminated state | Cluster, Node, Container | Yes |
-| kube_pod_container_status_terminated_reason | Container State | Labels | Average | Describes the reason the container is currently in terminated state | Cluster, Node, Container, Reason | Yes |
-| kube_pod_container_status_waiting | Container State | Labels | Average | Describes whether the container is currently in waiting state | Cluster, Node, Container | Yes |
-| kube_pod_container_status_waiting_reason | Container State | Labels | Average | Describes the reason the container is currently in waiting state | Cluster, Node, Container, Reason | Yes |
-| kube_pod_deletion_timestamp | Pod Deletion Timestamp | Timestamp | NA | Unix deletion timestamp | Cluster, Pod | Yes |
-| kube_pod_init_container_status_ready | Init Container State | Labels | Average | Describes whether the init containers readiness check succeeded | Cluster, Node, Container | Yes |
-| kube_pod_init_container_status_restarts_total | Init Container State | Count | Average | The number of restarts for the init container | Cluster, Container | Yes |
-| kube_pod_init_container_status_running | Init Container State | Labels | Average | Describes whether the init container is currently in running state | Cluster, Node, Container | Yes |
-| kube_pod_init_container_status_terminated | Init Container State | Labels | Average | Describes whether the init container is currently in terminated state | Cluster, Node, Container | Yes |
-| kube_pod_init_container_status_terminated_reason | Init Container State | Labels | Average | Describes the reason the init container is currently in terminated state | Cluster, Node, Container, Reason | Yes |
-| kube_pod_init_container_status_waiting | Init Container State | Labels | Average | Describes whether the init container is currently in waiting state | Cluster, Node, Container | Yes |
-| kube_pod_init_container_status_waiting_reason | Init Container State | Labels | Average | Describes the reason the init container is currently in waiting state | Cluster, Node, Container, Reason | Yes |
-| kube_pod_status_phase | Pod Status | Labels | Average | The pods current phase | Cluster, Node, Container, Phase | Yes |
-| kube_pod_status_ready | Pod Status Ready | Count | Average | Describe whether the pod is ready to serve requests. | Cluster, Pod | Yes |
-| kube_pod_status_reason | Pod Status Reason | Labels | Average | The pod status reasons | Cluster, Node, Container, Reason | Yes |
-| kube_statefulset_replicas | Statefulset # of replicas | Count | Average | The number of desired pods for a statefulset | Cluster, Stateful Set | Yes |
-| kube_statefulset_status_replicas | Statefulset replicas status | Count | Average | The number of replicas per statefulsets | Cluster, Stateful Set | Yes |
-| controller_runtime_reconcile_errors_total | Kube Controller | Count | Average | Total number of reconciliation errors per controller | Cluster, Node, Controller | Yes |
-| controller_runtime_reconcile_total | Kube Controller | Count | Average | Total number of reconciliation per controller | Cluster, Node, Controller | Yes |
-| kubelet_running_containers | Containers - # of running | Labels | Average | Number of containers currently running | Cluster, node, Container State | Yes |
-| kubelet_running_pods | Pods - # of running | Count | Average | Number of pods that have a running pod sandbox | Cluster, Node | Yes |
-| kubelet_runtime_operations_errors_total | Kubelet Runtime Op Errors | Count | Average | Cumulative number of runtime operation errors by operation type. | Cluster, Node | Yes |
-| kubelet_volume_stats_available_bytes | Pods - Storage - Available | Byte | Average | Number of available bytes in the volume | Cluster, Node, Persistent Volume Claim | Yes |
-| kubelet_volume_stats_capacity_bytes | Pods - Storage - Capacity | Byte | Average | Capacity in bytes of the volume | Cluster, Node, Persistent Volume Claim | Yes |
-| kubelet_volume_stats_used_bytes | Pods - Storage - Used | Byte | Average | Number of used bytes in the volume | Cluster, Node, Persistent Volume Claim | Yes |
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+|ContainerFsIoTimeSecondsTotal|Container|Container FS I/O Time Seconds Total (Preview)|Seconds|Time taken for container Input/Output (I/O) operations|Device,Host|
+|ContainerMemoryFailcnt|Container|Container Memory Fail Count|Count|Number of times a container's memory usage limit is hit|Container,Host,Namespace,Pod|
+|ContainerMemoryUsageBytes|Container|Container Memory Usage Bytes|Bytes|Current memory usage, including all memory regardless of when it was accessed|Container,Host,Namespace,Pod|
+|ContainerNetworkReceiveErrorsTotal|Container|Container Network Receive Errors Total (Preview)|Count|Number of errors encountered while receiving bytes over the network|Interface,Namespace,Pod|
+|ContainerNetworkTransmitErrorsTotal|Container|Container Network Transmit Errors Total (Preview)|Count|Count of errors that happened while transmitting|Interface,Namespace,Pod|
+|ContainerScrapeError|Container|Container Scrape Error|Unspecified|Indicates whether there was an error while getting container metrics|Host|
+|ContainerTasksState|Container|Container Tasks State|Count|Number of tasks or processes in a given state (sleeping, running, stopped, uninterruptible, or waiting) in a container|Container,Host,Namespace,Pod,State|
+
+### ***Kubernetes Controllers***
+
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+|ControllerRuntimeReconcileErrorsTotal|Controller|Controller Reconcile Errors Total|Count|Total number of reconciliation errors per controller|Controller,Namespace,Pod Name|
+|ControllerRuntimeReconcileTotal|Controller|Controller Reconciliations Total|Count|Total number of reconciliations per controller|Controller,Namespace,Pod Name|
### ***coreDNS***
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
-|-|:-:|:--:|:-:|-|::|:-:|
-| coredns_dns_requests_total | DNS Requests | Count | Average | total query count | Cluster, Node, Protocol | Yes |
-| coredns_dns_responses_total | DNS response/errors | Count | Average | response per zone, rcode and plugin. | Cluster, Node, Rcode | Yes |
-| coredns_health_request_failures_total | DNS Health Request Failures | Count | Average | The number of times the internal health check loop failed to query | Cluster, Node | Yes |
-| coredns_panics_total | DNS panic | Count | Average | total number of panics | Cluster, Node | Yes |
-
-### ***etcd***
-
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
-|-|:-:|:--:|:-:|-|::|:-:|
-| etcd_disk_backend_commit_duration_seconds_sum | Etcd Disk | Second | Average | The latency distributions of commit called by backend. | Cluster, Pod | Yes |
-| etcd_disk_wal_fsync_duration_seconds_sum | Etcd Disk | Second | Average | The latency distributions of fsync called by wal | Cluster, Pod | Yes |
-| etcd_server_is_leader | Etcd Server | Labels | Average | Whether node is leader | Cluster, Pod | Yes |
-| etcd_server_is_learner | Etcd Server | Labels | Average | Whether node is learner | Cluster, Pod | Yes |
-| etcd_server_leader_changes_seen_total | Etcd Server | Count | Average | The number of leader changes seen. | Cluster, Pod, Tier | Yes |
-| etcd_server_proposals_committed_total | Etcd Server | Count | Average | The total number of consensus proposals committed. | Cluster, Pod, Tier | Yes |
-| etcd_server_proposals_applied_total | Etcd Server | Count | Average | The total number of consensus proposals applied. | Cluster, Pod, Tier | Yes |
-| etcd_server_proposals_failed_total | Etcd Server | Count | Average | The total number of failed proposals seen. | Cluster, Pod, Tier | Yes |
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+|CorednsDnsRequestsTotal|CoreDNS|CoreDNS Requests Total|Count|Total number of DNS requests|Family,Pod Name,Proto,Server,Type|
+|CorednsDnsResponsesTotal|CoreDNS|CoreDNS Responses Total|Count|Total number of DNS responses|Pod Name,Server,Rcode|
+|CorednsForwardHealthcheckBrokenTotal|CoreDNS|CoreDNS Forward Healthcheck Broken Total (Preview)|Count|Total number of times all upstreams are unhealthy|Pod Name,Namespace|
+|CorednsForwardMaxConcurrentRejectsTotal|CoreDNS|CoreDNS Forward Max Concurrent Rejects Total (Preview)|Count|Total number of rejected queries because concurrent queries were at the maximum limit|Pod Name,Namespace|
+|CorednsHealthRequestFailuresTotal|CoreDNS|CoreDNS Health Request Failures Total|Count|The number of times the self health check failed|Pod Name|
+|CorednsPanicsTotal|CoreDNS|CoreDNS Panics Total|Count|Total number of panics|Pod Name|
+|CorednsReloadFailedTotal|CoreDNS|CoreDNS Reload Failed Total|Count|Total number of failed reload attempts|Pod Name,Namespace|
-### ***calico-felix***
+### ***Kubernetes Daemonset***
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
-|-|:-:|:--:|:-:|-|::|:-:|
-| felix_ipsets_calico | Felix | Count | Average | Number of active Calico IP sets. | Cluster, Node | Yes |
-| felix_cluster_num_host_endpoints | Felix | Count | Average | Total number of host endpoints cluster-wide. | Cluster, Node | Yes |
-| felix_active_local_endpoints | Felix | Count | Average | Number of active endpoints on this host. | Cluster, Node | Yes |
-| felix_cluster_num_hosts | Felix | Count | Average | Total number of Calico hosts in the cluster. | Cluster, Node | Yes |
-| felix_cluster_num_workload_endpoints | Felix | Count | Average | Total number of workload endpoints cluster-wide. | Cluster, Node | Yes |
-| felix_int_dataplane_failures | Felix | Count | Average | Number of times dataplane updates failed and will be retried. | Cluster, Node | Yes |
-| felix_ipset_errors | Felix | Count | Average | Number of ipset command failures. | Cluster, Node | Yes |
-| felix_iptables_restore_errors | Felix | Count | Average | Number of iptables-restore errors. | Cluster, Node | Yes |
-| felix_iptables_save_errors | Felix | Count | Average | Number of iptables-save errors. | Cluster, Node | Yes |
-| felix_resyncs_started | Felix | Count | Average | Number of times Felix has started resyncing with the datastore. | Cluster, Node | Yes |
-| felix_resync_state | Felix | Count | Average | Current datastore state. | Cluster, Node | Yes |
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+|KubeDaemonsetStatusCurrentNumberScheduled|Daemonset|Daemonsets Current Number Scheduled|Count|Number of daemonsets currently scheduled|Daemonset,Namespace|
+|KubeDaemonsetStatusDesiredNumberScheduled|Daemonset|Daemonsets Desired Number Scheduled|Count|Number of daemonsets desired scheduled|Daemonset,Namespace|
-### ***calico-typha***
+### ***Kubernetes Deployment***
+
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+|KubeDeploymentStatusReplicasAvailable|Deployment|Deployment Replicas Available|Count|Number of deployment replicas available|Deployment,Namespace|
+|KubeDeploymentStatusReplicasReady|Deployment|Deployment Replicas Ready|Count|Number of deployment replicas ready|Deployment,Namespace|
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
-|-|:-:|:--:|:-:|-|::|:-:|
-| typha_connections_accepted | Typha | Count | Average | Total number of connections accepted over time. | Cluster, Node | Yes |
-| typha_connections_dropped | Typha | Count | Average | Total number of connections dropped due to rebalancing. | Cluster, Node | Yes |
-| typha_ping_latency_count | Typha | Count | Average | Round-trip ping latency to client. | Cluster, Node | Yes |
+### ***etcd***
+
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+|EtcdDiskBackendCommitDurationSecondsSum|Etcd|Etcd Disk Backend Commit Duration Seconds Sum|Seconds|The latency distribution of commits called by the backend|Component,Pod Name,Tier|
+|EtcdDiskWalFsyncDurationSecondsSum|Etcd|Etcd Disk WAL Fsync Duration Seconds Sum|Seconds|The sum of latency distributions of 'fsync' called by the write-ahead log (WAL)|Component,Pod Name,Tier|
+|EtcdServerHealthFailures|Etcd|Etcd Server Health Failures|Count|Total server health failures|Pod Name|
+|EtcdServerIsLeader|Etcd|Etcd Server Is Leader|Unspecified|Whether or not this member is a leader; 1 if is, 0 otherwise|Component,Pod Name,Tier|
+|EtcdServerIsLearner|Etcd|Etcd Server Is Learner|Unspecified|Whether or not this member is a learner; 1 if is, 0 otherwise|Component,Pod Name,Tier|
+|EtcdServerLeaderChangesSeenTotal|Etcd|Etcd Server Leader Changes Seen Total|Count|The number of leader changes seen|Component,Pod Name,Tier|
+|EtcdServerProposalsAppliedTotal|Etcd|Etcd Server Proposals Applied Total|Count|The total number of consensus proposals applied|Component,Pod Name,Tier|
+|EtcdServerProposalsCommittedTotal|Etcd|Etcd Server Proposals Committed Total|Count|The total number of consensus proposals committed|Component,Pod Name,Tier|
+|EtcdServerProposalsFailedTotal|Etcd|Etcd Server Proposals Failed Total|Count|The total number of failed proposals|Component,Pod Name,Tier|
+|EtcdServerSlowApplyTotal|Etcd|Etcd Server Slow Apply Total (Preview)|Count|The total number of slow apply requests|Pod Name,Tier|
-### ***Kubernetes containers***
+### ***Kubernetes Job***
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
-|-|:-:|:--:|:-:|-|::|:-:|
-| container_fs_io_time_seconds_total | Containers - Filesystem | Second | Average | Cumulative count of seconds spent doing I/Os | Cluster, Node, Pod+Container+Interface | Yes |
-| container_memory_failcnt | Containers - Memory | Count | Average | Number of memory usage hits limits | Cluster, Node, Pod+Container+Interface | Yes |
-| container_memory_usage_bytes | Containers - Memory | Byte | Average | Current memory usage, including all memory regardless of when it was accessed | Cluster, Node, Pod+Container+Interface | Yes |
-| container_tasks_state | Containers - Task state | Labels | Average | Number of tasks in given state | Cluster, Node, Pod+Container+Interface, State | Yes |
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+|KubeJobStatusActive|Job|Jobs Active|Count|Number of jobs active|Job,Namespace|
+|KubeJobStatusFailed|Job|Jobs Failed|Count|Number and reason of jobs failed|Job,Namespace,Reason|
+|KubeJobStatusSucceeded|Job|Jobs Succeeded|Count|Number of jobs succeeded|Job,Namespace|
+
+### ***kubelet***
+
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+|KubeletRunningContainers|Kubelet|Kubelet Running Containers|Count|Number of containers currently running|Container State,Host|
+|KubeletRunningPods|Kubelet|Kubelet Running Pods|Count|Number of pods running on the node|Host|
+|KubeletRuntimeOperationsErrorsTotal|Kubelet|Kubelet Runtime Operations Errors Total|Count|Cumulative number of runtime operation errors by operation type|Host,Operation Type|
+|KubeletStartedPodsErrorsTotal|Kubelet|Kubelet Started Pods Errors Total|Count|Cumulative number of errors when starting pods|Host|
+|KubeletVolumeStatsAvailableBytes|Kubelet|Volume Available Bytes|Bytes|Number of available bytes in the volume|Host,Namespace,Persistent Volume Claim|
+|KubeletVolumeStatsCapacityBytes|Kubelet|Volume Capacity Bytes|Bytes|Capacity (in bytes) of the volume|Host,Namespace,Persistent Volume Claim|
+|KubeletVolumeStatsUsedBytes|Kubelet|Volume Used Bytes|Bytes|Number of used bytes in the volume|Host,Namespace,Persistent Volume Claim|
+
+### ***Kubernetes Node***
+
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+|KubeNodeStatusAllocatable|Node|Node Resources Allocatable|Count|Node resources allocatable for pods|Node,resource,unit|
+|KubeNodeStatusCapacity|Node|Node Resources Capacity|Count|Total amount of node resources available|Node,resource,unit|
+|KubeNodeStatusCondition|Node|Node Status Condition|Count|The condition of a node|Condition,Node,Status|
+
+### ***Kubernetes Pod***
+
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+|KubePodContainerResourceLimits|Pod|Container Resources Limits|Count|The container's resources limits|Container,Namespace,Node,Pod,Resource,Unit|
+|KubePodContainerResourceRequests|Pod|Container Resources Requests|Count|The container's resources requested|Container,Namespace,Node,Pod,Resource,Unit|
+|KubePodContainerStateStarted|Pod|Container State Started (Preview)|Count|Unix timestamp start time of a container|Container,Namespace,Pod|
+|KubePodContainerStatusLastTerminatedReason|Pod|Container Status Last Terminated Reason|Count|The reason of a container's last terminated status|Container,Namespace,Pod,Reason|
+|KubePodContainerStatusReady|Pod|Container Status Ready|Count|Describes whether the container's readiness check succeeded|Container,Namespace,Pod|
+|KubePodContainerStatusRestartsTotal|Pod|Container Restarts|Count|The number of container restarts|Container,Namespace,Pod|
+|KubePodContainerStatusRunning|Pod|Container Status Running|Count|The number of containers with a status of 'running'|Container,Namespace,Pod|
+|KubePodContainerStatusTerminated|Pod|Container Status Terminated|Count|The number of containers with a status of 'terminated'|Container,Namespace,Pod|
+|KubePodContainerStatusTerminatedReason|Pod|Container Status Terminated Reason|Count|The number and reason of containers with a status of 'terminated'|Container,Namespace,Pod,Reason|
+|KubePodContainerStatusWaiting|Pod|Container Status Waiting|Count|The number of containers with a status of 'waiting'|Container,Namespace,Pod|
+|KubePodContainerStatusWaitingReason|Pod|Container Status Waiting Reason|Count|The number and reason of containers with a status of 'waiting'|Container,Namespace,Pod,Reason|
+|KubePodDeletionTimestamp|Pod|Pod Deletion Timestamp (Preview)|Count|The timestamp of the pod's deletion|Namespace,Pod|
+|KubePodInitContainerStatusReady|Pod|Pod Init Container Ready|Count|The number of ready pod init containers|Namespace,Container,Pod|
+|KubePodInitContainerStatusRestartsTotal|Pod|Pod Init Container Restarts|Count|The number of pod init containers restarts|Namespace,Container,Pod|
+|KubePodInitContainerStatusRunning|Pod|Pod Init Container Running|Count|The number of running pod init containers|Namespace,Container,Pod|
+|KubePodInitContainerStatusTerminated|Pod|Pod Init Container Terminated|Count|The number of terminated pod init containers|Namespace,Container,Pod|
+|KubePodInitContainerStatusTerminatedReason|Pod|Pod Init Container Terminated Reason|Count|The number of pod init containers with terminated reason|Namespace,Container,Pod,Reason|
+|KubePodInitContainerStatusWaiting|Pod|Pod Init Container Waiting|Count|The number of pod init containers waiting|Namespace,Container,Pod|
+|KubePodInitContainerStatusWaitingReason|Pod|Pod Init Container Waiting Reason|Count|The reason the pod init container is waiting|Namespace,Container,Pod,Reason|
+|KubePodStatusPhase|Pod|Pod Status Phase|Count|The pod status phase|Namespace,Pod,Phase|
+|KubePodStatusReady|Pod|Pod Ready State|Count|Signifies if the pod is in ready state|Namespace,Pod|
+|KubePodStatusReason|Pod|Pod Status Reason|Count|The pod status reason <Evicted\|NodeAffinity\|NodeLost\|Shutdown\|UnexpectedAdmissionError>|Namespace,Pod,Reason|
+
+### ***Kubernetes StatefulSet***
+
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+|KubeStatefulsetReplicas|Statefulset|Statefulset Desired Replicas Number|Count|The desired number of statefulset replicas|Namespace,Statefulset|
+|KubeStatefulsetStatusReplicas|Statefulset|Statefulset Replicas Number|Count|The number of replicas per statefulset|Namespace,Statefulset|
+
+### ***Virtual Machine Orchestrator***
+
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+|KubevirtInfo|VMOrchestrator|Kubevirt Info|Unspecified|Kubevirt version information|Kube Version|
+|KubevirtVirtControllerLeading|VMOrchestrator|Kubevirt Virt Controller Leading|Unspecified|Indication for an operating virt-controller|Pod Name|
+|KubevirtVirtControllerReady|VMOrchestrator|Kubevirt Virt Controller Ready|Unspecified|Indication for a virt-controller that is ready to take the lead|Pod Name|
+|KubevirtVirtOperatorReady|VMOrchestrator|Kubevirt Virt Operator Ready|Unspecified|Indication for a virt operator being ready|Pod Name|
+|KubevirtVmiMemoryActualBalloonBytes|VMOrchestrator|Kubevirt VMI Memory Actual BalloonBytes|Bytes|Current balloon size (in bytes)|Name,Node|
+|KubevirtVmiMemoryAvailableBytes|VMOrchestrator|Kubevirt VMI Memory Available Bytes|Bytes|Amount of usable memory as seen by the domain. This value may not be accurate if a balloon driver is in use or if the guest OS does not initialize all assigned pages|Name,Node|
+|KubevirtVmiMemorySwapInTrafficBytesTotal|VMOrchestrator|Kubevirt VMI Memory Swap In Traffic Bytes Total|Bytes|The total amount of data read from swap space of the guest (in bytes)|Name,Node|
+|KubevirtVmiMemoryDomainBytesTotal|VMOrchestrator|Kubevirt VMI Memory Domain Bytes Total (Preview)|Bytes|The amount of memory (in bytes) allocated to the domain. The memory value in domain XML file|Node|
+|KubevirtVmiMemorySwapOutTrafficBytesTotal|VMOrchestrator|Kubevirt VMI Memory Swap Out Traffic Bytes Total|Bytes|The total amount of memory written out to swap space of the guest (in bytes)|Name,Node|
+|KubevirtVmiMemoryUnusedBytes|VMOrchestrator|Kubevirt VMI Memory Unused Bytes|Bytes|The amount of memory left completely unused by the system. Memory that is available but used for reclaimable caches should NOT be reported as free|Name,Node|
+|KubevirtVmiNetworkReceivePacketsTotal|VMOrchestrator|Kubevirt VMI Network Receive Packets Total|Bytes|Total network traffic received packets|Interface,Name,Node|
+|KubevirtVmiNetworkTransmitPacketsDroppedTotal|VMOrchestrator|Kubevirt VMI Network Transmit Packets Dropped Total|Bytes|The total number of transmit packets dropped on virtual NIC (vNIC) interfaces|Interface,Name,Node|
+|KubevirtVmiNetworkTransmitPacketsTotal|VMOrchestrator|Kubevirt VMI Network Transmit Packets Total|Bytes|Total network traffic transmitted packets|Interface,Name,Node|
+|KubevirtVmiOutdatedCount|VMOrchestrator|Kubevirt VMI Outdated Count|Count|Indication for the total number of VirtualMachineInstance (VMI) workloads that are not running within the most up-to-date version of the virt-launcher environment|Name|
+|KubevirtVmiPhaseCount|VMOrchestrator|Kubevirt VMI Phase Count|Count|Sum of VirtualMachineInstances (VMIs) per phase and node|Node,Phase,Workload|
+|KubevirtVmiStorageIopsReadTotal|VMOrchestrator|Kubevirt VMI Storage IOPS Read Total|Count|Total number of Input/Output (I/O) read operations|Drive,Name,Node|
+|KubevirtVmiStorageIopsWriteTotal|VMOrchestrator|Kubevirt VMI Storage IOPS Write Total|Count|Total number of Input/Output (I/O) write operations|Drive,Name,Node|
+|KubevirtVmiStorageReadTimesMsTotal|VMOrchestrator|Kubevirt VMI Storage Read Times Total (Preview)|Milliseconds|Total time in milliseconds (ms) spent on read operations|Drive,Name,Node|
+|KubevirtVmiStorageWriteTimesMsTotal|VMOrchestrator|Kubevirt VMI Storage Write Times Total (Preview)|Milliseconds|Total time in milliseconds (ms) spent on write operations|Drive,Name,Node|
+|NcVmiCpuAffinity|Network Cloud|CPU Pinning Map (Preview)|Count|Pinning map of virtual CPUs (vCPUs) to CPUs|CPU,NUMA Node,VMI Namespace,VMI Node,VMI Name|
## Baremetal servers
### ***node metrics***
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
-|-|:-:|:--:|:-:|-|::|:-:|
-| node_boot_time_seconds | Node - Boot time | Second | Average | Unix time of last boot | Cluster, Node | Yes |
-| node_cpu_seconds_total | Node - CPU | Second | Average | CPU usage | Cluster, Node, CPU, Mode | Yes |
-| node_disk_read_time_seconds_total | Node - Disk - Read Time | Second | Average | Disk read time | Cluster, Node, Device | Yes |
-| node_disk_reads_completed_total | Node - Disk - Read Completed | Count | Average | Disk reads completed | Cluster, Node, Device | Yes |
-| node_disk_write_time_seconds_total | Node - Disk - Write Time | Second | Average | Disk write time | Cluster, Node, Device | Yes |
-| node_disk_writes_completed_total | Node - Disk - Write Completed | Count | Average | Disk writes completed | Cluster, Node, Device | Yes |
-| node_entropy_available_bits | Node - Entropy Available | Bits | Average | Available node entropy | Cluster, Node | Yes |
-| node_filesystem_avail_bytes | Node - Disk - Available (TBD) | Byte | Average | Available filesystem size | Cluster, Node, Mountpoint | Yes |
-| node_filesystem_free_bytes | Node - Disk - Free (TBD) | Byte | Average | Free filesystem size | Cluster, Node, Mountpoint | Yes |
-| node_filesystem_size_bytes | Node - Disk - Size | Byte | Average | Filesystem size | Cluster, Node, Mountpoint | Yes |
-| node_filesystem_files | Node - Disk - Files | Count | Average | Total number of permitted inodes | Cluster, Node, Mountpoint | Yes |
-| node_filesystem_files_free | Node - Disk - Files Free | Count | Average | Total number of free inodes | Cluster, Node, Mountpoint | Yes |
-| node_filesystem_device_error | Node - Disk - FS Device error | Count | Average | indicates if there was a problem getting information for the filesystem | Cluster, Node, Mountpoint | Yes |
-| node_filesystem_readonly | Node - Disk - Files Readonly | Count | Average | indicates if the filesystem is readonly | Cluster, Node, Mountpoint | Yes |
-| node_hwmon_temp_celsius | Node - temperature (TBD) | Celcius | Average | Hardware monitor for temperature | Cluster, Node, Chip, Sensor | Yes |
-| node_hwmon_temp_max_celsius | Node - temperature (TBD) | Celcius | Average | Hardware monitor for maximum temperature | Cluster, Node, Chip, Sensor | Yes |
-| node_load1 | Node - Memory | Second | Average | 1m load average. | Cluster, Node | Yes |
-| node_load15 | Node - Memory | Second | Average | 15m load average. | Cluster, Node | Yes |
-| node_load5 | Node - Memory | Second | Average | 5m load average. | Cluster, Node | Yes |
-| node_memory_HardwareCorrupted_bytes | Node - Memory | Byte | Average | Memory information field HardwareCorrupted_bytes. | Cluster, Node | Yes |
-| node_memory_MemAvailable_bytes | Node - Memory | Byte | Average | Memory information field MemAvailable_bytes. | Cluster, Node | Yes |
-| node_memory_MemFree_bytes | Node - Memory | Byte | Average | Memory information field MemFree_bytes. | Cluster, Node | Yes |
-| node_memory_MemTotal_bytes | Node - Memory | Byte | Average | Memory information field MemTotal_bytes. | Cluster, Node | Yes |
-| node_memory_numa_HugePages_Free | Node - Memory | Byte | Average | Free hugepages | Cluster, Node. NUMA | Yes |
-| node_memory_numa_HugePages_Total | Node - Memory | Byte | Average | Total hugepages | Cluster, Node. NUMA | Yes |
-| node_memory_numa_MemFree | Node - Memory | Byte | Average | Numa memory free | Cluster, Node. NUMA | Yes |
-| node_memory_numa_MemTotal | Node - Memory | Byte | Average | Total Numa memory | Cluster, Node. NUMA | Yes |
-| node_memory_numa_MemUsed | Node - Memory | Byte | Average | Numa memory used | Cluster, Node. NUMA | Yes |
-| node_memory_numa_Shmem | Node - Memory | Byte | Average | Shared memory | Cluster, Node | Yes |
-| node_os_info | Node - OS Info | Labels | Average | OS details | Cluster, Node | Yes |
-| node_network_carrier_changes_total | Node Network - Carrier changes | Count | Average | carrier_changes_total value of `/sys/class/net/<iface>`. | Cluster, node, Device | Yes |
-| node_network_receive_packets_total | NodeNetwork - receive packets | Count | Average | Network device statistic receive_packets. | Cluster, node, Device | Yes |
-| node_network_transmit_packets_total | NodeNetwork - transmit packets | Count | Average | Network device statistic transmit_packets. | Cluster, node, Device | Yes |
-| node_network_up | Node Network - Interface state | Labels | Average | Value is 1 if operstate is 'up', 0 otherwise. | Cluster, node, Device | Yes |
-| node_network_mtu_bytes | Network Interface - MTU | Byte | Average | mtu_bytes value of `/sys/class/net/<iface>`. | Cluster, node, Device | Yes |
-| node_network_receive_errs_total | Network Interface - Error totals | Count | Average | Network device statistic receive_errs | Cluster, node, Device | Yes |
-| node_network_receive_multicast_total | Network Interface - Multicast | Count | Average | Network device statistic receive_multicast. | Cluster, node, Device | Yes |
-| node_network_speed_bytes | Network Interface - Speed | Byte | Average | speed_bytes value of `/sys/class/net/<iface>`. | Cluster, node, Device | Yes |
-| node_network_transmit_errs_total | Network Interface - Error totals | Count | Average | Network device statistic transmit_errs. | Cluster, node, Device | Yes |
-| node_timex_sync_status | Node Timex | Labels | Average | Is clock synchronized to a reliable server (1 = yes, 0 = no). | Cluster, Node | Yes |
-| node_timex_maxerror_seconds | Node Timex | Second | Average | Maximum error in seconds. | Cluster, Node | Yes |
-| node_timex_offset_seconds | Node Timex | Second | Average | Time offset in between local system and reference clock. | Cluster, Node | Yes |
-| node_vmstat_oom_kill | Node VM Stat | Count | Average | /proc/vmstat information field oom_kill. | Cluster, Node | Yes |
-| node_vmstat_pswpin | Node VM Stat | Count | Average | /proc/vmstat information field pswpin. | Cluster, Node | Yes |
-| node_vmstat_pswpout | Node VM Stat | Count | Average | /proc/vmstat information field pswpout | Cluster, Node | Yes |
-| node_dmi_info | Node Bios Information | Labels | Average | Node environment information | Cluster, Node | Yes |
-| node_time_seconds | Node - Time | Second | NA | System time in seconds since epoch (1970) | Cluster, Node | Yes |
-| idrac_power_input_watts | Node - Power | Watt | Average | Power Input | Cluster, Node, PSU | Yes |
-| idrac_power_output_watts | Node - Power | Watt | Average | Power Output | Cluster, Node, PSU | Yes |
-| idrac_power_capacity_watts | Node - Power | Watt | Average | Power Capacity | Cluster, Node, PSU | Yes |
-| idrac_sensors_temperature | Node - Temperature | Celcius | Average | Idrac sensor Temperature | Cluster, Node, Name | Yes |
-| idrac_power_on | Node - Power | Labels | Average | Idrac Power On Status | Cluster, Node | Yes |
-
-## Virtual Machine orchestrator
-### ***kubevirt***
-
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
-|-|:-:|:--:|:-:|-|::|:-:|
-| kubevirt_info | Host | Labels | NA | Version information. | Cluster, Node | Yes |
-| kubevirt_virt_controller_leading | Kubevirt Controller | Labels | Average | Indication for an operating virt-controller. | Cluster, Pod | Yes |
-| kubevirt_virt_operator_ready | Kubevirt Operator | Labels | Average | Indication for a virt operator being ready | Cluster, Pod | Yes |
-| kubevirt_vmi_cpu_affinity | VM-CPU | Labels | Average | Details the cpu pinning map via boolean labels in the form of vcpu_X_cpu_Y. | Cluster, Node, VM | Yes |
-| kubevirt_vmi_memory_actual_balloon_bytes | VM-Memory | Byte | Average | Current balloon size in bytes. | Cluster, Node, VM | Yes |
-| kubevirt_vmi_memory_domain_total_bytes | VM-Memory | Byte | Average | The amount of memory in bytes allocated to the domain. The memory value in domain xml file | Cluster, Node, VM | Yes |
-| kubevirt_vmi_memory_swap_in_traffic_bytes_total | VM-Memory | Byte | Average | The total amount of data read from swap space of the guest in bytes. | Cluster, Node, VM | Yes |
-| kubevirt_vmi_memory_swap_out_traffic_bytes_total | VM-Memory | Byte | Average | The total amount of memory written out to swap space of the guest in bytes. | Cluster, Node, VM | Yes |
-| kubevirt_vmi_memory_available_bytes | VM-Memory | Byte | Average | Amount of usable memory as seen by the domain. This value may not be accurate if a balloon driver is in use or if the guest OS does not initialize all assigned pages | Cluster, Node, VM | Yes |
-| kubevirt_vmi_memory_unused_bytes | VM-Memory | Byte | Average | The amount of memory left completely unused by the system. Memory that is available but used for reclaimable caches should NOT be reported as free | Cluster, Node, VM | Yes |
-| kubevirt_vmi_network_receive_packets_total | VM-Network | Count | Average | Total network traffic received packets. | Cluster, Node, VM, Interface | Yes |
-| kubevirt_vmi_network_transmit_packets_total | VM-Network | Count | Average | Total network traffic transmitted packets. | Cluster, Node, VM, Interface | Yes |
-| kubevirt_vmi_network_transmit_packets_dropped_total | VM-Network | Count | Average | The total number of tx packets dropped on vNIC interfaces. | Cluster, Node, VM, Interface | Yes |
-| kubevirt_vmi_outdated_count | VMI | Count | Average | Indication for the total number of VirtualMachineInstance workloads that are not running within the most up-to-date version of the virt-launcher environment. | Cluster, Node, VM, Phase | Yes |
-| kubevirt_vmi_phase_count | VMI | Count | Average | Sum of VMIs per phase and node. | Cluster, Node, VM, Phase | Yes |
-| kubevirt_vmi_storage_iops_read_total | VM-Storage | Count | Average | Total number of I/O read operations. | Cluster, Node, VM, Drive | Yes |
-| kubevirt_vmi_storage_iops_write_total | VM-Storage | Count | Average | Total number of I/O write operations. | Cluster, Node, VM, Drive | Yes |
-| kubevirt_vmi_storage_read_times_ms_total | VM-Storage | Mili Second | Average | Total time (ms) spent on read operations. | Cluster, Node, VM, Drive | Yes |
-| kubevirt_vmi_storage_write_times_ms_total | VM-Storage | Mili Second | Average | Total time (ms) spent on write operations | Cluster, Node, VM, Drive | Yes |
-| kubevirt_virt_controller_ready | Kubevirt Controller | Labels | Average | Indication for a virt-controller that is ready to take the lead. | Cluster, Pod | Yes |
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+HostDiskReadCompleted|Disk|Host Disk Reads Completed|Count|Disk reads completed by node|Device,Host|
+HostDiskReadSeconds|Disk|Host Disk Read Seconds (Preview)|Seconds|Disk read time by node|Device,Host|
+HostDiskWriteCompleted|Disk|Total Number of Writes Completed|Count|Disk writes completed by node|Device,Host|
+HostDiskWriteSeconds|Disk|Host Disk Write Seconds (Preview)|Seconds|Disk write time by node|Device,Host|
+HostDmiInfo|System|Host DMI Info (Preview)|Unspecified|Host Desktop Management Interface (DMI) environment information|Bios Date,Bios Release,Bios Vendor,Bios Version,Board Asset Tag,Board Name,Board Vendor,Board Version,Chassis Asset Tag,Chassis Vendor,Chassis Version,Host,Product Family,Product Name,Product Sku,Product Uuid,Product Version,System Vendor|
+HostEntropyAvailableBits|Filesystem|Host Entropy Available Bits (Preview)|Count|Available bits in node entropy|Host|
+HostFilesystemAvailBytes|Filesystem|Host Filesystem Available Bytes|Count|Available filesystem size by node|Device,FS Type,Host,Mount Point|
+HostFilesystemDeviceError|Filesystem|Host Filesystem Device Errors|Count|Indicates if there was a problem getting information for the filesystem|Device,FS Type,Host,Mount Point|
+HostFilesystemFiles|Filesystem|Host Filesystem Files|Count|Total number of permitted inodes|Device,FS Type,Host,Mount Point|
+HostFilesystemFilesFree|Filesystem|Total Number of Free inodes|Count|Total number of free inodes|Device,FS Type,Host,Mount Point|
+HostFilesystemReadOnly|Filesystem|Host Filesystem Read Only|Unspecified|Indicates if the filesystem is readonly|Device,FS Type,Host,Mount Point|
+HostFilesystemSizeBytes|Filesystem|Host Filesystem Size In Bytes|Count|Filesystem size by node|Device,FS Type,Host,Mount Point|
+HostHwmonTempCelsius|HardwareMonitor|Host Hardware Monitor Temp|Count|Hardware monitor for temperature (celsius)|Chip,Host,Sensor|
+HostHwmonTempMax|HardwareMonitor|Host Hardware Monitor Temp Max|Count|Hardware monitor for maximum temperature (celsius)|Chip,Host,Sensor|
+HostLoad1|Memory|Average Load In 1 Minute (Preview)|Count|1 minute load average|Host|
+HostLoad15|Memory|Average Load In 15 Minutes (Preview)|Count|15 minute load average|Host|
+HostLoad5|Memory|Average load in 5 minutes (Preview)|Count|5 minute load average|Host|
+HostMemAvailBytes|Memory|Host Memory Available Bytes|Count|Available memory in bytes by node|Host|
+HostMemHWCorruptedBytes|Memory|Total Amount of Memory In Corrupted Pages|Count|Corrupted bytes in hardware by node|Host|
+HostMemTotalBytes|Memory|Host Memory Total Bytes|Bytes|Total bytes of memory by node|Host|
+HostSpecificCPUUtilization|CPU|Host Specific CPU Utilization (Preview)|Seconds|A counter metric that counts the number of seconds the CPU has been running in a particular mode|Cpu,Host,Mode|
+IdracPowerCapacityWatts|HardwareMonitor|IDRAC Power Capacity Watts|Unspecified|Power Capacity|Host,PSU|
+IdracPowerInputWatts|HardwareMonitor|IDRAC Power Input Watts|Unspecified|Power Input|Host,PSU|
+IdracPowerOn|HardwareMonitor|IDRAC Power On|Unspecified|IDRAC Power On Status|Host|
+IdracPowerOutputWatts|HardwareMonitor|IDRAC Power Output Watts|Unspecified|Power Output|Host,PSU|
+IdracSensorsTemperature|HardwareMonitor|IDRAC Sensors Temperature|Unspecified|IDRAC sensor temperature|Host,Name,Units|
+NcNodeNetworkReceiveErrsTotal|Network|Network Device Receive Errors|Count|Total network device errors received|Hostname,Interface Name|
+NcNodeNetworkTransmitErrsTotal|Network|Network Device Transmit Errors|Count|Total network device errors transmitted|Hostname,Interface Name|
+NcTotalCpusPerNuma|CPU|Total CPUs Available to Nexus per NUMA|Count|Total number of CPUs available to Nexus per NUMA|Hostname,NUMA Node|
+NcTotalWorkloadCpusAllocatedPerNuma|CPU|CPUs per NUMA Allocated for Nexus Kubernetes|Count|Total number of CPUs per NUMA allocated for Nexus Kubernetes and Tenant Workloads|Hostname,NUMA Node|
+NcTotalWorkloadCpusAvailablePerNuma|CPU|CPUs per NUMA Available for Nexus Kubernetes|Count|Total number of CPUs per NUMA available to Nexus Kubernetes and Tenant Workloads|Hostname,NUMA Node|
+NodeBondingActive|Network|Node Bonding Active (Preview)|Count|Number of active interfaces per bonding interface|Primary|
+NodeMemHugePagesFree|Memory|Node Memory Huge Pages Free (Preview)|Bytes|NUMA hugepages free by node|Host,Node|
+NodeMemHugePagesTotal|Memory|Node Memory Huge Pages Total|Bytes|NUMA huge pages total by node|Host,Node|
+NodeMemNumaFree|Memory|Node Memory NUMA (Free Memory)|Bytes|NUMA memory free|Name,Host|
+NodeMemNumaShem|Memory|Node Memory NUMA (Shared Memory)|Bytes|NUMA shared memory|Host,Node|
+NodeMemNumaUsed|Memory|Node Memory NUMA (Used Memory)|Bytes|NUMA memory used|Host,Node|
+NodeNetworkCarrierChanges|Network|Node Network Carrier Changes|Count|Node network carrier changes|Device,Host|
+NodeNetworkMtuBytes|Network|Node Network Maximum Transmission Unit Bytes|Bytes|Node network Maximum Transmission Unit (mtu_bytes) value of /sys/class/net/\<iface\>|Device,Host|
+NodeNetworkReceiveMulticastTotal|Network|Node Network Received Multicast Total|Bytes|Network device statistic receive_multicast|Device,Host|
+NodeNetworkReceivePackets|Network|Node Network Received Packets|Count|Network device statistic receive_packets|Device,Host|
+NodeNetworkSpeedBytes|Network|Node Network Speed Bytes|Bytes|speed_bytes value of /sys/class/net/\<iface\>|Device,Host|
NodeNetworkTransmitPackets|Network|Node Network Transmitted Packets|Count|Network device statistic transmit_packets|Device,Host|
+NodeNetworkUp|Network|Node Network Up|Count|Value is 1 if operstate is 'up', 0 otherwise.|Device,Host|
+NodeNvmeInfo|Disk|Node NVMe Info (Preview)|Count|Non-numeric data from /sys/class/nvme/\<device\>, value is always 1. Provides firmware, model, state and serial for a device|Device,State|
+NodeOsInfo|System|Node OS Info|Count|Node OS information|Host,Name,Version|
+NodeTimexMaxErrorSeconds|System|Node Timex Max Error Seconds|Seconds|Maximum time error between the local system and reference clock|Host|
+NodeTimexOffsetSeconds|System|Node Timex Offset Seconds|Seconds|Time offset in between the local system and reference clock|Host|
+NodeTimexSyncStatus|System|Node Timex Sync Status|Count|Is clock synchronized to a reliable server (1 = yes, 0 = no)|Host|
+NodeVmOomKill|VM Stat|Node VM Out Of Memory Kill|Count|Information in /proc/vmstat pertaining to the field oom_kill|Host|
+NodeVmstatPswpIn|VM Stat|Node VM PSWP In|Count|Information in /proc/vmstat pertaining to the field pswpin|Host|
+NodeVmstatPswpout|VM Stat|Node VM PSWP Out|Count|Information in /proc/vmstat pertaining to the field pswpout|Host|
## Storage Appliances
### ***pure storage***
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
-|-|:-:|:--:|:-:|-|::|:-:|
-| purefa_hardware_component_health | FlashArray | Labels | NA | FlashArray hardware component health status | Cluster, Appliance, Controller+Component+Index | Yes |
-| purefa_hardware_power_volts | FlashArray | Volt | Average | FlashArray hardware power supply voltage | Cluster, Power Supply, Appliance | Yes |
-| purefa_volume_performance_throughput_bytes | Volume | Byte | Average | FlashArray volume throughput | Cluster, Volume, Dimension, Appliance | Yes |
-| purefa_volume_space_datareduction_ratio | Volume | Count | Average | FlashArray volumes data reduction ratio | Cluster, Volume, Appliance | Yes |
-| purefa_hardware_temperature_celsius | FlashArray | Celcius | Average | FlashArray hardware temperature sensors | Cluster, Controller, Sensor, Appliance | Yes |
-| purefa_alerts_total | FlashArray | Count | Average | Number of alert events | Cluster, Severity | Yes |
-| purefa_array_performance_iops | FlashArray | Count | Average | FlashArray IOPS | Cluster, Dimension, Appliance | Yes |
-| purefa_array_performance_qdepth | FlashArray | Count | Average | FlashArray queue depth | Cluster, Appliance | Yes |
-| purefa_info | FlashArray | Labels | NA | FlashArray host volumes connections | Cluster, Array | Yes |
-| purefa_volume_performance_latency_usec | Volume | MicroSecond | Average | FlashArray volume IO latency | Cluster, Volume, Dimension, Appliance | Yes |
-| purefa_volume_space_bytes | Volume | Byte | Average | FlashArray allocated space | Cluster, Volume, Dimension, Appliance | Yes |
-| purefa_volume_performance_iops | Volume | Count | Average | FlashArray volume IOPS | Cluster, Volume, Dimension, Appliance | Yes |
-| purefa_volume_space_size_bytes | Volume | Byte | Average | FlashArray volumes size | Cluster, Volume, Appliance | Yes |
-| purefa_array_performance_latency_usec | FlashArray | MicroSecond | Average | FlashArray latency | Cluster, Dimension, Appliance | Yes |
-| purefa_array_space_used_bytes | FlashArray | Byte | Average | FlashArray overall used space | Cluster, Dimension, Appliance | Yes |
-| purefa_array_performance_bandwidth_bytes | FlashArray | Byte | Average | FlashArray bandwidth | Cluster, Dimension, Appliance | Yes |
-| purefa_array_performance_avg_block_bytes | FlashArray | Byte | Average | FlashArray avg block size | Cluster, Dimension, Appliance | Yes |
-| purefa_array_space_datareduction_ratio | FlashArray | Count | Average | FlashArray overall data reduction | Cluster, Appliance | Yes |
-| purefa_array_space_capacity_bytes | FlashArray | Byte | Average | FlashArray overall space capacity | Cluster, Appliance | Yes |
-| purefa_array_space_provisioned_bytes | FlashArray | Byte | Average | FlashArray overall provisioned space | Cluster, Appliance | Yes |
-| purefa_host_space_datareduction_ratio | Host | Count | Average | FlashArray host volumes data reduction ratio | Cluster, Node, Appliance | Yes |
-| purefa_host_space_size_bytes | Host | Byte | Average | FlashArray host volumes size | Cluster, Node, Appliance | Yes |
-| purefa_host_performance_latency_usec | Host | MicroSecond | Average | FlashArray host IO latency | Cluster, Node, Dimension, Appliance | Yes |
-| purefa_host_performance_bandwidth_bytes | Host | Byte | Average | FlashArray host bandwidth | Cluster, Node, Dimension, Appliance | Yes |
-| purefa_host_space_bytes | Host | Byte | Average | FlashArray host volumes allocated space | Cluster, Node, Dimension, Appliance | Yes |
-| purefa_host_performance_iops | Host | Count | Average | FlashArray host IOPS | Cluster, Node, Dimension, Appliance | Yes |
+
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+PurefaAlertsTotal|Storage Array|Nexus Storage Alerts Total|Count|Number of alert events|Severity|
+PurefaArrayPerformanceAvgBlockBytes|Storage Array|Nexus Storage Array Avg Block Bytes|Bytes|Average block size|Dimension|
+PurefaArrayPerformanceBandwidthBytes|Storage Array|Nexus Storage Array Bandwidth Bytes|Bytes|Array throughput in bytes per second|Dimension|
+PurefaArrayPerformanceIOPS|Storage Array|Nexus Storage Array IOPS|Count|Storage array IOPS|Dimension|
+PurefaArrayPerformanceLatencyUsec|Storage Array|Nexus Storage Array Latency (Microseconds)|MilliSeconds|Storage array latency in microseconds|Dimension|
+PurefaArrayPerformanceQdepth|Storage Array|Nexus Storage Array Queue Depth|Bytes|Storage array queue depth|
+PurefaArraySpaceCapacityBytes|Storage Array|Nexus Storage Array Capacity Bytes|Bytes|Storage array overall space capacity|
+PurefaArraySpaceDatareductionRatio|Storage Array|Nexus Storage Array Space Datareduction Ratio|Percent|Storage array overall data reduction|
+PurefaArraySpaceProvisionedBytes|Storage Array|Nexus Storage Array Space Provisioned Bytes|Bytes|Storage array overall provisioned space|
+PurefaArraySpaceUsedBytes|Storage Array|Nexus Storage Array Space Used Bytes|Bytes|Storage Array overall used space|Dimension|
+PurefaHardwareComponentHealth|Storage Array|Nexus Storage Hardware Component Health|Count|Storage array hardware component health status|Component,Controller,Index|
+PurefaHardwarePowerVolts|Storage Array|Nexus Storage Hardware Power Volts|Unspecified|Storage array hardware power supply voltage|Power Supply|
+PurefaHardwareTemperatureCelsius|Storage Array|Nexus Storage Hardware Temperature Celsius|Unspecified|Storage array hardware temperature sensors|Controller,Sensor|
+PurefaHostPerformanceBandwidthBytes|Host|Nexus Storage Host Bandwidth Bytes|Bytes|Storage array host bandwidth in bytes per second|Dimension,Host|
+PurefaHostPerformanceIOPS|Host|Nexus Storage Host IOPS|Count|Storage array host IOPS|Dimension,Host|
+PurefaHostPerformanceLatencyUsec|Host|Nexus Storage Host Latency (Microseconds)|MilliSeconds|Storage array host latency in microseconds|Dimension,Host|
+PurefaHostSpaceBytes|Host|Nexus Storage Host Space Bytes|Bytes|Storage array host space in bytes|Dimension,Host|
+PurefaHostSpaceDatareductionRatio|Host|Nexus Storage Host Space Datareduction Ratio|Percent|Storage array host volumes data reduction ratio|Host|
+PurefaHostSpaceSizeBytes|Host|Nexus Storage Host Space Size Bytes|Bytes|Storage array host volumes size|Host|
+PurefaInfo|Storage Array|Nexus Storage Info (Preview)|Unspecified|Storage array system information|Array Name|
+PurefaVolumePerformanceIOPS|Volume|Nexus Storage Volume Performance IOPS|Count|Storage array volume IOPS|Dimension,Volume|
+PurefaVolumePerformanceLatencyUsec|Volume|Nexus Storage Volume Performance Latency (Microseconds)|MilliSeconds|Storage array volume latency in microseconds|Dimension,Volume|
+PurefaVolumePerformanceThroughputBytes|Volume|Nexus Storage Volume Performance Throughput Bytes|Bytes|Storage array volume throughput|Dimension,Volume|
+PurefaVolumeSpaceBytes|Volume|Nexus Storage Volume Space Bytes|Bytes|Storage array volume space in bytes|Dimension,Volume|
+PurefaVolumeSpaceDatareductionRatio|Volume|Nexus Storage Volume Space Datareduction Ratio|Percent|Storage array overall data reduction|Volume|
+PurefaVolumeSpaceSizeBytes|Volume|Nexus Storage Volume Space Size Bytes|Bytes|Storage array volumes size|Volume|
## Network Fabric Metrics
### Network Devices Metrics
operator-nexus Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/overview.md
Title: Introduction to Operator Nexus
-description: High level information about the Operator Nexus product.
+ Title: Introduction to Azure Operator Nexus
+description: Get high-level information about the Azure Operator Nexus product.
Last updated 02/26/2023
# What is Azure Operator Nexus?
-Azure Operator Nexus is a carrier-grade, next-generation hybrid cloud platform for telecommunication operators.
-Operator Nexus is purpose-built for operators' network-intensive workloads and mission-critical applications.
-Operator Nexus supports both Microsoft and a wide variety of partner virtualized and containerized network functions.
-The platform automates lifecycle management of the infrastructure, including: network fabric, bare metal hosts, and storage appliances, as well as tenant workloads for Container Network Functions and Virtualized Network Functions.
-Operator Nexus meets operators' security, resiliency, observability, and performance requirements to achieve meaningful business results.
-The platform seamlessly integrates compute, network, and storage.
-Operator Nexus is self service and uses the Azure portal, CLI, SDKs, and other tools to interact with the platform.
+Azure Operator Nexus is a carrier-grade, next-generation hybrid cloud platform for telecommunication operators. Azure Operator Nexus is purpose-built for operators' network-intensive workloads and mission-critical applications.
+Azure Operator Nexus supports a wide variety of virtualized and containerized network functions from both Microsoft and partners. The platform automates the lifecycle management (LCM) of the infrastructure, including network fabric, bare-metal hosts, and storage appliances. It also automates the LCM of tenant workloads for containerized network functions (CNFs) and virtualized network functions (VNFs).
-Figure: Operator Nexus Overview
+Azure Operator Nexus meets operators' security, resiliency, observability, and performance requirements to achieve meaningful business results. The platform seamlessly integrates compute, network, and storage.
+
+The platform is self-service. Operators use the Azure portal, the Azure CLI, SDKs, and other tools to interact with it.
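+For illustration, here's a minimal sketch of that self-service interaction from the Azure CLI. It assumes the `networkcloud` CLI extension and placeholder resource names; the exact commands available depend on the extension version.
+
+```azurecli-interactive
+# Install the Operator Nexus CLI extension (assumed extension name).
+az extension add --name networkcloud
+
+# List the Operator Nexus clusters in a resource group (placeholder name).
+az networkcloud cluster list --resource-group "myResourceGroup" -o table
+```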
+ ## Key benefits
-Operator Nexus includes the following benefits for operating secure carrier-grade network functions at scale:
+Azure Operator Nexus includes the following benefits for operating secure carrier-grade network functions at scale:
-* **Reduced operational complexity and costs** – Operators have the ability to manage their Operator Nexus infrastructure and tenants from Azure. Automation can be built to streamline deployment, allowing for operators to have faster time to market and innovate to provide value add services to their customers.
-* **Integrated platform for compute, network, and storage** – Operators no longer need to provision compute, network, and storage separately as Operator Nexus provides an end-to-end (E2E) platform from the infrastructure to the tenant for applications.
-For example, the networks associated to the compute infrastructure can automatically be provisioned across the compute and network infrastructure without requiring additional teams.
-* **Expanding Network Function (NF) ecosystem** – Operator Nexus supports a wide variety of Microsoft's own NFs and partners NFs via the Operator Nexus Ready program.
-These NFs are tested for deployment and lifecycle management on Operator Nexus before they're made available in Azure Marketplace.
-* **Access to key Azure services** – Operator Nexus being connected to Azure, operators can seamlessly access most Azure services through the same connection as the on-premises network.
-Operators can monitor logs and metrics via Azure Monitor, and analyze telemetry data using Log Analytics or Azure AI/Machine Learning framework.
-* **Unified governance and compliance** – Operator Nexus extends Azure management and services to operator's premises.
-Operators can unify data governance and enforce security and compliance policies by [Azure Role based Access Control](../role-based-access-control/overview.md) and [Azure Policy](../governance/policy/overview.md).
+* **Reduced operational complexity and costs**: Operators can manage their Azure Operator Nexus infrastructure and tenants from Azure. They can build automation to streamline deployment, which helps them decrease time to market and innovate to provide value-add services to their customers.
+* **Integrated platform for compute, network, and storage**: Operators no longer need to provision compute, network, and storage separately. Azure Operator Nexus provides an end-to-end platform from the infrastructure to the tenant for applications. For example, the networks associated with the compute infrastructure can be provisioned automatically across the compute and network infrastructure without requiring more teams.
+* **Expanding network function (NF) ecosystem**: Azure Operator Nexus supports Microsoft and partner NFs via the Azure Operator Nexus Ready program. These NFs are tested for deployment and LCM on Azure Operator Nexus before they become available in Azure Marketplace.
+* **Access to key Azure services**: Because Azure Operator Nexus is connected to Azure, operators can access most Azure services through the same connection as the on-premises network. Operators can monitor logs and metrics via Azure Monitor. They can analyze telemetry data by using Log Analytics or the Azure Machine Learning framework.
+* **Unified governance and compliance**: Azure Operator Nexus extends Azure management and services to operators' premises. Operators can unify data governance and enforce security and compliance policies by using [Azure role-based access control](../role-based-access-control/overview.md) and [Azure Policy](../governance/policy/overview.md).
-## How Operator Nexus works
+## How Azure Operator Nexus works
-Operator Nexus utilizes a curated and certified hardware Bill of Materials (BOM). It is composed of commercially available off-the-shelf servers, network switches, and storage arrays. The infrastructure is deployed in operator's on-premises data center. Operators or System Integrators must make sure they [meet the prerequisites and follow the guidance](./howto-azure-operator-nexus-prerequisites.md).
+Azure Operator Nexus uses a curated and certified hardware bill of materials (BOM). It consists of commercially available off-the-shelf (COTS) servers, network switches, and storage arrays. The infrastructure is deployed in an operator's on-premises datacenter. Operators or system integrators must make sure that they [meet the prerequisites and follow the guidance](./howto-azure-operator-nexus-prerequisites.md).
-The service that manages the Operator Nexus infrastructure is hosted in Azure. Operators can choose an Azure region that supports Operator Nexus for any on-premises Operator Nexus instance. The diagram illustrates the architecture of the Operator Nexus service.
+The service that manages the Azure Operator Nexus infrastructure is hosted in Azure. Operators can choose an Azure region that supports Azure Operator Nexus for any on-premises instance of the service. The following diagram illustrates the architecture of the Azure Operator Nexus service.
-Figure: How Operator Nexus works
+Here are important points about the architecture:
-1. The management layer of Operator Nexus is built on Azure Resource Manager (ARM), that provides consistent user experience in the Azure portal and Azure APIs
-2. Azure Resource Providers provide modeling and lifecycle management of [Operator Nexus resources](./concepts-resource-types.md) such as bare metal machines, clusters, network devices, etc.
-3. Operator Nexus controllers: Cluster Manager and Network Fabric Controller, are deployed in a managed Virtual Network (VNet) connected to operator's on-premises network. The controllers enable functionalities such as infrastructure bootstrapping, configurations, service upgrades etc.
-4. Operator Nexus is integrated with many Azure services such as Azure Monitor, Azure Container Registry, and Azure Kubernetes Services.
-6. ExpressRoute is a network connectivity service that bridges Azure regions and operators' locations.
+* The management layer of Azure Operator Nexus is built on Azure Resource Manager to provide a consistent user experience in the Azure portal and Azure APIs.
+* Azure resource providers provide modeling and LCM of [Azure Operator Nexus resources](./concepts-resource-types.md) such as bare-metal machines, clusters, and network devices.
+* Azure Operator Nexus controllers include a cluster manager and a network fabric controller, which are deployed in a managed virtual network that's connected to an operator's on-premises network. These controllers enable functionalities such as infrastructure bootstrapping, configurations, and service upgrades.
+* Azure Operator Nexus is integrated with many Azure services, such as Azure Monitor, Azure Container Registry, and Azure Kubernetes Service (AKS).
+* Azure ExpressRoute is a network connectivity service that bridges Azure regions and operators' locations.
## Key features
-Here are some of the key features of Operator Nexus.
+Here are some key features of Azure Operator Nexus.
### CBL-Mariner
-Operator Nexus runs Microsoft's own Linux distribution "CBL-Mariner" on the bare metal hosts in the operator's facilities.
-The same Linux distribution supports Azure cloud infrastructure and edge services.
-It includes a small set of core packages by default.
-[CBL-Mariner](https://microsoft.github.io/CBL-Mariner/docs/) is a lightweight OS and consumes limited system resources and is engineered to be efficient.
-For example, it has a fast boot time with a small footprint with locked-down packages, resulting in the reduction of the threat landscape.
-On identifying a security vulnerability, Microsoft makes the latest security patches and fixes available with the goal of fast turn-around time. Running the infrastructure on Linux aligns with Network Function needs, telecommunication industry trends, and relevant open-source communications. Operator Nexus supports both virtualized network functions (VNFs) and containerized network functions (CNFs).
+Azure Operator Nexus runs Microsoft's own Linux distribution called [CBL-Mariner](https://microsoft.github.io/CBL-Mariner/docs/) on the bare-metal hosts in the operator's facilities. The same Linux distribution supports Azure cloud infrastructure and edge services. It includes a small set of core packages by default.
+
+CBL-Mariner is a lightweight operating system. It consumes limited system resources and is engineered to be efficient. For example, it has a fast startup time with a small footprint and locked-down packages to reduce the threat landscape.
+
+When Microsoft identifies a security vulnerability, it makes the latest security patches and fixes available with the goal of fast turnaround time. Running the infrastructure on Linux aligns with NF needs, telecommunication industry trends, and relevant open-source communications.
### Bare metal and cluster management
-Operator Nexus includes capabilities to manage the bare metal hosts in operators' premises.
-Operators can provision the bare metal hosts using Operator Nexus and can interact to restart, shutdown, or re-image, for example.
-One important component of the service is Cluster Manager.
-[Cluster Manager](./howto-cluster-manager.md) provides the lifecycle management of Kubernetes clusters that are made of the bare metal hosts.
+Azure Operator Nexus includes capabilities to manage the bare-metal hosts in operators' premises. Operators can provision the bare-metal hosts by using Azure Operator Nexus. They can interact to restart, shut down, or reimage, for example.
+
+One important component of the service is the [cluster manager](./howto-cluster-manager.md). It provides the LCM of Kubernetes clusters that are made of the bare-metal hosts.
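+As a hedged sketch of what those host operations can look like from the Azure CLI (assuming the `networkcloud` extension and placeholder resource names; verify the exact action names against your extension version):
+
+```azurecli-interactive
+# List the bare-metal machines that the cluster manager manages.
+az networkcloud baremetalmachine list --resource-group "myResourceGroup" -o table
+
+# Restart one bare-metal machine (placeholder name).
+az networkcloud baremetalmachine restart \
+  --name "myBareMetalMachine" \
+  --resource-group "myResourceGroup"
+```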
-### Network Fabric Automation
+### Network fabric automation
-Operator Nexus includes [Network Fabric Automation (NFA)](./howto-configure-network-fabric-controller.md) which enables operators to build, operate and manage carrier grade network fabrics. The reliable and distributed cloud services model supports the operators' telco network functions. Operators have the ability to interact with Operator Nexus to provision the network fabric via Zero-Touch Provisioning (ZTP), and perform complex network implementations via a workflow driven, API model.
+Azure Operator Nexus includes [network fabric automation](./howto-configure-network-fabric-controller.md), which enables operators to build, operate, and manage carrier-grade network fabrics.
-### Network Packet Broker
+The reliable and distributed cloud services model supports the operators' telco network functions. Operators can interact with Azure Operator Nexus to provision the network fabric via zero-touch provisioning (ZTP). They can also perform complex network implementations via a workflow-driven API model.
-Network Packet Broker (NPB) is an integral part of the network fabric in Operator Nexus. NPB enables multiple scenarios from network performance monitoring to security intrusion detection. Operators can monitor every single packet in Operator Nexus and replicate it. They can apply packet filters dynamically and send filtered packets to multiple destinations for further processing.
+### Network packet broker
+
+The network packet broker is an integral part of the network fabric in Azure Operator Nexus. The network packet broker enables scenarios like network performance monitoring and security intrusion detection.
+
+Operators can monitor every packet in Azure Operator Nexus and replicate it. They can apply packet filters dynamically and send filtered packets to multiple destinations for more processing.
### Nexus Kubernetes
-Nexus Kubernetes is an Operator Nexus version of Azure Kubernetes Service (AKS) for on-premises use. It's optimized to automate creation of containers to run tenant network function workloads. A Nexus Kubernetes cluster is deployed on-premises and the traditional operational management activities (CRUD) are managed via Azure. See [Nexus Kubernetes](./concepts-nexus-kubernetes-cluster.md) to learn more.
+[Nexus Kubernetes](./concepts-nexus-kubernetes-cluster.md) is an Azure Operator Nexus version of AKS for on-premises use. It's optimized to automate the creation of containers to run tenant network function workloads.
-### Network functions virtualization infrastructure capabilities
+A Nexus Kubernetes cluster is deployed on-premises. Operators handle the traditional operational management activities of create, read, update, and delete (CRUD) via Azure.
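+For example, a minimal sketch of those CRUD operations through the Azure CLI might look like the following. The `kubernetescluster` command group and the resource names are assumptions; adjust them to your environment.
+
+```azurecli-interactive
+# List Nexus Kubernetes clusters in a resource group.
+az networkcloud kubernetescluster list --resource-group "myResourceGroup" -o table
+
+# Read the properties of a specific cluster (placeholder name).
+az networkcloud kubernetescluster show \
+  --name "myNexusK8sCluster" \
+  --resource-group "myResourceGroup"
+```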
-As a platform, Operator Nexus is designed for telco network functions and optimized for carrier-grade performance and resiliency. It has many built-in Network Functions Virtualization Infrastructure (NFVI) capabilities:
+### NFVI capabilities
-* Compute: NUMA aligned VMs with dedicated cores (both SMT siblings), backed by huge pages ensures consistent performance. There's no impact from other workloads running on the same hypervisor host.
-* Networking: SR-IOV & DPDK for low latency and high throughput. Highly available VFs to VMs with redundant physical paths provide links to all workloads. APIs are used to control access and trunk port consumption in both VNFs and CNFs.
-* Storage: Filesystem storage for CNFs backed by high performance storage arrays
+As a platform, Azure Operator Nexus is designed for telco network functions and optimized for carrier-grade performance and resiliency. It has built-in network functions virtualization infrastructure (NFVI) capabilities:
+
+* **Compute**: NUMA-aligned virtual machines (VMs) with dedicated cores backed by huge pages for consistent performance. The dedicated cores are simultaneous multithreading (SMT) siblings. There's no impact from other workloads that run on the same hypervisor host.
+* **Networking**: Single-root I/O virtualization (SR-IOV) and Data Plane Development Kit (DPDK) for low latency and high throughput. Highly available VFs to VMs with redundant physical paths provide links to all workloads. Operators use APIs to control access and trunk port consumption in both VNFs and CNFs.
+* **Storage**: File system storage for CNFs backed by high-performance storage arrays.
### Azure Operator Service Manager
-Azure Operator Service Manager is a service that allows Network Equipment Providers (NEP) to publish their NFs in Azure Marketplace. Operators can deploy them using familiar Azure APIs. Operator Service Manager provides a framework for NEPs and Microsoft to test and validate the basic functionality of the NFs. The validation includes lifecycle management of an NF on Operator Nexus.
+Azure Operator Service Manager is a service that allows network equipment providers (NEPs) to publish their NFs in Azure Marketplace. Operators can deploy the NFs by using familiar Azure APIs.
+
+Operator Service Manager provides a framework for NEPs and Microsoft to test and validate the basic functionality of the NFs. The validation includes lifecycle management of an NF on Azure Operator Nexus.
### Observability
-Operator Nexus automatically streams the metrics and logs from the operator's premises to Azure Monitor and Log Analytics workspace of:
+Azure Operator Nexus automatically streams the metrics and logs from the operator's premises to Azure Monitor and the Log Analytics workspace of:
-* Infrastructure (compute, network and storage)
-* Tenant Infrastructure (ex. VNF VMs).
+* Infrastructure (compute, network, and storage).
+* Tenant infrastructure (for example, VNF VMs).
-Log Analytics has a rich analytical tool-set that operators can use for troubleshooting or correlating for operational insights. And, Azure Monitor lets operators specify alerts.
+Log Analytics has rich analytical tools that operators can use for troubleshooting or correlating for operational insights. Operators can also use Azure Monitor to specify alerts.
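+As an illustrative sketch, operators can query the streamed logs from the Azure CLI. The workspace GUID and the `Syslog` table here are placeholders; substitute the workspace and tables that your deployment actually streams to.
+
+```azurecli-interactive
+# Run a Kusto query against the Log Analytics workspace that receives the logs.
+az monitor log-analytics query \
+  --workspace "00000000-0000-0000-0000-000000000000" \
+  --analytics-query "Syslog | where TimeGenerated > ago(1h) | summarize count() by Computer" \
+  -o table
+```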
## Next steps
-* Learn more about Operator Nexus [resource models](./concepts-resource-types.md)
-* Review [Operator Nexus deployment prerequisites and steps](./howto-azure-operator-nexus-prerequisites.md)
-* Learn [how to deploy a Nexus Kubernetes cluster](./quickstarts-kubernetes-cluster-deployment-bicep.md)
+* Learn more about Azure Operator Nexus [resource models](./concepts-resource-types.md).
+* Review [Azure Operator Nexus deployment prerequisites and steps](./howto-azure-operator-nexus-prerequisites.md).
+* Learn [how to deploy a Nexus Kubernetes cluster](./quickstarts-kubernetes-cluster-deployment-bicep.md).
operator-nexus Reference Near Edge Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-near-edge-compute.md
Title: "Near Edge Compute Overview"
-description: Compute SKUs and resources available in Azure Operator Nexus Near Edge.
+ Title: Near-edge compute overview
+description: Learn about compute SKUs and resources available in near-edge Azure Operator Nexus instances.
Last updated 05/22/2023
-# Near-edge Compute
+# Near-edge compute
-Azure Operator Nexus offers a group of on-premises cloud solutions. One of the on-premises offering allows Telco operators to run the Network functions in a Near-edge environment. In near-edge environment (also known as 'instance'), the compute servers, also referred to as bare metal machines (BMMs), represents the physical machines in the rack, runs the CBL-Mariner operating system, and provides support for running high-performance workloads.
+Azure Operator Nexus offers a group of on-premises cloud solutions. One of the on-premises offerings allows telco operators to run network functions in a near-edge environment.
-## SKUs available
+In a near-edge environment (also known as an instance), the compute servers (also known as bare-metal machines) represent the physical machines on the rack. They run the CBL-Mariner operating system and provide support for running high-performance workloads.
-The Nexus offering is today built with the following compute nodes for near-edge instances (the nodes that run the actual customer workloads).
+## Available SKUs
+
+The Azure Operator Nexus offering is built with the following compute nodes for near-edge instances (the nodes that run the actual customer workloads).
| SKU | Description |
| -- | -- |
-| Dell R750 | Compute node for Near Edge |
+| Dell R750 | Compute node for near edge |
## Compute connectivity
-This diagram shows the connectivity model followed by computes in the near-edge instances:
-
+This diagram shows the connectivity model that compute servers follow in near-edge instances.
-Figure: Operator Nexus Compute connectivity
## Compute configurations
-Operator Nexus supports a range of geometries and configurations. This table specifies the resources available per Compute.
+Azure Operator Nexus supports a range of geometries and configurations. This table specifies the resources available per compute.
| Property | Specification/Description |
| -- | -|
-| Number of vCPUs for Tenant usage | 96 vCPUs hyper-threading enabled per compute server |
-| Number of vCPU available for workloads | 2 - 48 vCPUs with even number of vCPUs only. No cross-NUMA VMs |
-| CPU pinning | Default |
-| RAM for running tenant workload | 448 GB (224 GB per NUMA) |
-| Huge pages for Tenant workloads | All VMs are backed by 1-GB huge pages |
-| Disk (Ephemeral) per Compute | Up to 3.5 TB per compute host |
-| Data plane traffic path for workloads | SR-IOV |
-| Number of SR-IOV VFs | Max 32 vNICs (30 VFs available for tenant workloads per NUMA) |
-| SR-IOV NIC support | Enabled on all 100G NIC ports VMs with virtual functions (VF) assigned out of Mellanox supported VF link aggregation (VF LAG). The allocated VFs are from the same physical NIC and within the same NUMA boundary. NIC ports providing VF LAG are connected to two different TOR switches for redundancy. Support for Trunked VFs RSS with Hardware Queuing. Supporting multi-queue support on VMs. |
-| IPv4/IPv6 Support | Dual stack IPv4/IPv6, IPv4, and IPv6 only virtual machines |
+| Number of virtual CPUs (vCPUs) for tenant usage | 96 vCPUs, with hyperthreading enabled per compute server. |
+| Number of vCPUs available for workloads | 2 to 48 vCPUs, with an even number of vCPUs only. No cross-NUMA (nonuniform memory access) virtual machines (VMs). |
+| CPU pinning | Default. |
+| RAM for running tenant workloads | 448 GB (224 GB per NUMA). |
+| Huge pages for tenant workloads | All VMs are backed by 1-GB huge pages. |
+| Disk (ephemeral) per compute | Up to 3.5 TB per compute host. |
+| Data plane traffic path for workloads | Single-root I/O virtualization (SR-IOV). |
+| Number of SR-IOV virtual functions (VFs) | Maximum of 32 virtual NICs (vNICs), with 30 VFs available for tenant workloads per NUMA. |
+| SR-IOV NIC support | Enabled on all 100G NIC ports on VMs, with VFs assigned out of Mellanox-supported VF link aggregation (VF LAG). The allocated VFs are from the same physical NIC and within the same NUMA boundary. NIC ports that provide VF LAG are connected to two different top-of-rack (ToR) switches for redundancy. <br><br>Support includes trunked VF receive-side scaling (RSS) with hardware queuing. Support also includes multiple queues on VMs. |
+| IPv4/IPv6 support | Dual-stack IPv4/IPv6, IPv4, and IPv6-only virtual machines. |
operator-nexus Reference Near Edge Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-near-edge-storage.md
Title: Azure Operator Nexus Storage Appliance Overview
-description: Storage Appliance SKUs and resources available in Azure Operator Nexus Near-edge.
+ Title: Azure Operator Nexus storage appliance overview
+description: Learn about storage appliance SKUs and resources available in near-edge Azure Operator Nexus instances.
Last updated 06/29/2023
-# Near-edge Nexus storage appliance
+# Near-edge Azure Operator Nexus storage appliance
-The architecture of Azure Operator Nexus revolves around core components such as compute servers, storage appliances, and network fabric devices. A single Storage Appliance referred to as the "Nexus Storage Appliance," is attached to each near-edge Nexus instance. These appliances play a vital role as the dedicated and persistent storage solution for the tenant workloads hosted within the Nexus instance.
+The architecture of Azure Operator Nexus revolves around core components such as compute servers, storage appliances, and network fabric devices. A single storage appliance is attached to each near-edge Azure Operator Nexus instance. These appliances play a vital role as the dedicated and persistent storage solution for the tenant workloads hosted in the Azure Operator Nexus instance.
-Within each Nexus storage appliance, multiple storage devices are grouped together to form a unified storage pool. This pool is then divided into multiple volumes, which are then presented to the compute servers and tenant workloads as persistent volumes.
+Within each Azure Operator Nexus storage appliance, multiple storage devices are grouped together to form a unified storage pool. This pool is then divided into multiple volumes, which are then presented to the compute servers and tenant workloads as persistent volumes.
-## SKUs available
+## Available SKUs
-This table lists the SKUs available for the storage appliance in Near-edge Nexus offering:
+This table lists the available SKUs for the storage appliance in the near-edge Azure Operator Nexus offering.
| SKU | Description |
| -- | - |
-| Pure x70r3-91 | Storage appliance model x70r3-91 provided by PURE Storage |
+| Pure x70r3-91 | Storage appliance model x70r3-91 provided by Pure Storage |
## Storage connectivity
-This diagram shows the connectivity model followed by storage appliance in the Near Edge offering:
+This diagram shows the connectivity model that the storage appliance follows in the near-edge offering.
## Storage limits
-This table lists the characteristics for the storage appliance:
+This table lists the characteristics of the storage appliance.
| Property | Specification/Description |
| -- | -|
| Raw storage capacity | 91 TB |
| Usable capacity | 50 TB |
-| Number of maximum IO operations supported per second <br>(with 80/20 R/W ratio) | 250K+ (4K) <br>150K+ (16K) |
-| Number of IO operations supported per volume per second | 50K+ |
-| Maximum IO latency supported | 10 ms |
+| Number of maximum I/O operations supported per second <br>(with 80/20 read/write ratio) | 250K+ (4K) <br>150K+ (16K) |
+| Number of I/O operations supported per volume per second | 50K+ |
+| Maximum I/O latency supported | 10 ms |
| Nominal failover time supported | 10 s |
orbital Sar Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/sar-reference-architecture.md
Additional contributors:
## Next steps
-- [Azure Maps Geospatial Services](https://microsoft.github.io/SynapseML/docs/features/geospatial_services/GeospatialServices%20-%20Overview)
- [Getting geospatial insights from big data using SynapseML](https://techcommunity.microsoft.com/t5/azure-maps-blog/getting-geospatial-insides-in-big-data-using-synapseml/ba-p/3154717)
- [Get started with Azure Synapse Analytics](../synapse-analytics/get-started.md)
- [Explore Azure Synapse Studio](/training/modules/explore-azure-synapse-studio)
payment-hsm Create Different Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/create-different-ip-addresses.md
New-AzureRmResourceGroupDeployment -Name $deploymentName -ResourceGroupName $res
-## Validate the deployment
-
-# [Azure CLI](#tab/azure-cli)
-
-You can verify that the payment HSM was created with the Azure CLI [az dedicated-hsm list](/cli/azure/dedicated-hsm#az-dedicated-hsm-list) command. You will find the output easier to read if you format the results as a table:
-
-```azurecli-interactive
-az dedicated-hsm list -o table
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-You can verify that the payment HSM was created with the Azure PowerShell [Get-AzDedicatedHsm](/powershell/module/az.dedicatedhsm/get-azdedicatedhsm) cmdlet.
-
-```azurepowershell-interactive
-Get-AzDedicatedHsm
-```
--
-You should see the name of your newly created payment HSM.
- ## Next steps
-Advance to the next article to learn how to access the payShield manager for your payment HSM
+Advance to the next article to learn how to view your payment HSM.
> [!div class="nextstepaction"]
-> [Access the payShield manager](access-payshield-manager.md)
+> [View your payment HSMs](view-payment-hsms.md)
+ More resources:
payment-hsm Create Different Vnet Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/create-different-vnet-template.md
New-AzureRmResourceGroupDeployment -Name $deploymentName -ResourceGroupName $res
-## Validate the deployment
-
-# [Azure CLI](#tab/azure-cli)
-
-You can verify that the payment HSM was created with the Azure CLI [az dedicated-hsm list](/cli/azure/dedicated-hsm#az-dedicated-hsm-list) command. You will find the output easier to read if you format the results as a table:
-
-```azurecli-interactive
-az dedicated-hsm list -o table
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-You can verify that the payment HSM was created with the Azure PowerShell [Get-AzDedicatedHsm](/powershell/module/az.dedicatedhsm/get-azdedicatedhsm) cmdlet.
-
-```azurepowershell-interactive
-Get-AzDedicatedHsm
-```
--
-You should see the name of your newly created payment HSM.
- ## Next steps
-Advance to the next article to learn how to access the payShield manager for your payment HSM
+Advance to the next article to learn how to view your payment HSM.
> [!div class="nextstepaction"]
-> [Access the payShield manager](access-payshield-manager.md)
+> [View your payment HSMs](view-payment-hsms.md)
More resources:
payment-hsm Create Different Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/create-different-vnet.md
The Azure portal shows the "Private IP allocation method" as "Static":
-## View your payment HSM
-
-# [Azure CLI](#tab/azure-cli)
-
-To see your payment HSM and its properties, use the Azure CLI [az dedicated-hsm show](/cli/azure/dedicated-hsm#az-dedicated-hsm-show) command.
-
-```azurecli-interactive
-az dedicated-hsm show --resource-group "myResourceGroup" --name "myPaymentHSM"
-```
-
-To list all of your payment HSMs, use the [az dedicated-hsm list](/cli/azure/dedicated-hsm#az-dedicated-hsm-list) command. (The output of this command is more readable when displayed in table-format.)
-
-```azurecli-interactive
-az dedicated-hsm list --resource-group "myResourceGroup" -o table
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-To see your payment HSM and its properties, use the Azure PowerShell [Get-AzDedicatedHsm](/powershell/module/az.dedicatedhsm/get-azdedicatedhsm) cmdlet.
-
-```azurepowershell-interactive
-Get-AzDedicatedHsm -Name "myPaymentHSM" -ResourceGroup "myResourceGroup"
-```
-
-To list all of your payment HSMs, use the [Get-AzDedicatedHsm](/powershell/module/az.dedicatedhsm/get-azdedicatedhsm) cmdlet with no parameters.
-
-To get more information on your payment HSM, you can use the [Get-AzResource](/powershell/module/az.dedicatedhsm/get-azdedicatedhsm) cmdlet, specifying the resource group, and "Microsoft.HardwareSecurityModules/dedicatedHSMs" as the resource type:
-
-```azurepowershell-interactive
-Get-AzResource -ResourceGroupName "myResourceGroup" -ResourceType "Microsoft.HardwareSecurityModules/dedicatedHSMs"
-```
--- ## Next steps
-Advance to the next article to learn how to access the payShield manager for your payment HSM
+Advance to the next article to learn how to view your payment HSM.
> [!div class="nextstepaction"]
-> [Access the payShield manager](access-payshield-manager.md)
+> [View your payment HSMs](view-payment-hsms.md)
Additional information:
payment-hsm Create Payment Hsm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/create-payment-hsm.md
The Azure portal shows the "Private IP allocation method" as "Dynamic":
-## View your payment HSM
-
-# [Azure CLI](#tab/azure-cli)
-
-To see your payment HSM and its properties, use the Azure CLI [az dedicated-hsm show](/cli/azure/dedicated-hsm#az-dedicated-hsm-show) command.
-
-```azurecli-interactive
-az dedicated-hsm show --resource-group "myResourceGroup" --name "myPaymentHSM"
-```
-
-To list all of your payment HSMs, use the [az dedicated-hsm list](/cli/azure/dedicated-hsm#az-dedicated-hsm-list) command. (The output of this command is more readable when displayed in table-format.)
-
-```azurecli-interactive
-az dedicated-hsm list --resource-group "myResourceGroup" -o table
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-To see your payment HSM and its properties, use the Azure PowerShell [Get-AzDedicatedHsm](/powershell/module/az.dedicatedhsm/get-azdedicatedhsm) cmdlet.
-
-```azurepowershell-interactive
-Get-AzDedicatedHsm -Name "myPaymentHSM" -ResourceGroup "myResourceGroup"
-```
-
-To list all of your payment HSMs, use the [Get-AzDedicatedHsm](/powershell/module/az.dedicatedhsm/get-azdedicatedhsm) cmdlet with no parameters.
-
-To get more information on your payment HSM, you can use the [Get-AzResource](/powershell/module/az.dedicatedhsm/get-azdedicatedhsm) cmdlet, specifying the resource group, and "Microsoft.HardwareSecurityModules/dedicatedHSMs" as the resource type:
-
-```azurepowershell-interactive
-Get-AzResource -ResourceGroupName "myResourceGroup" -ResourceType "Microsoft.HardwareSecurityModules/dedicatedHSMs"
-```
--- ## Next steps
-Advance to the next article to learn how to access the payShield manager for your payment HSM
+Advance to the next article to learn how to view your payment HSM.
> [!div class="nextstepaction"]
-> [Access the payShield manager](access-payshield-manager.md)
+> [View your payment HSMs](view-payment-hsms.md)
Additional information:
payment-hsm View Payment Hsms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/view-payment-hsms.md
+
+ Title: View your Azure Payment HSMs
+description: Learn how to view your Azure Payment HSMs and their properties by using the Azure CLI, Azure PowerShell, or the Azure portal.
++++++ Last updated : 08/09/2023++
+# Tutorial: View your payment HSMs
+
+After you have [created one or more Azure Payment HSMs](create-payment-hsm.md), you can view them (and validate their deployment) with Azure CLI, Azure PowerShell, or the Azure portal.
+
+## View your payment HSM
+
+# [Azure CLI](#tab/azure-cli)
+
+To list all of your payment HSMs, use the [az dedicated-hsm list](/cli/azure/dedicated-hsm#az-dedicated-hsm-list) command. (The output of this command is more readable when displayed in table format.)
+
+```azurecli-interactive
+az dedicated-hsm list --resource-group "myResourceGroup" -o table
+```
+
+To see a specific payment HSM and its properties, use the Azure CLI [az dedicated-hsm show](/cli/azure/dedicated-hsm#az-dedicated-hsm-show) command.
+
+```azurecli-interactive
+az dedicated-hsm show --resource-group "myResourceGroup" --name "myPaymentHSM"
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+To list all of your payment HSMs, use the [Get-AzDedicatedHsm](/powershell/module/az.dedicatedhsm/get-azdedicatedhsm) cmdlet with no parameters.
+
+To get more information on your payment HSMs, you can use the [Get-AzResource](/powershell/module/az.resources/get-azresource) cmdlet, specifying the resource group, and "Microsoft.HardwareSecurityModules/dedicatedHSMs" as the resource type:
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName "myResourceGroup" -ResourceType "Microsoft.HardwareSecurityModules/dedicatedHSMs"
+```
+
+To view a specific payment HSM and its properties, use the Azure PowerShell [Get-AzDedicatedHsm](/powershell/module/az.dedicatedhsm/get-azdedicatedhsm) cmdlet.
+
+```azurepowershell-interactive
+Get-AzDedicatedHsm -Name "myPaymentHSM" -ResourceGroup "myResourceGroup"
+```
+
+# [Azure portal](#tab/azure-portal)
++
+To view your payment HSMs in the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select "Resource groups".
+1. Select your resource group (e.g., "myResourceGroup").
+1. You will see your network interfaces, but not your payment HSMs. Select the "Show hidden types" box.
+ :::image type="content" source="./media/portal-view-payment-hsms.png" lightbox="./media/portal-view-payment-hsms.png" alt-text="Screenshot of the Azure portal displaying all payment HSMs.":::
+1. You can select one of your payment HSMs to see its properties.
+ :::image type="content" source="./media/portal-view-payment-hsm.png" lightbox="./media/portal-view-payment-hsm.png" alt-text="Screenshot of the Azure portal displaying a specific payment HSM and its properties.":::
+++
+## Next steps
+
+Advance to the next article to learn how to access the payShield manager for your payment HSM.
+> [!div class="nextstepaction"]
+> [Access the payShield manager](access-payshield-manager.md)
+
+Additional information:
+
+- Read an [Overview of Payment HSM](overview.md)
+- Find out how to [get started with Azure Payment HSM](getting-started.md)
+- See some common [deployment scenarios](deployment-scenarios.md)
+- Learn about [Certification and compliance](certification-compliance.md)
+- Read the [frequently asked questions](faq.yml)
postgresql Concepts Compute Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compute-storage.md
Title: Compute and Storage Options - Azure Database for PostgreSQL - Flexible Server
+ Title: Compute and storage options in Azure Database for PostgreSQL - Flexible Server
description: This article describes the compute and storage options in Azure Database for PostgreSQL - Flexible Server.
Last updated 11/30/2021
-# Compute and Storage options in Azure Database for PostgreSQL - Flexible Server
+# Compute and storage options in Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-You can create an Azure Database for PostgreSQL server in one of three different pricing tiers: Burstable, General Purpose, and Memory Optimized. The pricing tiers are differentiated by the amount of compute in vCores that can be provisioned, memory per vCore, and the storage technology used to store the data. All resources are provisioned at the PostgreSQL server level. A server can have one or many databases.
+You can create an Azure Database for PostgreSQL server in one of three pricing tiers: Burstable, General Purpose, and Memory Optimized. The pricing tiers are differentiated by the amount of compute in vCores that you can provision, the amount of memory per vCore, and the storage technology that's used to store the data. All resources are provisioned at the PostgreSQL server level. A server can have one or many databases.
-| Resource / Tier | **Burstable** | **General Purpose** | **Memory Optimized** |
+| Resource/Tier | Burstable | General Purpose | Memory Optimized |
|:|:-|:--|:|
-| VM series | B-series | Ddsv4-series, <br> Dsv3-series | Edsv4-series, <br> Esv3 series |
-| vCores | 1, 2, 4, 8, 12, 16, 20 | 2, 4, 8, 16, 32, 48, 64 | 2, 4, 8, 16, 20(v4), 32, 48, 64 |
-| Memory per vCore | Variable | 4 GB | 6.75 to 8 GB |
+| VM-series | B-series | Ddsv4-series, <br> Dsv3-series | Edsv4-series, <br> Esv3-series |
+| vCores | 1, 2, 4, 8, 12, 16, 20 | 2, 4, 8, 16, 32, 48, 64 | 2, 4, 8, 16, 20 (v4), 32, 48, 64 |
+| Memory per vCore | Variable | 4 GB | 6.75 GB to 8 GB |
| Storage size | 32 GB to 32 TB | 32 GB to 32 TB | 32 GB to 32 TB |
| Database backup retention period | 7 to 35 days | 7 to 35 days | 7 to 35 days |
-To choose a pricing tier, use the following table as a starting point.
+To choose a pricing tier, use the following table as a starting point:
| Pricing tier | Target workloads |
|:-|:--|
-| Burstable | Best for workloads that don't need the full CPU continuously. |
+| Burstable | Workloads that don't need the full CPU continuously. |
| General Purpose | Most business workloads that require balanced compute and memory with scalable I/O throughput. Examples include servers for hosting web and mobile apps and other enterprise applications.|
| Memory Optimized | High-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency. Examples include servers for processing real-time data and high-performance transactional or analytical apps.|
-After you create a server, the compute tier, number of vCores can be changed up or down and storage size can be changed up within seconds. You also can independently adjust the backup retention period up or down. For more information, see the [Scale resources](#scale-resources) section.
+After you create a server, you can change the compute tier, the number of vCores (up or down), and the storage size (up) within seconds. You also can independently adjust the backup retention period up or down. For more information, see the [Scaling resources](#scaling-resources) section.
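+For example, a minimal Azure CLI sketch of scaling an existing server might look like the following (the server and resource group names are placeholders):
+
+```azurecli-interactive
+# Scale the compute tier and grow the storage; storage can only be scaled up.
+az postgres flexible-server update \
+  --resource-group "myResourceGroup" \
+  --name "mydemoserver" \
+  --tier GeneralPurpose \
+  --sku-name Standard_D4ds_v4 \
+  --storage-size 256
+
+# Adjust the backup retention period independently.
+az postgres flexible-server update \
+  --resource-group "myResourceGroup" \
+  --name "mydemoserver" \
+  --backup-retention 14
+```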
## Compute tiers, vCores, and server types
-Compute resources can be selected based on the tier, vCores and memory size. vCores represent the logical CPU of the underlying hardware.
+You can select compute resources based on the tier, vCores, and memory size. vCores represent the logical CPU of the underlying hardware.
The detailed specifications of the available server types are as follows:
-| SKU Name | vCores |Memory Size | Max Supported IOPS | Max Supported I/O bandwidth |
+| SKU name | vCores |Memory size | Maximum supported IOPS | Maximum supported I/O bandwidth |
|-|--|-|- |--|
| **Burstable** | | | | |
| B1ms | 1 | 2 GiB | 640 | 10 MiB/sec |
-| B2s | 2 | 4 GiB | 1280 | 15 MiB/sec |
-| B2ms | 2 | 4 GiB | 1700 | 22.5 MiB/sec |
-| B4ms | 4 | 8 GiB | 2400 | 35 MiB/sec |
-| B8ms | 8 | 16 GiB | 3100 | 50 MiB/sec |
-| B12ms | 12 | 24 GiB | 3800 | 50 MiB/sec |
-| B16ms | 16 | 32 GiB | 4300 | 50 MiB/sec |
-| B20ms | 20 | 40 GiB | 5000 | 50 MiB/sec |
+| B2s | 2 | 4 GiB | 1,280 | 15 MiB/sec |
+| B2ms | 2 | 4 GiB | 1,700 | 22.5 MiB/sec |
+| B4ms | 4 | 8 GiB | 2,400 | 35 MiB/sec |
+| B8ms | 8 | 16 GiB | 3,100 | 50 MiB/sec |
+| B12ms | 12 | 24 GiB | 3,800 | 50 MiB/sec |
+| B16ms | 16 | 32 GiB | 4,300 | 50 MiB/sec |
+| B20ms | 20 | 40 GiB | 5,000 | 50 MiB/sec |
| **General Purpose** | | | | |
-| D2s_v3 / D2ds_v4 / D2ds_v5 | 2 | 8 GiB | 3200 | 48 MiB/sec |
-| D4s_v3 / D4ds_v4 / D4ds_v5 | 4 | 16 GiB | 6400 | 96 MiB/sec |
-| D8s_v3 / D8ds_v4 / D8ds_v5 | 8 | 32 GiB | 12800 | 192 MiB/sec |
-| D16s_v3 / D16ds_v4 / D16ds_v5 | 16 | 64 GiB | 20000 | 384 MiB/sec |
-| D32s_v3 / D32ds_v4 / D32ds_v5 | 32 | 128 GiB | 20000 | 768 MiB/sec |
-| D48s_v3 / D48ds_v4 / D48ds_v5 | 48 | 192 GiB | 20000 | 900 MiB/sec |
-| D64s_v3 / D64ds_v4 / D64ds_v5 | 64 | 256 GiB | 20000 | 900 MiB/sec |
-| D96ds_v5 | 96 | 384 GiB | 20000 | 900 MiB/sec |
+| D2s_v3 / D2ds_v4 / D2ds_v5 | 2 | 8 GiB | 3,200 | 48 MiB/sec |
+| D4s_v3 / D4ds_v4 / D4ds_v5 | 4 | 16 GiB | 6,400 | 96 MiB/sec |
+| D8s_v3 / D8ds_v4 / D8ds_v5 | 8 | 32 GiB | 12,800 | 192 MiB/sec |
+| D16s_v3 / D16ds_v4 / D16ds_v5 | 16 | 64 GiB | 20,000 | 384 MiB/sec |
+| D32s_v3 / D32ds_v4 / D32ds_v5 | 32 | 128 GiB | 20,000 | 768 MiB/sec |
+| D48s_v3 / D48ds_v4 / D48ds_v5 | 48 | 192 GiB | 20,000 | 900 MiB/sec |
+| D64s_v3 / D64ds_v4 / D64ds_v5 | 64 | 256 GiB | 20,000 | 900 MiB/sec |
+| D96ds_v5 | 96 | 384 GiB | 20,000 | 900 MiB/sec |
| **Memory Optimized** | | | | |
-| E2s_v3 / E2ds_v4 / E2ds_v5 | 2 | 16 GiB | 3200 | 48 MiB/sec |
-| E4s_v3 / E4ds_v4 / E4ds_v5 | 4 | 32 GiB | 6400 | 96 MiB/sec |
-| E8s_v3 / E8ds_v4 / E8ds_v5 | 8 | 64 GiB | 12800 | 192 MiB/sec |
-| E16s_v3 / E16ds_v4 / E16ds_v5 | 16 | 128 GiB | 20000 | 384 MiB/sec |
-| E20ds_v4 / E20ds_v5 | 20 | 160 GiB | 20000 | 480 MiB/sec |
-| E32s_v3 / E32ds_v4 / E32ds_v5 | 32 | 256 GiB | 20000 | 768 MiB/sec |
-| E48s_v3 / E48ds_v4 / E48ds_v5 | 48 | 384 GiB | 20000 | 900 MiB/sec |
-| E64s_v3 / E64ds_v4 | 64 | 432 GiB | 20000 | 900 MiB/sec |
-| E64ds_v5 | 64 | 512 GiB | 20000 | 900 MiB/sec |
-| E96ds_v5 | 96 | 672 GiB | 20000 | 900 MiB/sec |
+| E2s_v3 / E2ds_v4 / E2ds_v5 | 2 | 16 GiB | 3,200 | 48 MiB/sec |
+| E4s_v3 / E4ds_v4 / E4ds_v5 | 4 | 32 GiB | 6,400 | 96 MiB/sec |
+| E8s_v3 / E8ds_v4 / E8ds_v5 | 8 | 64 GiB | 12,800 | 192 MiB/sec |
+| E16s_v3 / E16ds_v4 / E16ds_v5 | 16 | 128 GiB | 20,000 | 384 MiB/sec |
+| E20ds_v4 / E20ds_v5 | 20 | 160 GiB | 20,000 | 480 MiB/sec |
+| E32s_v3 / E32ds_v4 / E32ds_v5 | 32 | 256 GiB | 20,000 | 768 MiB/sec |
+| E48s_v3 / E48ds_v4 / E48ds_v5 | 48 | 384 GiB | 20,000 | 900 MiB/sec |
+| E64s_v3 / E64ds_v4 | 64 | 432 GiB | 20,000 | 900 MiB/sec |
+| E64ds_v5 | 64 | 512 GiB | 20,000 | 900 MiB/sec |
+| E96ds_v5 | 96 | 672 GiB | 20,000 | 900 MiB/sec |
## Storage
-The storage you provision is the amount of storage capacity available to your Azure Database for PostgreSQL server. The storage is used for the database files, temporary files, transaction logs, and the PostgreSQL server logs. The total amount of storage you provision also defines the I/O capacity available to your server.
+The storage that you provision is the amount of storage capacity available to your Azure Database for PostgreSQL server. The storage is used for the database files, temporary files, transaction logs, and PostgreSQL server logs. The total amount of storage that you provision also defines the I/O capacity available to your server.
Storage is available in the following fixed sizes:

| Disk size | IOPS |
|:|:|
-| 32 GiB | Provisioned 120, Up to 3,500 |
-| 64 GiB | Provisioned 240, Up to 3,500 |
-| 128 GiB | Provisioned 500, Up to 3,500 |
-| 256 GiB | Provisioned 1100, Up to 3,500 |
-| 512 GiB | Provisioned 2300, Up to 3,500 |
+| 32 GiB | Provisioned 120; up to 3,500 |
+| 64 GiB | Provisioned 240; up to 3,500 |
+| 128 GiB | Provisioned 500; up to 3,500 |
+| 256 GiB | Provisioned 1,100; up to 3,500 |
+| 512 GiB | Provisioned 2,300; up to 3,500 |
| 1 TiB | 5,000 |
| 2 TiB | 7,500 |
| 4 TiB | 7,500 |
Storage is available in the following fixed sizes:
| 16 TiB | 18,000 |
| 32 TiB | 20,000 |
-Note that IOPS are also constrained by your VM type. Even though you can select any storage size independent of the server type, you may not be able to use all IOPS that the storage provides, especially when you choose a server with a small number of vCores.
+Your VM type also constrains IOPS. Even though you can select any storage size independently from the server type, you might not be able to use all IOPS that the storage provides, especially when you choose a server with a small number of vCores.
-You can add additional storage capacity during and after the creation of the server.
+You can add storage capacity during and after the creation of the server.
->[!NOTE]
+> [!NOTE]
> Storage can only be scaled up, not down.
-You can monitor your I/O consumption in the Azure portal or by using Azure CLI commands. The relevant metrics to monitor are [storage limit, storage percentage, storage used, and IO percent](concepts-monitoring.md).
+You can monitor your I/O consumption in the Azure portal or by using Azure CLI commands. The relevant metrics to monitor are [storage limit, storage percentage, storage used, and I/O percentage](concepts-monitoring.md).
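+For example, a hedged Azure CLI sketch for pulling those metrics follows. The resource ID is a placeholder, and `storage_percent` is an assumed metric name; use `list-definitions` first to confirm the names that your server exposes.
+
+```azurecli-interactive
+# Discover the metric names that the flexible server exposes.
+az monitor metrics list-definitions \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.DBforPostgreSQL/flexibleServers/mydemoserver" \
+  -o table
+
+# Retrieve one metric over five-minute intervals.
+az monitor metrics list \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.DBforPostgreSQL/flexibleServers/mydemoserver" \
+  --metric storage_percent \
+  --interval PT5M \
+  -o table
+```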
### Maximum IOPS for your configuration
-|SKU Name |Storage Size in GiB |32 |64 |128 |256 |512 |1,024|2,048|4,096|8,192 |16,384|32767 |
+|SKU name |Storage size in GiB |32 |64 |128 |256 |512 |1,024|2,048|4,096|8,192 |16,384|32,767 |
|||||-|-|--|--|--|--|||
-| |Maximum IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
+| |Maximum IOPS |120|240|500 |1,100|2,300 |5,000 |7,500 |7,500 |16,000 |18,000 |20,000 |
|**Burstable** | | | | | | | | | | | | |
|B1ms |640 IOPS |120|240|500 |640*|640* |640* |640* |640* |640* |640* |640* |
-|B2s |1280 IOPS |120|240|500 |1100|1280*|1280*|1280*|1280*|1280* |1280* |1280* |
-|B2ms |1280 IOPS |120|240|500 |1100|1700*|1700*|1700*|1700*|1700* |1700* |1700* |
-|B4ms |1280 IOPS |120|240|500 |1100|2300 |2400*|2400*|2400*|2400* |2400* |2400* |
-|B8ms |1280 IOPS |120|240|500 |1100|2300 |3100*|3100*|3100*|3100* |2400* |2400* |
-|B12ms |1280 IOPS |120|240|500 |1100|2300 |3800*|3800*|3800*|3800* |3800* |3800* |
-|B16ms |1280 IOPS |120|240|500 |1100|2300 |4300*|4300*|4300*|4300* |4300* |4300* |
-|B20ms |1280 IOPS |120|240|500 |1100|2300 |5000 |5000*|5000*|5000* |5000* |5000* |
+|B2s |1,280 IOPS |120|240|500 |1,100|1,280*|1,280*|1,280*|1,280*|1,280* |1,280* |1,280* |
+|B2ms |1,280 IOPS |120|240|500 |1,100|1,700*|1,700*|1,700*|1,700*|1,700* |1,700* |1,700* |
+|B4ms |1,280 IOPS |120|240|500 |1,100|2,300 |2,400*|2,400*|2,400*|2,400* |2,400* |2,400* |
+|B8ms |1,280 IOPS |120|240|500 |1,100|2,300 |3,100*|3,100*|3,100*|3,100* |2,400* |2,400* |
+|B12ms |1,280 IOPS |120|240|500 |1,100|2,300 |3,800*|3,800*|3,800*|3,800* |3,800* |3,800* |
+|B16ms |1,280 IOPS |120|240|500 |1,100|2,300 |4,300*|4,300*|4,300*|4,300* |4,300* |4,300* |
+|B20ms |1,280 IOPS |120|240|500 |1,100|2,300 |5,000 |5,000*|5,000*|5,000* |5,000* |5,000* |
|**General Purpose** | | | | | | | | | | | |
-|D2s_v3 / D2ds_v4 |3200 IOPS |120|240|500 |1100|2300 |3200*|3200*|3200*|3200* |3200* |3200* |
-|D2ds_v5 |3750 IOPS |120|240|500 |1100|2300 |3200*|3200*|3200*|3200* |3200* |3200* |
-|D4s_v3 / D4ds_v4 / D4ds_v5 |6,400 IOPS |120|240|500 |1100|2300 |5000 |6400*|6400*|6400* |6400* |6400* |
-|D8s_v3 / D8ds_v4 / D8ds_v5 |12,800 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |12800*|12800*|12800*|
-|D16s_v3 / D16ds_v4 / D16ds_v5 |20,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
-|D32s_v3 / D32ds_v4 / D32ds_v5 |20,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
-|D48s_v3 / D48ds_v4 / D48ds_v5 |20,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
-|D64s_v3 / D64ds_v4 / D64ds_v5 |20,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
-|D96ds_v5 |20,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
+|D2s_v3 / D2ds_v4 |3,200 IOPS |120|240|500 |1,100|2,300 |3,200*|3,200*|3,200*|3,200* |3,200* |3,200* |
+|D2ds_v5 |3,750 IOPS |120|240|500 |1,100|2,300 |3,200*|3,200*|3,200*|3,200* |3,200* |3,200* |
+|D4s_v3 / D4ds_v4 / D4ds_v5 |6,400 IOPS |120|240|500 |1,100|2,300 |5,000 |6,400*|6,400*|6,400* |6,400* |6,400* |
+|D8s_v3 / D8ds_v4 / D8ds_v5 |12,800 IOPS |120|240|500 |1,100|2,300 |5,000 |7,500 |7,500 |12,800*|12,800*|12,800*|
+|D16s_v3 / D16ds_v4 / D16ds_v5 |20,000 IOPS |120|240|500 |1,100|2,300 |5,000 |7,500 |7,500 |16,000 |18,000 |20,000 |
+|D32s_v3 / D32ds_v4 / D32ds_v5 |20,000 IOPS |120|240|500 |1,100|2,300 |5,000 |7,500 |7,500 |16,000 |18,000 |20,000 |
+|D48s_v3 / D48ds_v4 / D48ds_v5 |20,000 IOPS |120|240|500 |1,100|2,300 |5,000 |7,500 |7,500 |16,000 |18,000 |20,000 |
+|D64s_v3 / D64ds_v4 / D64ds_v5 |20,000 IOPS |120|240|500 |1,100|2,300 |5,000 |7,500 |7,500 |16,000 |18,000 |20,000 |
+|D96ds_v5 |20,000 IOPS |120|240|500 |1,100|2,300 |5,000 |7,500 |7,500 |16,000 |18,000 |20,000 |
|**Memory Optimized** | | | | | | | | | | | | |
-|E2s_v3 / E2ds_v4 |3200 IOPS |120|240|500 |1100|2300 |3200*|3200*|3200*|3200* |3200* |3200* |
-|E2ds_v5 |3750 IOPS |120|240|500 |1100|2300 |3200*|3200*|3200*|3200* |3200* |3200* |
-|E4s_v3 / E4ds_v4 / E4ds_v5 |6,400 IOPS |120|240|500 |1100|2300 |5000 |6400*|6400*|6400* |6400* |6400* |
-|E8s_v3 / E8ds_v4 / E8ds_v5 |12,800 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |12800*|12800*|12800*|
-|E16s_v3 / E16ds_v4 / E16ds_v5 |20,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
-|E20ds_v4/E20ds_v5 |20,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
-|E32s_v3 / E32ds_v4 / E32ds_v5 |20,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
-|E48s_v3 / E48ds_v4 / E48ds_v5 |20,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
-|E64s_v3 / E64ds_v4 / E64ds_v5 |20,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
-|E96ds_v5 |20,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
-
-When marked with a \*, IOPS are limited by the VM type you selected. Otherwise IOPS are limited by the selected storage size.
-
->[!NOTE]
-> You may see higher IOPS in the metrics due to disk level bursting. Please see the [documentation](../../virtual-machines/disk-bursting.md#disk-level-bursting) for more details.
+|E2s_v3 / E2ds_v4 |3,200 IOPS |120|240|500 |1,100|2,300 |3,200*|3,200*|3,200*|3,200* |3,200* |3,200* |
+|E2ds_v5 |3,750 IOPS |120|240|500 |1,100|2,300 |3,200*|3,200*|3,200*|3,200* |3,200* |3,200* |
+|E4s_v3 / E4ds_v4 / E4ds_v5 |6,400 IOPS |120|240|500 |1,100|2,300 |5,000 |6,400*|6,400*|6,400* |6,400* |6,400* |
+|E8s_v3 / E8ds_v4 / E8ds_v5 |12,800 IOPS |120|240|500 |1,100|2,300 |5,000 |7,500 |7,500 |12,800*|12,800*|12,800*|
+|E16s_v3 / E16ds_v4 / E16ds_v5 |20,000 IOPS |120|240|500 |1,100|2,300 |5,000 |7,500 |7,500 |16,000 |18,000 |20,000 |
+|E20ds_v4/E20ds_v5 |20,000 IOPS |120|240|500 |1,100|2,300 |5,000 |7,500 |7,500 |16,000 |18,000 |20,000 |
+|E32s_v3 / E32ds_v4 / E32ds_v5 |20,000 IOPS |120|240|500 |1,100|2,300 |5,000 |7,500 |7,500 |16,000 |18,000 |20,000 |
+|E48s_v3 / E48ds_v4 / E48ds_v5 |20,000 IOPS |120|240|500 |1,100|2,300 |5,000 |7,500 |7,500 |16,000 |18,000 |20,000 |
+|E64s_v3 / E64ds_v4 / E64ds_v5 |20,000 IOPS |120|240|500 |1,100|2,300 |5,000 |7,500 |7,500 |16,000 |18,000 |20,000 |
+|E96ds_v5 |20,000 IOPS |120|240|500 |1,100|2,300 |5,000 |7,500 |7,500 |16,000 |18,000 |20,000 |
+
+IOPS marked with an asterisk (\*) are limited by the VM type that you selected. Otherwise, the selected storage size limits the IOPS.
+
+> [!NOTE]
+> You might see higher IOPS in the metrics because of disk-level bursting. For more information, see [Managed disk bursting](../../virtual-machines/disk-bursting.md#disk-level-bursting).
### Maximum I/O bandwidth (MiB/sec) for your configuration
-|SKU Name |Storage Size, GiB |32 |64 |128 |256 |512 |1,024 |2,048 |4,096 |8,192 |16,384|32,767|
+|SKU name |Storage size in GiB |32 |64 |128 |256 |512 |1,024 |2,048 |4,096 |8,192 |16,384|32,767|
||-| | |- |- |-- |-- |-- |-- |||
-| |**Storage Bandwidth, MiB/sec** |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |900 |
+| |**Storage bandwidth in MiB/sec** |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |900 |
|**Burstable** | | | | | | | | | | | | |
|B1ms |10 MiB/sec |10* |10* |10* |10* |10* |10* |10* |10* |10* |10* |10* |
|B2s |15 MiB/sec |15* |15* |15* |15* |15* |15* |15* |15* |15* |10* |10* |
When marked with a \*, IOPS are limited by the VM type you selected. Otherwise I
|E8s_v3 / E8ds_v4 |192 MiB/sec |25 |50 |100 |125 |150 |192* |192* |192* |192* |192* |192* |
|E8ds_v5 |290 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |290* |290* |290* |
|E16s_v3 / E16ds_v4 |384 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |384* |384* |384* |
-|E16ds_v5 |600 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |600* |600* |
+|E16ds_v5 |600 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |600* |600* |
|E20ds_v4 |480 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |480* |480* |480* |
-|E20ds_v5 |750 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |750* |
+|E20ds_v5 |750 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |750* |
|E32s_v3 / E32ds_v4 |750 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |750 |
-|E32ds_v5 |865 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |865* |
+|E32ds_v5 |865 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |865* |
|E48s_v3 / E48ds_v4 / E48ds_v5 |900 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |900 |
|E64s_v3 / E64ds_v4 / E64ds_v5 |900 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |900 |
|E96ds_v5 |900 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |900 |
-When marked with a \*, I/O bandwidth is limited by the VM type you selected. Otherwise, I/O bandwidth is limited by the selected storage size.
+I/O bandwidth marked with an asterisk (\*) is limited by the VM type that you selected. Otherwise, the selected storage size limits the I/O bandwidth.
### Reaching the storage limit
-When you reach the storage limit, the server will start returning errors and prevent any further modifications. This may also cause problems with other operational activities, such as backups and WAL archival.
-
-To avoid this situation, when the storage usage reaches 95% or if the available capacity is less than 5 GiB, the server is automatically switched to **read-only mode**.
-
-We recommend to actively monitor the disk space that is in use, and increase the disk size ahead of any out of storage situation. You can set up an alert to notify you when your server storage is approaching out of disk so you can avoid any issues with running out of disk. For more information, see the documentation on [how to set up an alert](howto-alert-on-metrics.md).
+When you reach the storage limit, the server starts returning errors and prevents any further modifications. Reaching the limit might also cause problems with other operational activities, such as backups and write-ahead log (WAL) archiving.
+To avoid this situation, the server is automatically switched to read-only mode when the storage usage reaches 95 percent or when the available capacity is less than 5 GiB.
-### Storage auto-grow (Preview)
+We recommend that you actively monitor the disk space that's in use and increase the disk size before you run out of storage. You can set up an alert to notify you when your server storage is approaching an out-of-disk state. For more information, see [Use the Azure portal to set up alerts on metrics for Azure Database for PostgreSQL - Flexible Server](howto-alert-on-metrics.md).
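As a sketch of how such an alert could be created from the command line (the metric name `storage_percent`, the 85 percent threshold, and the action group are illustrative assumptions, not values from the linked article):

```azurecli-interactive
# Sketch: raise an alert when average storage consumption exceeds 85 percent
# over a 15-minute window. Adjust the threshold and action group to your needs.
az monitor metrics alert create \
    --name "pg-storage-nearly-full" \
    --resource-group <resource-group> \
    --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<server-name>" \
    --condition "avg storage_percent > 85" \
    --window-size 15m \
    --evaluation-frequency 5m \
    --action <action-group-name> \
    --description "Server storage is approaching the limit"
```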
+### Storage auto-grow (preview)
-> [!NOTE]
-> Storage auto-grow is currently in preview.
+Storage auto-grow can help ensure that your server always has enough storage capacity and doesn't become read-only. When you turn on storage auto-grow, the storage will automatically expand without affecting the workload. This feature is currently in preview.
-Enabling storage auto-grow ensures that your server always has sufficient storage capacity and avoids the possibility of it becoming read-only. When storage auto-grow is activated, the storage will automatically expand without affecting the workload. For servers with less than 1 TiB of provisioned storage, the auto-grow feature activates when storage consumption reaches 80%. For servers with 1 TB or more of storage, auto-grow activates at 90% consumption.
+For servers that have less than 1 TiB of provisioned storage, the auto-grow feature activates when storage consumption reaches 80 percent. For servers that have 1 TB or more of storage, auto-grow activates at 90 percent consumption.
-For example, if you have allocated 256 GiB of storage and enabled storage auto-grow, once the actual utilization reaches 80% (205 GB), the server's storage size will automatically increase to the next available premium disk tier, which is 512 GiB. However, if the disk size is 1 TiB or larger, the scaling threshold is set at 90%. In such cases, the scaling process begins when the utilization reaches 922 GiB, and the disk is resized to 2 TiB.
+For example, assume that you allocate 256 GiB of storage and turn on storage auto-grow. When the utilization reaches 80 percent (205 GB), the server's storage size automatically increases to the next available premium disk tier, which is 512 GiB. But if the disk size is 1 TiB or larger, the scaling threshold is set at 90 percent. In such cases, the scaling process begins when the utilization reaches 922 GiB, and the disk is resized to 2 TiB.
-Azure Database for PostgreSQL Flexible Server utilizes Azure Managed Disk v1, and the default behavior is to increase the disk size to the next premium tier. This increase is always double in both size and cost, regardless of whether the storage scaling operation is initiated manually or through storage auto-grow. Enabling storage auto-grow proves particularly valuable when managing unpredictable workloads since it automatically detects low storage conditions and scales up the storage accordingly.
+Azure Database for PostgreSQL - Flexible Server uses [Azure managed disks](/azure/virtual-machines/disks-types). The default behavior is to increase the disk size to the next premium tier. This increase is always double in both size and cost, regardless of whether you start the storage scaling operation manually or through storage auto-grow. Enabling storage auto-grow is valuable when you're managing unpredictable workloads, because it automatically detects low-storage conditions and scales up the storage accordingly.
-The process of scaling storage is performed online, without causing any downtime, except when the disk is provisioned at 4096 GiB, which is a limitation of underlying Azure managed disk V1. If a disk is already 4096 GiB, the storage scaling activity will not be triggered, even if storage auto-grow is enabled. In such cases, you need to manually scale your storage, which is an offline operation that should be planned according to your business requirements.
+The process of scaling storage is performed online without causing any downtime, except when the disk is provisioned at 4,096 GiB. This exception is a limitation of Azure managed disks. If a disk is already 4,096 GiB, the storage scaling activity won't be triggered, even if storage auto-grow is turned on. In such cases, you need to manually scale your storage. Manual scaling is an offline operation that you should plan according to your business requirements.
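As a minimal sketch, a manual storage scale operation with the Azure CLI might look like the following. The server name, resource group, and target size are placeholders; remember that a scale that crosses the 4,096-GiB boundary is an offline operation.

```azurecli-interactive
# Sketch: manually grow provisioned storage to 8,192 GiB (8 TiB).
# Crossing the 4,096 GiB boundary requires downtime, so plan the operation accordingly.
az postgres flexible-server update \
    --resource-group <resource-group> \
    --name <server-name> \
    --storage-size 8192
```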
Remember that storage can only be scaled up, not down.
-## Limitations
+## Limitations
-1. Disk scaling operations are always online except in specific scenarios involving the 4096 GiB boundary. These scenarios include reaching, starting at, or crossing the 4096 GiB limit, such as when scaling from 2048 GiB to 8192 GiB etc. This limitation is due to the underlying Azure Managed disk V1 which needs a manual disk scaling operation. You will receive an informational message in the portal when you approach this limit.
+- Disk scaling operations are always online, except in specific scenarios that involve the 4,096-GiB boundary. These scenarios include reaching, starting at, or crossing the 4,096-GiB limit. An example is when you're scaling from 2,048 GiB to 8,192 GiB.
-2. Storage auto-grow currently does not work for HA / Read replica-enabled servers; we will support this very soon.
+ This limitation is due to the underlying Azure managed disk, which needs a manual disk scaling operation. You receive an informational message in the portal when you approach this limit.
-3. Storage Autogrow does not trigger when there is high WAL usage.
+- Storage auto-grow currently doesn't work for high-availability or read-replica-enabled servers.
-> [!NOTE]
-> Storage auto-grow never triggers offline increase.
+- Storage auto-grow isn't triggered when you have high WAL usage.
+> [!NOTE]
+> Storage auto-grow never triggers an offline increase.
## Backup
-The service automatically takes backups of your server. You can select a retention period from a range of 7 to 35 days. Learn more about backups in the [concepts article](concepts-backup-restore.md).
+The service automatically takes backups of your server. You can select a retention period from a range of 7 to 35 days. To learn more about backups, see [Backup and restore in Azure Database for PostgreSQL - Flexible Server](concepts-backup-restore.md).
-## Scale resources
+## Scaling resources
-After you create your server, you can independently change the vCores, the compute tier, the amount of storage, and the backup retention period. The number of vCores can be scaled up or down. The backup retention period can be scaled up or down from 7 to 35 days. The storage size can only be increased. The resources can be scaled through the portal or Azure CLI.
+After you create your server, you can independently change the vCores, the compute tier, the amount of storage, and the backup retention period. You can scale the number of vCores up or down. You can scale the backup retention period up or down from 7 to 35 days. The storage size can only be increased. You can scale the resources through the Azure portal or the Azure CLI.
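As a rough sketch of the CLI route (the tier, SKU, and retention values here are examples only; confirm the parameters against the current `az postgres flexible-server update` reference):

```azurecli-interactive
# Sketch: scale the compute tier and size, and extend backup retention to 14 days.
# The server restarts when the compute configuration changes.
az postgres flexible-server update \
    --resource-group <resource-group> \
    --name <server-name> \
    --tier GeneralPurpose \
    --sku-name Standard_D4ds_v4 \
    --backup-retention 14
```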
> [!NOTE]
-> The storage size can only be increased. You cannot go back to a smaller storage size after the increase.
+> After you increase the storage size, you can't go back to a smaller storage size.
-When you change the number of vCores or the compute tier, the server is restarted for the new server type to take effect. During the moment when the system switches over to the new server, no new connections can be established, and all uncommitted transactions are rolled back. The time it takes to restart your server depends on crash recovery process and database activity at the time of restart. Restart typically takes one minute or less, however can be higher and can take several minutes depending upon transactional activity at time of restart. Scaling the storage works the same way, and requires restart.
+When you change the number of vCores or the compute tier, the server is restarted for the new server type to take effect. During the moment when the system switches over to the new server, no new connections can be established, and all uncommitted transactions are rolled back.
-To improve the restart time, we recommend to perform scale operations during non-peak hours, that will reduce the time needed to restart the database server.
+The time it takes to restart your server depends on the crash recovery process and database activity at the time of the restart. Restarting typically takes one minute or less. But it can be higher and can take several minutes, depending on transactional activity at the time of the restart. Scaling the storage works the same way and requires a restart.
+
+To improve the restart time, we recommend that you perform scale operations during off-peak hours. That approach reduces the time needed to restart the database server.
Changing the backup retention period is an online operation. ## Pricing
-For the most up-to-date pricing information, see the service [pricing page](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/). To see the cost for the configuration you want, the [Azure portal](https://portal.azure.com/#create/Microsoft.PostgreSQLServer) shows the monthly cost on the **Pricing tier** tab based on the options you select. If you don't have an Azure subscription, you can use the Azure pricing calculator to get an estimated price. On the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) website, select **Add items**, expand the **Databases** category, and choose **Azure Database for PostgreSQL** to customize the options.
+For the most up-to-date pricing information, see the [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/) page. The [Azure portal](https://portal.azure.com/#create/Microsoft.PostgreSQLServer) shows the monthly cost on the **Pricing tier** tab, based on the options that you select.
+
+If you don't have an Azure subscription, you can use the Azure pricing calculator to get an estimated price. On the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) website, select **Add items**, expand the **Databases** category, and then select **Azure Database for PostgreSQL** to customize the options.
## Next steps
postgresql Howto Configure Server Parameters Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-configure-server-parameters-using-cli.md
ms.devlang: azurecli Previously updated : 11/30/2021 Last updated : 8/14/2023
postgresql Howto Configure Server Parameters Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-configure-server-parameters-using-portal.md
Previously updated : 11/30/2021 Last updated : 8/14/2023 # Configure server parameters in Azure Database for PostgreSQL - Flexible Server via the Azure portal
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Previously updated : 8/2/2023 Last updated : 8/14/2023 # Release notes - Azure Database for PostgreSQL - Flexible Server
Last updated 8/2/2023
This page provides the latest news and updates regarding feature additions, engine version support, extensions, and any other announcements relevant to Flexible Server - PostgreSQL.
+## Release: August 2023
+* Support for [minor versions](./concepts-supported-versions.md) 15.3, 14.8, 13.11, 12.15, 11.20 <sup>$</sup>
## Release: July 2023 * General Availability of PostgreSQL 15 for Azure Database for PostgreSQL – Flexible Server.
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
The structure of the JSON is:
{ "properties": { "sourceDbServerResourceId":"/subscriptions/<subscriptionid>/resourceGroups/<src_ rg_name>/providers/Microsoft.DBforPostgreSQL/servers/<source server name>",-
-"sourceServerUserName": "<username>@<servername>",
-"targetServerUserName": "<username>",
"secretParameters": { "adminCredentials": { "sourceServerPassword": "<password>", "targetServerPassword": "<password>" }
+ "sourceServerUserName": "<username>@<servername>",
+ "targetServerUserName": "<username>"
}, "dbsToMigrate":
The `create` parameters that go into the json file format are as shown below:
| Parameter | Type | Description | | - | - | - | | `sourceDbServerResourceId` | Required | This parameter is the resource ID of the Single Server source and is mandatory. |
+| `adminCredentials` | Required | This parameter lists passwords for admin users for both the Single Server source and the Flexible Server target. These passwords help to authenticate against the source and target servers.
| `sourceServerUserName` | Required | The default value is the admin user created during the creation of single server and the password provided will be used for authentication against this user. In case you are not using the default user, this parameter is the user or role on the source server used for performing the migration. This user should have necessary privileges and ownership on the database objects involved in the migration and should be a member of **azure_pg_admin** role. | | `targetServerUserName` | Required | The default value is the admin user created during the creation of flexible server and the password provided will be used for authentication against this user. In case you are not using the default user, this parameter is the user or role on the target server used for performing the migration. This user should be a member of **azure_pg_admin**, **pg_read_all_settings**, **pg_read_all_stats**,**pg_stat_scan_tables** roles and should have the **Create role, Create DB** attributes. |
-| `secretParameters` | Required | This parameter lists passwords for admin users for both the Single Server source and the Flexible Server target. These passwords help to authenticate against the source and target servers.
| `dbsToMigrate` | Required | Specify the list of databases that you want to migrate to Flexible Server. You can include a maximum of eight database names at a time. | | `overwriteDbsInTarget` | Required | When set to true (default), if the target server happens to have an existing database with the same name as the one you're trying to migrate, migration tool automatically overwrites the database. | | `SetupLogicalReplicationOnSourceDBIfNeeded` | Optional | You can enable logical replication on the source server automatically by setting this property to `true`. This change in the server settings requires a server restart with a downtime of two to three minutes. |
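Putting the pieces of this change together, the reorganized `properties` body looks roughly like the following sketch. The database names and the `overwriteDbsInTarget` value are illustrative placeholders; the user name fields now sit inside `secretParameters` alongside `adminCredentials`.

```json
{
  "properties": {
    "sourceDbServerResourceId": "/subscriptions/<subscriptionid>/resourceGroups/<src_rg_name>/providers/Microsoft.DBforPostgreSQL/servers/<source server name>",
    "secretParameters": {
      "adminCredentials": {
        "sourceServerPassword": "<password>",
        "targetServerPassword": "<password>"
      },
      "sourceServerUserName": "<username>@<servername>",
      "targetServerUserName": "<username>"
    },
    "dbsToMigrate": [
      "<database-1>",
      "<database-2>"
    ],
    "overwriteDbsInTarget": "true"
  }
}
```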
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-high-availability.md
Last updated 08/3/2022
[!INCLUDE [azure-database-for-postgresql-single-server-deprecation](../includes/azure-database-for-postgresql-single-server-deprecation.md)]
-The Azure Database for PostgreSQL ΓÇô Single Server service provides a guaranteed high level of availability with the financially backed service level agreement (SLA) [uptime](https://azure.microsoft.com/support/legal/sla/postgresql) uptime. Azure Database for PostgreSQL provides high availability during planned events such as user-initiated scale compute operation, and also when unplanned events such as underlying hardware, software, or network failures occur. Azure Database for PostgreSQL can quickly recover from most critical circumstances, ensuring virtually no application down time when using this service.
+The Azure Database for PostgreSQL – Single Server service provides a guaranteed high level of availability with the financially backed service level agreement (SLA) for [uptime](https://azure.microsoft.com/support/legal/sla/postgresql). Azure Database for PostgreSQL provides high availability during planned events such as user-initiated scale compute operation, and also when unplanned events such as underlying hardware, software, or network failures occur. Azure Database for PostgreSQL can quickly recover from most critical circumstances, ensuring virtually no application downtime when using this service.
-Azure Database for PostgreSQL is suitable for running mission critical databases that require high uptime. Built on Azure architecture, the service has inherent high availability, redundancy, and resiliency capabilities to mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components.
+Azure Database for PostgreSQL is suitable for running mission-critical databases that require high uptime. Built on Azure architecture, the service has inherent high availability, redundancy, and resiliency capabilities to mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components.
## Components in Azure Database for PostgreSQL – Single Server | **Component** | **Description**| | | -- |
-| <b>PostgreSQL Database Server | Azure Database for PostgreSQL provides security, isolation, resource safeguards, and fast restart capability for database servers. These capabilities facilitate operations such as scaling and database server recovery operation after an outage to happen in seconds. <br/> Data modifications in the database server typically occur in the context of a database transaction. All database changes are recorded synchronously in the form of write ahead logs (WAL) on Azure Storage ΓÇô which is attached to the database server. During the database [checkpoint](https://www.postgresql.org/docs/11/sql-checkpoint.html) process, data pages from the database server memory are also flushed to the storage. |
-| <b>Remote Storage | All PostgreSQL physical data files and WAL files are stored on Azure Storage, which is architected to store three copies of data within a region to ensure data redundancy, availability, and reliability. The storage layer is also independent of the database server. It can be detached from a failed database server and reattached to a new database server within few seconds. Also, Azure Storage continuously monitors for any storage faults. If a block corruption is detected, it is automatically fixed by instantiating a new storage copy. |
-| <b>Gateway | The Gateway acts as a database proxy, routes all client connections to the database server. |
+| **PostgreSQL Database Server** | Azure Database for PostgreSQL provides security, isolation, resource safeguards, and fast restart capability for database servers. These capabilities enable operations such as scaling and database server recovery after an outage to complete in seconds. <br/> Data modifications in the database server typically occur in the context of a database transaction. All database changes are recorded synchronously in the form of write-ahead logs (WAL) on Azure Storage – which is attached to the database server. During the database [checkpoint](https://www.postgresql.org/docs/11/sql-checkpoint.html) process, data pages from the database server memory are also flushed to the storage. |
+| **Remote Storage** | All PostgreSQL physical data files and WAL files are stored on Azure Storage, which is architected to store three copies of data within a region to ensure data redundancy, availability, and reliability. The storage layer is also independent of the database server. It can be detached from a failed database server and reattached to a new database server within few seconds. Also, Azure Storage continuously monitors for any storage faults. If a block corruption is detected, it is automatically fixed by instantiating a new storage copy. |
+| **Gateway** | The Gateway acts as a database proxy, routes all client connections to the database server. |
## Planned downtime mitigation Azure Database for PostgreSQL is architected to provide high availability during planned downtime operations.
-1. Scale up and down PostgreSQL database servers in seconds
-2. Gateway that acts as a proxy to route client connects to the proper database server
+1. Scale up and down PostgreSQL database servers in seconds.
+2. Gateway that acts as a proxy to route client connects to the proper database server.
3. Scaling up of storage can be performed without any downtime. Remote storage enables fast detach/re-attach after the failover. Here are some planned maintenance scenarios: | **Scenario** | **Description**| | | -- |
-| <b>Compute scale up/down | When the user performs compute scale up/down operation, a new database server is provisioned using the scaled compute configuration. In the old database server, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, and then it is shut down. The storage is then detached from the old database server and attached to the new database server. When the client application retries the connection, or tries to make a new connection, the Gateway directs the connection request to the new database server.|
-| <b>Scaling Up Storage | Scaling up the storage is an online operation and does not interrupt the database server.|
-| <b>New Software Deployment (Azure) | New features rollout or bug fixes automatically happen as part of serviceΓÇÖs planned maintenance. For more information, refer to the [documentation](./concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
-| <b>Minor version upgrades | Azure Database for PostgreSQL automatically patches database servers to the minor version determined by Azure. It happens as part of service's planned maintenance. This would incur a short downtime in terms of seconds, and the database server is automatically restarted with the new minor version. For more information, refer to the [documentation](./concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
+| **Compute scale up/down** | When the user performs compute scale up/down operation, a new database server is provisioned using the scaled compute configuration. In the old database server, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, and then it's shut down. The storage is then detached from the old database server and attached to the new database server. When the client application retries the connection, or tries to make a new connection, the Gateway directs the connection request to the new database server.|
+| **Scaling up storage** | Scaling up the storage is an online operation and does not interrupt the database server.|
+| **New software deployment (Azure)** | New feature rollouts or bug fixes automatically happen as part of the service's planned maintenance. For more information, refer to the [documentation](./concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
+| **Minor version upgrades** | Azure Database for PostgreSQL automatically patches database servers to the minor version determined by Azure. It happens as part of service's planned maintenance. This would incur a short downtime in terms of seconds, and the database server is automatically restarted with the new minor version. For more information, refer to the [documentation](./concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
## Unplanned downtime mitigation Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in seconds. The remote storage is automatically attached to the new database server. PostgreSQL engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While an unplanned downtime cannot be avoided, Azure Database for PostgreSQL mitigates the downtime by automatically performing recovery operations at both database server and storage layers without requiring human intervention. 1. Azure PostgreSQL servers with fast-scaling capabilities.
-2. Gateway that acts as a proxy to route client connections to the proper database server
+2. Gateway that acts as a proxy to route client connections to the proper database server.
3. Azure storage with three copies for reliability, availability, and redundancy. 4. Remote storage also enables fast detach/re-attach after the server failover.
-### Unplanned downtime: failure scenarios and service recovery
+### Unplanned downtime: Failure scenarios and service recovery
Here are some failure scenarios and how Azure Database for PostgreSQL automatically recovers: | **Scenario** | **Automatic recovery** | | - | - |
-| <B>Database server failure | If the database server is down because of some underlying hardware fault, active connections are dropped, and any inflight transactions are aborted. A new database server is automatically deployed, and the remote data storage is attached to the new database server. After the database recovery is complete, clients can connect to the new database server through the Gateway. <br /> <br /> The recovery time (RTO) is dependent on various factors including the activity at the time of fault such as large transaction and the amount of recovery to be performed during the database server startup process. <br /> <br /> Applications using the PostgreSQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. When the application retries, the Gateway transparently redirects the connection to the newly created database server. |
-| <B>Storage failure | Applications do not see any impact for any storage-related issues such as a disk failure or a physical block corruption. As the data is stored in 3 copies, the copy of the data is served by the surviving storage. Block corruptions are automatically corrected. If a copy of data is lost, a new copy of the data is automatically created. |
-<B>Compute failure | Compute failures are rare event. In the event of compute failure a new compute container is provisioned and the storage with data files is mapped to it, PostgreSQL database engine is then brought online on the new container and gateway service ensures transparent failover without any need of application changes.Please also note that compute layer has built in Availability Zone resiliency and a new compute is spin up in different Availability zone in the event of AZ compute failure.
+| **Database server failure** | If the database server is down because of some underlying hardware fault, active connections are dropped, and any inflight transactions are aborted. A new database server is automatically deployed, and the remote data storage is attached to the new database server. After the database recovery is complete, clients can connect to the new database server through the Gateway. <br /> <br /> The recovery time (RTO) depends on various factors, including the activity at the time of the fault (such as a large transaction) and the amount of recovery to be performed during the database server startup process. <br /> <br /> Applications using the PostgreSQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. When the application retries, the Gateway transparently redirects the connection to the newly created database server. |
+| **Storage failure** | Applications do not see any impact for any storage-related issues such as a disk failure or a physical block corruption. As the data is stored in three copies, the copy of the data is served by the surviving storage. Block corruptions are automatically corrected. If a copy of data is lost, a new copy of the data is automatically created. |
+|**Compute failure** | Compute failures are rare events. If a compute failure occurs, a new compute container is provisioned and the storage with the data files is mapped to it. The PostgreSQL database engine is then brought online on the new container, and the Gateway service ensures transparent failover without requiring any application changes. Note also that the compute layer has built-in availability zone resiliency, and a new compute container is spun up in a different availability zone if an availability zone compute failure occurs. |
Here are some failure scenarios that require user action to recover: | **Scenario** | **Recovery plan** | | - | - |
-| <b> Region failure | Failure of a region is a rare event. However, if you need protection from a region failure, you can configure one or more read replicas in other regions for disaster recovery (DR). (See [this article](./how-to-read-replicas-portal.md) about creating and managing read replicas for details). In the event of a region-level failure, you can manually promote the read replica configured on the other region to be your production database server. |
-| <b> Availability zone failure | Failure of a Availability zone is also a rare event. However, if you need protection from a Availability zone failure, you can configure one or more read replicas or consider using our [Flexible Server](../flexible-server/concepts-high-availability.md) offering which provides zone redundant high availability.
-| <b> Logical/user errors | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](./concepts-backup.md) (PITR), by restoring and recovering the data until the time just before the error had occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [pg_dump](https://www.postgresql.org/docs/11/app-pgdump.html), and then use [pg_restore](https://www.postgresql.org/docs/11/app-pgrestore.html) to restore those tables into your database. |
+| **Region failure** | Failure of a region is a rare event. However, if you need protection from a region failure, you can configure one or more read replicas in other regions for disaster recovery (DR). (See [this article](./how-to-read-replicas-portal.md) about creating and managing read replicas for details). In the event of a region-level failure, you can manually promote the read replica configured on the other region to be your production database server. |
+| **Availability zone failure** | Failure of an Availability zone is also a rare event. However, if you need protection from an Availability zone failure, you can configure one or more read replicas or consider using our [Flexible Server](../flexible-server/concepts-high-availability.md) offering which provides zone-redundant high availability.
+| **Logical/user errors** | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](./concepts-backup.md) (PITR), by restoring and recovering the data until the time just before the error had occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [pg_dump](https://www.postgresql.org/docs/11/app-pgdump.html), and then use [pg_restore](https://www.postgresql.org/docs/11/app-pgrestore.html) to restore those tables into your database. |
## Summary
-Azure Database for PostgreSQL provides fast restart capability of database servers, redundant storage, and efficient routing from the Gateway. For additional data protection, you can configure backups to be geo-replicated, and also deploy one or more read replicas in other regions. With inherent high availability capabilities, Azure Database for PostgreSQL protects your databases from most common outages, and offers an industry leading, finance-backed [99.99% of uptime SLA](https://azure.microsoft.com/support/legal/sla/postgresql). All these availability and reliability capabilities enable Azure to be the ideal platform to run your mission-critical applications.
+Azure Database for PostgreSQL provides fast restart capability of database servers, redundant storage, and efficient routing from the Gateway. For additional data protection, you can configure backups to be geo-replicated, and also deploy one or more read replicas in other regions. With inherent high availability capabilities, Azure Database for PostgreSQL protects your databases from most common outages, and offers an industry leading, finance-backed [99.99% of uptime SLA](https://azure.microsoft.com/support/legal/sla/postgresql). All these availability and reliability capabilities enable Azure to be the ideal platform to run your mission-critical applications.
## Next steps
postgresql How To Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-data-encryption-cli.md
After Azure Database for PostgreSQL Single server is encrypted with a customer's
### Once the server is restored, revalidate data encryption the restored server
-* Assign identity for the replica server
+* Assign identity for the replica server
```azurecli-interactive az postgres server update --name <server name> -g <resource_group> --assign-identity ```
-* Get the existing key that has to be used for the restored/replica server
+* Get the existing key that has to be used for the restored/replica server
```azurecli-interactive az postgres server key list --name '<server_name>' -g '<resource_group_name>' ```
-* Set the policy for the new identity for the restored/replica server
+* Set the policy for the new identity for the restored/replica server
```azurecli-interactive az keyvault set-policy --name <keyvault> -g <resource_group> --key-permissions get unwrapKey wrapKey --object-id <principal id of the server returned by step 1>
az postgres server key delete -g <resource_group> --kid <key url>
Apart from Azure portal, you can also enable data encryption on your Azure Database for PostgreSQL single server using Azure Resource Manager templates for new and existing server.
-### For a new server
-
-Use one of the pre-created Azure Resource Manager templates to provision the server with data encryption enabled:
-[Example with Data encryption](https://github.com/Azure/azure-postgresql/tree/master/arm-templates/ExampleWithDataEncryption)
-
-This Azure Resource Manager template creates an Azure Database for PostgreSQL Single server and uses the **KeyVault** and **Key** passed as parameters to enable data encryption on the server.
- ### For an existing server Additionally, you can use Azure Resource Manager templates to enable data encryption on your existing Azure Database for PostgreSQL Single servers.
postgresql Whats Happening To Postgresql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/whats-happening-to-postgresql-single-server.md
Azure Database for PostgreSQL – Single Server generally became available in 2018. Given customer feedback and new advancements in the computation, availability, scalability, and performance capabilities in the Azure database landscape, the Single Server offering needs to be retired and upgraded with a new architecture. Azure Database for PostgreSQL - Flexible Server is the next generation of the service and brings you the best of Azure open-source database platform.
-As part of this retirement, we no longer support creating new single server instances from the Azure portal beginning November 30, 2023. If you need to create single server instances to meet business continuity needs, you can continue to use Azure CLI.
+As part of this retirement, we no longer support creating new single server instances from the Azure portal beginning November 30, 2023. If you need to create single server instances to meet business continuity needs, you can continue to use the Azure CLI and ARM templates. However, as of March 2025, these methods will no longer be available.
If you currently have an Azure Database for PostgreSQL - Single Server service hosting production servers, we're glad to inform you that you can migrate your Azure Database for PostgreSQL - Single Server to the Azure Database for PostgreSQL - Flexible Server.
private-5g-core Monitor Private 5G Core Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/monitor-private-5g-core-workbooks.md
+
+ Title: Monitor Azure Private 5G Core with Azure Monitor Workbooks
+description: Information on using Azure Monitor Workbooks to monitor activity and analyze statistics in your private mobile network.
++++ Last updated : 08/09/2023+++
+# Monitor Azure Private 5G Core with Azure Monitor Workbooks
+
+Azure Workbooks provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple data sources from across Azure and combine them into unified interactive experiences.
+
+Azure Workbooks allow you to view status information, metrics, and alerts for all of your [Azure private multi-access compute (MEC)](/azure/private-multi-access-edge-compute-mec/overview) resources in one place. Workbooks are supported for the **Mobile Network Site** resource, providing a monitoring solution for all resources in a site.
+
+Within your **Mobile Network Site** resource in the Azure portal, you can view workbook templates that report essential information about the resources connected to your site. Templates are curated reports designed for flexible reuse by multiple users and teams. When you open a template, a transient workbook is created and populated with the content specified in the template. You can modify a template to create your own workbooks, but the original template will remain in the gallery for others to use.
+
+## The workbook gallery
++
+The gallery lists your saved workbooks and templates. To access the gallery:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Navigate to the **Mobile Network Site** resource for the site you want to monitor.
+1. Select **Workbooks** from the left-hand navigation.
+
+Your AP5GC deployment includes a **PMEC Site Overview** template along with the default **Activity Logs Insights** template. You can also select the **Empty** quick start template to create your own workbook.
+
+- **PMEC Site Overview** – view information about resources connected to your Mobile Network Site like subscribers, packet cores and Azure Stack Edge devices.
+- **Activity Logs Insights** – view information about management changes on resources within your Mobile Network Site.
+- **Empty** – start with an empty workbook and choose the information you want to display.
+
+## Using the PMEC Site Overview template
+
+This template uses data from Azure Monitor platform metrics and alerts, Azure Resource Graph, and Azure Resource Health. It has four tabs providing real-time status information on resources connected to the Mobile Network Site.
+
+### Overview tab
+
+The Overview tab provides a comprehensive view of the Mobile Network Site. With this centralized dashboard, you can view the status of connected resources such as packet cores, Kubernetes clusters and data networks. You can also view graphs of key performance indicators, such as registered subscribers and user plane throughput, and a list of alerts.
+
+### Subscriber Provisioning Information tab
+
+The Subscriber Provisioning Information tab provides information on SIMs connected to the Mobile Network Site, filtered by SIM group. Select a SIM group to view the number of connected SIMs, their provisioning status, and associated SIM policy details.
+
+### Packet Core Control Plane Procedures tab
+
+The Packet Core Control Plane Procedures tab provides monitoring graphs for key procedures on the packet core such as registrations and session establishments.
+
+### Azure Stack Edge Status tab
+
+The Azure Stack Edge Status tab shows the status, resource usage and alerts for Azure Stack Edge (ASE) devices connected to the Mobile Network Site.
+
+## Using the Activity Log Insights template
+
+The Activity Logs Insights template provides a set of dashboards that monitor the changes to resources under your Mobile Network Site. The dashboards also present data about which users or services performed activities in the subscription and the activity status. See [Activity log insights](/azure/azure-monitor/essentials/activity-log-insights) for more information.
+
+## Next steps
+
+- [Azure Workbooks overview](/azure/azure-monitor/visualize/workbooks-overview)
+- [Get started with Azure Workbooks](/azure/azure-monitor/visualize/workbooks-getting-started)
+- [Create a workbook with a template](/azure/azure-monitor/visualize/workbooks-templates)
private-5g-core Support Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/support-lifetime.md
# Support lifetime
-Packet core versions are supported until two subsequent versions have been released (unless otherwise noted). This is typically two months after the release date. You should plan to upgrade your packet core in this time frame to avoid losing support.
+Only the two most recent packet core versions are supported at any time (unless otherwise noted). Each packet core version is typically supported for two months from the date of its release. You should plan to upgrade your packet core in this time frame to avoid losing support.
+
+### Currently Supported Packet Core Versions
+The following table shows the support status for different Packet Core releases.
+
+| Release | Support Status |
+||-|
+| AP5GC 2307 | Supported until AP5GC 2309 released |
+| AP5GC 2306 | Supported until AP5GC 2308 released |
+| AP5GC 2305 and earlier | Out of Support |
private-5g-core Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/whats-new.md
Previously updated : 07/31/2023 Last updated : 08/10/2023 # What's new in Azure Private 5G Core?
ARM API users can migrate to the 2023-06-01 API with their current resources wit
Note: ARM API users who have done a PUT using the 2023-06-01 API and have enabled configuration only accessible in the up-level API cannot go back to using the 2022-11-01 API for PUTs. If they do, then the up-level config will be deleted.
+### New cloud monitoring option - Azure Monitor Workbooks
+
+**Type:** New feature
+
+**Date available:** July 12, 2023
+
+You can now use Azure Monitor Workbooks to monitor your private mobile network. Workbooks provide versatile tools for visualizing and analyzing data. You can use workbooks to gain insights into your connected resources - including the packet core, Azure Stack Edge devices and Kubernetes clusters - using a range of visualization options. You can create new workbooks or customize one of the included templates to suit your needs.
+
+See [Monitor Azure Private 5G Core with Azure Monitor Workbooks](monitor-private-5g-core-workbooks.md) to learn more.
+ ## June 2023 ### Packet core 2306
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Cache for Redis (Microsoft.Cache/Redis) | redisCache | privatelink.redis.cache.windows.net | redis.cache.windows.net | | Azure Cache for Redis Enterprise (Microsoft.Cache/RedisEnterprise) | redisEnterprise | privatelink.redisenterprise.cache.azure.net | redisenterprise.cache.azure.net | | Microsoft Purview (Microsoft.Purview) | account | privatelink.purview.azure.com | purview.azure.com |
-| Microsoft Purview (Microsoft.Purview) | portal | privatelink.purviewstudio.azure.com | purview.azure.com |
+| Microsoft Purview (Microsoft.Purview) | portal | privatelink.purviewstudio.azure.com | purview.azure.com </br> purviewstudio.azure.com |
| Azure Digital Twins (Microsoft.DigitalTwins) | digitalTwinsInstances | privatelink.digitaltwins.azure.net | digitaltwins.azure.net | | Azure HDInsight (Microsoft.HDInsight/clusters) | N/A | privatelink.azurehdinsight.net | azurehdinsight.net | | Azure Arc (Microsoft.HybridCompute) | hybridcompute | privatelink.his.arc.azure.com <br/> privatelink.guestconfiguration.azure.com </br> privatelink.kubernetesconfiguration.azure.com | his.arc.azure.com <br/> guestconfiguration.azure.com </br> kubernetesconfiguration.azure.com |
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
A private-link resource is the destination target of a specified private endpoin
| Azure Key Vault HSM (hardware security module) | Microsoft.Keyvault/managedHSMs | HSM | | Azure Key Vault | Microsoft.KeyVault/vaults | vault | | Azure Machine Learning | Microsoft.MachineLearningServices/workspaces | amlworkspace |
+| Azure Machine Learning | Microsoft.MachineLearningServices/registries | amlregistry |
| Azure Migrate | Microsoft.Migrate/assessmentProjects | project | | Application Gateway | Microsoft.Network/applicationgateways | application gateway | | Private Link service (your own service) | Microsoft.Network/privateLinkServices | empty |
The following information lists the known limitations to the use of private endp
| | | | Effective routes and security rules unavailable for private endpoint network interface. | Effective routes and security rules won't be displayed for the private endpoint NIC in the Azure portal. | | NSG flow logs unsupported. | NSG flow logs unavailable for inbound traffic destined for a private endpoint. |
-| No more than 50 members in an Application Security Group. | Fifty is the number of IP Configurations that can be tied to each respective ASG thatΓÇÖs coupled to the NSG on the private endpoint subnet. Connection failures may occur with more than 50 members. |
+| No more than 50 members in an Application Security Group. | Fifty is the number of IP Configurations that can be tied to each respective ASG that's coupled to the NSG on the private endpoint subnet. Connection failures may occur with more than 50 members. |
| Destination port ranges supported up to a factor of 250 K. | Destination port ranges are supported as a multiplication SourceAddressPrefixes, DestinationAddressPrefixes, and DestinationPortRanges. </br></br> Example inbound rule: </br> One source * one destination * 4K portRanges = 4K Valid </br> 10 sources * 10 destinations * 10 portRanges = 1 K Valid </br> 50 sources * 50 destinations * 50 portRanges = 125 K Valid </br> 50 sources * 50 destinations * 100 portRanges = 250 K Valid </br> 100 sources * 100 destinations * 100 portRanges = 1M Invalid, NSG has too many sources/destinations/ports. | | Source port filtering is interpreted as * | Source port filtering isn't actively used as valid scenario of traffic filtering for traffic destined to a private endpoint. | | Feature unavailable in select regions. | Currently unavailable in the following regions: </br> West India </br> Australia Central 2 </br> South Africa West </br> Brazil Southeast |
reliability Asm Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/asm-retirement.md
To help with this transition, we are providing a range of resources and tools, i
Below is a list of classic resources being retired, their retirement dates, and a link to migration to ARM guidance :
-| Classic resource | Retirement date | Migration documentation |
-||||
-|[VM (classic)](https://azure.microsoft.com/updates/classicvmretirment) | Sep 23 | [Migrate VM (classic) to ARM](/azure/virtual-machines/classic-vm-deprecation?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-|[Azure Active Directory Domain Services](/azure/active-directory-domain-services/migrate-from-classic-vnet?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | Mar 23 | [Migrate Azure Active Directory Domain Services to ARM](/azure/active-directory-domain-services/migrate-from-classic-vnet?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-|[Azure Batch Cloud Service Pools](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024) | Feb 24 |[Migrate Azure Batch Cloud Service Pools to ARM](/azure/batch/batch-pool-cloud-service-to-virtual-machine-configuration?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-|[Cloud Services (classic)](https://azure.microsoft.com/updates/cloud-services-retirement-announcement) | Aug 24 |[Migrate Cloud Services (classic) to ARM](/azure/cloud-services-extended-support/in-place-migration-overview?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-|[App Service Environment v1/v2](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement) | Aug 24 |[Migrate App Service Environment v1/v2 to ARM](/azure/app-service/environment/migrate?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
-|[API Management](/azure/api-management/breaking-changes/stv1-platform-retirement-august-2024?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | Aug 24 |[Migrate API Management to ARM](/azure/api-management/compute-infrastructure?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#how-do-i-migrate-to-the-stv2-platform)
-|[Azure Redis Cache](/azure/azure-cache-for-redis/cache-faq?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#caches-with-a-dependency-on-cloud-services-(classic)) | Aug 24 |[Migrate Azure Redis Cache to ARM](/azure/azure-cache-for-redis/cache-faq?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#caches-with-a-dependency-on-cloud-services--classic)
-|[Classic Resource Providers](https://azure.microsoft.com/updates/azure-classic-resource-providers-will-be-retired-on-31-august-2024/) | Aug 24 |[Migrate Classic Resource Providers to ARM](/azure/azure-resource-manager/management/deployment-models?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
-|[Integration Services Environment](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/) | Aug 24 |[Migrate Integration Services Environment to ARM](/azure/logic-apps/export-from-ise-to-standard-logic-app?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
-|[Microsoft HPC Pack](/powershell/high-performance-computing/burst-to-cloud-services-retirement-guide?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |Aug 24| [Migrate Microsoft HPC Pack to ARM](/powershell/high-performance-computing/burst-to-cloud-services-retirement-guide)|
-|[Virtual WAN](/azure/virtual-wan/virtual-wan-faq#update-router?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | Aug 24 | [Migrate Virtual WAN to ARM](/azure/virtual-wan/virtual-wan-faq?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#update-router) |
-|[Classic Storage](https://azure.microsoft.com/updates/classic-azure-storage-accounts-will-be-retired-on-31-august-2024/) | Aug 24 | [Migrate Classic Storage to ARM](/azure/storage/common/classic-account-migration-overview?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-|[Classic Virtual Network](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) | Aug 24 | [Migrate Classic Virtual Network to ARM]( /azure/virtual-network/migrate-classic-vnet-powershell?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-|[Classic Application Gateway](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) | Aug 24 | [Migrate Classic Application Gateway to ARM](/azure/application-gateway/classic-to-resource-manager?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
-|[Classic Reserved IP addresses](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) |Aug 24| [Migrate Classic Reserved IP addresses to ARM](/azure/virtual-network/ip-services/public-ip-upgrade-classic?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-|[Classic ExpressRoute Gateway](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) |Aug 24 | [Migrate Classic ExpressRoute Gateway to ARM](/azure/expressroute/expressroute-migration-classic-resource-manager?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-|[Classic VPN gateway](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) | Aug 24 | [Migrate Classic VPN gateway to ARM]( /azure/vpn-gateway/vpn-gateway-classic-resource-manager-migration?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+| Classic resource | Retirement date | Migration documentation | Support |
+|||||
+|[VM (classic)](https://azure.microsoft.com/updates/classicvmretirment) | Sep 23 | [Migrate VM (classic) to ARM](/azure/virtual-machines/classic-vm-deprecation?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Linux](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%22cddd3eb5-1830-b494-44fd-782f691479dc%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%22e2542607-20ad-4425-e30d-eec8e2121f55%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D), [Windows](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%226f16735c-b0ae-b275-ad3a-03479cfa1396%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%228a82f77d-c3ab-7b08-d915-776b4ff64ff4%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D), [RedHat](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%22de8937fc-74cc-daa7-2639-e1fe433dcb87%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%22b4991d30-6ff3-56aa-c832-0aa9f9e8f0c1%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D), [Ubuntu](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%22240f5f1e-00c5-452d-6886-13429eddd6cf%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%229b8be6a3-1dca-0ca9-93bb-d259139a5cd5%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D), [SUSE](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%224a15f982-bfba-8ef2-a417-5fa383940392%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%2201d83b71-bc02-e38d-facd-43ce9df6da28%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D) |
+|[Azure Active Directory Domain Services](/azure/active-directory-domain-services/migrate-from-classic-vnet?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | Mar 23 | [Migrate Azure Active Directory Domain Services to ARM](/azure/active-directory-domain-services/migrate-from-classic-vnet?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [AAD Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%22a69d6bc1-d1db-61e6-2668-451ae3784f86%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%22b437f1a6-38fe-550d-9b87-85c69d33faa7%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D) |
+|[Azure Batch Cloud Service Pools](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024) | Feb 24 |[Migrate Azure Batch Cloud Service Pools to ARM](/azure/batch/batch-pool-cloud-service-to-virtual-machine-configuration?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| |
+|[Cloud Services (classic)](https://azure.microsoft.com/updates/cloud-services-retirement-announcement) | Aug 24 |[Migrate Cloud Services (classic) to ARM](/azure/cloud-services-extended-support/in-place-migration-overview?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Cloud Services Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%22e79dcabe-5f77-3326-2112-74487e1e5f78%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%22fca528d2-48bd-7c9f-5806-ce5d5b1d226f%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D) |
+|[App Service Environment v1/v2](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement) | Aug 24 |[Migrate App Service Environment v1/v2 to ARM](/azure/app-service/environment/migrate?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | [App Service Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%222fd37acf-7616-eae7-546b-1a78a16d11b5%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%22cfaf122c-93a9-a462-8b68-40ca78b60f32%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D) |
+|[API Management](/azure/api-management/breaking-changes/stv1-platform-retirement-august-2024?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | Aug 24 |[Migrate API Management to ARM](/azure/api-management/compute-infrastructure?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#how-do-i-migrate-to-the-stv2-platform) |[API Management Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%22b4d0e877-0166-0474-9a76-b5be30ba40e4%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%2217bd9098-5a17-03a0-fb7c-4d076261e407%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D) |
+|[Azure Redis Cache](/azure/azure-cache-for-redis/cache-faq?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#caches-with-a-dependency-on-cloud-services-(classic)) | Aug 24 |[Migrate Azure Redis Cache to ARM](/azure/azure-cache-for-redis/cache-faq?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#caches-with-a-dependency-on-cloud-services--classic) | [Redis Cache Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%22275635f1-6a9b-cca1-af9e-c379b30890ff%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%221b2a8dc2-790c-fedd-2e57-a608bd352c06%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D) |
+|[Classic Resource Providers](https://azure.microsoft.com/updates/azure-classic-resource-providers-will-be-retired-on-31-august-2024/) | Aug 24 |[Migrate Classic Resource Providers to ARM](/azure/azure-resource-manager/management/deployment-models?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |
+|[Integration Services Environment](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/) | Aug 24 |[Migrate Integration Services Environment to ARM](/azure/logic-apps/export-from-ise-to-standard-logic-app?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | [ISE Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%2265e73690-23aa-be68-83be-a6b9bd188345%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%224401dcbe-4183-6d63-7b0c-313ce7c4a496%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D)|
+|[Microsoft HPC Pack](/powershell/high-performance-computing/burst-to-cloud-services-retirement-guide?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |Aug 24| [Migrate Microsoft HPC Pack to ARM](/powershell/high-performance-computing/burst-to-cloud-services-retirement-guide)|[HPC Pack Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%22e00b1ed8-fc24-fef4-6f4c-36d963708ae1%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%22b0d0a49b-0eff-12cd-a955-7e9d6cd809d4%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D) |
+|[Virtual WAN](/azure/virtual-wan/virtual-wan-faq#update-router?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | Aug 24 | [Migrate Virtual WAN to ARM](/azure/virtual-wan/virtual-wan-faq?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#update-router) |[Virtual WAN Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%22d3b69052-33aa-55e7-6d30-ebb7040f9766%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%229fce0565-284f-2521-c1ac-6c80f954b323%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D) |
+|[Classic Storage](https://azure.microsoft.com/updates/classic-azure-storage-accounts-will-be-retired-on-31-august-2024/) | Aug 24 | [Migrate Classic Storage to ARM](/azure/storage/common/classic-account-migration-overview?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|[Classic Storage](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%226a9c20ed-85c7-c289-d5e2-560da8f2a7c8%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%2212adcfc2-182a-874a-066e-dda77370890a%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D) |
+|[Classic Virtual Network](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) | Aug 24 | [Migrate Classic Virtual Network to ARM]( /azure/virtual-network/migrate-classic-vnet-powershell?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Virtual network Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%22b25271d3-6431-dfbc-5f12-5693326809b3%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%227b487f07-f200-85b5-f3e1-0a2d40b71fef%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D)|
+|[Classic Application Gateway](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) | Aug 24 | [Migrate Classic Application Gateway to ARM](/azure/application-gateway/classic-to-resource-manager?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |[Application Gateway Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%22101732bb-31af-ee61-7c16-d4ad77c86a50%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%228b2086bf-19da-8ab5-41dc-ad9eadc6e9b3%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D)|
+|[Classic Reserved IP addresses](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) |Aug 24| [Migrate Classic Reserved IP addresses to ARM](/azure/virtual-network/ip-services/public-ip-upgrade-classic?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|[Reserved IP Address Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%22b25271d3-6431-dfbc-5f12-5693326809b3%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%22910d0c2f-6a50-f8cc-af5e-64bd648e3678%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D) |
+|[Classic ExpressRoute Gateway](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) |Aug 24 | [Migrate Classic ExpressRoute Gateway to ARM](/azure/expressroute/expressroute-migration-classic-resource-manager?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|[ExpressRoute Gateway Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22pesId%22%3A+%22759b4975-eee7-178d-6996-31047d078bf2%22%2C%09%0D%0A%09%22supportTopicId%22%3A+%2291ebdc1e-a04a-89df-f81d-d6209e40ff49%22%2C%0D%0A%09%22contextInfo%22%3A+%22RDFE+Migration+to+ARM%22%2C%0D%0A%09%22severity%22%3A+%224%22%0D%0A+%7D) |
+|[Classic VPN gateway](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) | Aug 24 | [Migrate Classic VPN gateway to ARM]( /azure/vpn-gateway/vpn-gateway-classic-resource-manager-migration?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| |
## Support We understand that you may have questions or concerns about this change, and we are here to help. If you require further information, please don't hesitate to reach out to our [customer support team](https://azure.microsoft.com/support).
reliability Migrate Search Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-search-service.md
If you created your search service in a region that supports availability zones
1. Add at [least two replicas to your new search service](../search/search-capacity-planning.md#add-or-reduce-replicas-and-partitions). Once the search service has at least two replicas, it automatically takes advantage of availability zone support. 1. Migrate your data from your old search service to your new search service by rebuilding all of your search indexes from your old service.
-To rebuild all of your search indexes, choose one of the following two options:
- - [Move individual indexes from your old search service to your new one](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/index-backup-restore)
+To rebuild all of your search indexes:
- Rebuild indexes from an external data source if one is available. 1. Redirect traffic from your old search service to your new search service. This may require updates to your application that uses the old search service. >[!TIP]
reliability Reliability App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-app-service.md
Previously updated : 07/26/2023 Last updated : 07/26/2023 - # Reliability in Azure App Service This article describes reliability support in [Azure App Service](../app-service/overview.md), and covers intra-regional resiliency with [availability zones](#availability-zone-support). For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
To explore how Azure App Service can bolster the resiliency of your application
### High availability #### :::image type="icon" source="media/icon-recommendation-high.svg"::: **ASP-1 - Deploy zone-redundant App Service plans**
-as zone-redundant. Follow the steps to [redeploy to availability zone support](#create-a-resource-with-availability-zone-enabled), configure your pipelines to redeploy your WebApp on the new App Services Plan, and then use a [Blue-Green deployment](/azure/spring-apps/concepts-blue-green-deployment-strategies) approach to failover to the new site.
+To enhance the resiliency and reliability of your business-critical workloads, it's recommended that you deploy your new App Service Plans with zone-redundancy. Follow the steps to [redeploy to availability zone support](#create-a-resource-with-availability-zone-enabled), configure your pipelines to redeploy your WebApp on the new App Services Plan, and then use a [Blue-Green deployment](/azure/spring-apps/concepts-blue-green-deployment-strategies) approach to failover to the new site.
By distributing your applications across multiple availability zones, you can ensure their continued operation even in the event of a datacenter-level failure. For more information on availability zone support in Azure App Service, see [Availability zone support](#availability-zone-support).
az appservice plan create --resource-group MyResourceGroup --name MyPlan --sku P
# [Azure portal](#tab/portal) + To create an App Service with availability zones using the Azure portal, enable the zone redundancy option during the "Create Web App" or "Create App Service Plan" experiences. :::image type="content" source="../app-service/media/how-to-zone-redundancy/zone-redundancy-portal.png" alt-text="Screenshot of zone redundancy enablement using the portal.":::
The capacity/number of workers/instance count can be changed once the App Servic
:::image type="content" source="../app-service/media/how-to-zone-redundancy/capacity-portal.png" alt-text="Screenshot of a capacity update using the portal."::: + # [Azure Resource Manager (ARM)](#tab/arm) + The only changes needed in an Azure Resource Manager template to specify an App Service with availability zones are the ***zoneRedundant*** property (required) and optionally the App Service plan instance count (***capacity***) on the [Microsoft.Web/serverfarms](/azure/templates/microsoft.web/serverfarms?tabs=json) resource. The ***zoneRedundant*** property should be set to ***true*** and ***capacity*** should be set based on the same conditions described previously. The Azure Resource Manager template snippet below shows the new ***zoneRedundant*** property and ***capacity*** specification.
You cannot migrate existing App Service instances or environment resources from
There's no additional cost associated with enabling availability zones. Pricing for a zone redundant App Service is the same as a single zone App Service. You'll be charged based on your App Service plan SKU, the capacity you specify, and any instances you scale to based on your autoscale criteria. If you enable availability zones but specify a capacity less than three, the platform will enforce a minimum instance count of three and charge you for those three instances. For pricing information for App Service Environment v3, see [Pricing](../app-service/environment/overview.md#pricing). + ## Next steps > [!div class="nextstepaction"] > [Reliability in Azure](/azure/availability-zones/overview) ++
reliability Reliability Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-batch.md
This article describes reliability support in Azure Batch and covers both intra-
## Availability zone support + [!INCLUDE [Availability zone description](includes/reliability-availability-zone-description-include.md)] Batch maintains parity with Azure on supporting availability zones.
reliability Reliability Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-containers.md
This article describes reliability support in Azure Container Instances (ACI) an
[!INCLUDE [Availability zone description](includes/reliability-availability-zone-description-include.md)] + Azure Container Instances supports *zonal* container group deployments, meaning the instance is pinned to a specific, self-selected availability zone. The availability zone is specified at the container group level. Containers within a container group can't have unique availability zones. To change your container group's availability zone, you must delete the container group and create another container group with the new availability zone.
reliability Reliability Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-functions.md
Availability zone support for Azure Functions is available on both Premium (Elas
## Availability zone support + [!INCLUDE [Availability zone description](includes/reliability-availability-zone-description-include.md)] Azure Functions supports both [zone-redundant and zonal instances](availability-zones-service-support.md#azure-services-with-availability-zone-support).
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
Azure reliability guidance contains the following:
| **Products** | | |
-| [Azure Cosmos DB](../cosmos-db/high-availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+|[Azure Cosmos DB](../cosmos-db/high-availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+[Azure Database for PostgreSQL - Flexible Server](reliability-postgre-flexible.md)|
[Azure Event Hubs](../event-hubs/event-hubs-geo-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#availability-zones)| [Azure ExpressRoute](../expressroute/designing-for-high-availability-with-expressroute.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Key Vault](../key-vault/general/disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
Azure reliability guidance contains the following:
| **Products** | | |
-| [Azure API Management](../api-management/high-availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+|[Azure API Management](../api-management/high-availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
[Azure App Configuration](../azure-app-configuration/faq.yml?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#how-does-app-configuration-ensure-high-data-availability)| [Azure App Service](./reliability-app-service.md)| [Azure Application Gateway (V2)](../application-gateway/application-gateway-autoscaling-zone-redundant.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
reliability Reliability Postgre Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-postgre-flexible.md
+
+ Title: Reliability and high availability in Azure Database for PostgreSQL - Flexible Server
+description: Find out about reliability and high availability in Azure Database for PostgreSQL - Flexible Server
+++++ Last updated : 08/04/2023++
+<!--#Customer intent: I want to understand reliability support in Azure Database for PostgreSQL - Flexible Server so that I can respond to and/or avoid failures in order to minimize downtime and data loss. -->
+
+# High availability (Reliability) in Azure Database for PostgreSQL - Flexible Server
+++
+This article describes high availability in Azure Database for PostgreSQL - Flexible Server, which includes [availability zones](#availability-zone-support) and [cross-region resiliency with disaster recovery](#disaster-recovery-cross-region-failover). For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
+
+Azure Database for PostgreSQL: Flexible Server offers high availability support by provisioning a physically separate primary server and standby replica either within the same availability zone (zonal) or across availability zones (zone-redundant). This high availability model is designed to ensure that committed data is never lost in the case of failures. The model is also designed so that the database doesn't become a single point of failure in your software architecture. For more information on high availability and availability zone support, see [Availability zone support](#availability-zone-support).
++
+## Availability zone support
++
+Azure Database for PostgreSQL - Flexible Server supports both [zone-redundant and zonal models](availability-zones-service-support.md#azure-services-with-availability-zone-support) for high availability configurations. Both high availability configurations enable automatic failover capability with zero data loss during both planned and unplanned events.
+
+- **Zone-redundant**. Zone redundant high availability deploys a standby replica in a different zone with automatic failover capability. Zone redundancy provides the highest level of availability, but requires you to configure application redundancy across zones. For that reason, choose zone redundancy when you want protection from availability zone level failures and when latency across the availability zones is acceptable.
+
+ You can choose the region and the availability zones for both primary and standby servers. The standby replica server is provisioned in the chosen availability zone in the same region with a similar compute, storage, and network configuration as the primary server. Data files and transaction log files (write-ahead logs, a.k.a WAL) are stored on locally redundant storage (LRS) within each availability zone, automatically storing **three** data copies. A zone-redundant configuration provides physical isolation of the entire stack between primary and standby servers.
+
+ :::image type="content" source="../postgresql/flexible-server/media/business-continuity/concepts-zone-redundant-high-availability-architecture.png" alt-text="Pictures illustrating redundant high availability architecture.":::
+
+- **Zonal**. Choose a zonal deployment when you want to achieve the highest level of availability within a single availability zone, but with the lowest network latency. You can choose the region and the availability zone to deploy your primary database server. A standby replica server is *automatically* provisioned and managed in the *same* availability zone - with similar compute, storage, and network configuration - as the primary server. A zonal configuration protects your databases from node-level failures and also helps reduce application downtime during planned and unplanned downtime events. Data from the primary server is replicated to the standby replica in synchronous mode. In the event of any disruption to the primary server, the server is automatically failed over to the standby replica.
+
+ :::image type="content" source="../postgresql/flexible-server/media/business-continuity/concepts-same-zone-high-availability-architecture.png" alt-text="Pictures illustrating zonal high availability architecture.":::
+
+
+>[!NOTE]
+>Both zonal and zone-redundant deployment models architecturally behave the same. Various discussions in the following sections apply to both unless called out otherwise.
+
+### Prerequisites
+
+**Zone redundancy:**
+
+- The **zone-redundancy** option is only available in [regions that support availability zones](../postgresql/flexible-server/overview.md#azure-regions).
+
+- Zone-redundant high availability is **not** supported for:
+
+  - Azure Database for PostgreSQL - Single Server SKU.
+  - Burstable compute tier.
+  - Regions with single-zone availability.
+
+**Zonal:**
+
+- The **zonal** deployment option is available in all [Azure regions](../postgresql/flexible-server/overview.md#azure-regions) where you can deploy Flexible Server.
++
+### High availability features
+
+* A standby replica is deployed in the same VM configuration - including vCores, storage, network settings - as the primary server.
+
+* You can add availability zone support for an existing database server.
+
+* You can remove the standby replica by disabling high availability.
+
+* You can choose availability zones for your primary and standby database servers for zone-redundant availability.
+
+* Operations such as stop, start, and restart are performed on both primary and standby database servers at the same time.
+
+* In zone-redundant and zonal models, automatic backups are performed periodically from the primary database server. At the same time, the transaction logs are continuously archived in the backup storage from the standby replica. If the region supports availability zones, backup data is stored on zone-redundant storage (ZRS). In regions that don't support availability zones, backup data is stored on locally redundant storage (LRS).
+
+* Clients always connect to the end hostname of the primary database server.
+
+* Any changes to the server parameters are also applied to the standby replica.
+
+* Ability to restart the server to pick up any static server parameter changes.
+
+* Periodic maintenance activities such as minor version upgrades happen on the standby first, and then the service is failed over to the standby to reduce downtime.
+
+### High availability limitations
+
+* Due to synchronous replication to the standby server, especially with a zone-redundant configuration, applications can experience elevated write and commit latency.
+
+* Standby replica cannot be used for read queries.
+
+* Depending on the workload and activity on the primary server, the failover process might take longer than 120 seconds due to the recovery involved at the standby replica before it can be promoted.
+
+* The standby server typically recovers WAL files at 40 MB/s. If your workload exceeds this limit, you may encounter extended time for the recovery to complete either during the failover or after establishing a new standby.
+
+* Configuring availability zones adds some latency to writes and commits, but has no impact on read queries. The performance impact varies depending on your workload. As a general guideline, the impact on writes and commits can be around 20-30%.
+
+* Restarting the primary database server also restarts the standby replica.
+
+* Configuring extra standbys is not supported.
+
+* Customer-initiated management tasks cannot be scheduled during the managed maintenance window.
+
+* Planned events such as scale compute and scale storage happen on the standby first and then on the primary server. Currently, the server doesn't fail over for these planned operations.
+
+* If logical decoding or logical replication is configured on a flexible server that's configured with high availability, then in the event of a failover to the standby server, the logical replication slots aren't copied over to the standby server.
+
+* Configuring availability zones between private (VNET) and public access isn't supported. You must configure availability zones within a VNET (spanned across availability zones within a region) or public access.
+
+* Availability zones are configured only within a single region. Availability zones cannot be configured across regions.
+
+### SLA
+
+- **Zone-redundant** model offers uptime [SLA of 99.99%](https://azure.microsoft.com/support/legal/sla/postgresql).
+
+- **Zonal** model offers uptime [SLA of 99.95%](https://azure.microsoft.com/support/legal/sla/postgresql).
+
+### Create an Azure Database for PostgreSQL - Flexible Server with availability zone enabled
+
+To learn how to create an Azure Database for PostgreSQL - Flexible Server for high availability with availability zones, see [Quickstart: Create an Azure Database for PostgreSQL - Flexible Server in the Azure portal](/azure/postgresql/flexible-server/quickstart-create-server-portal).
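As a scripted alternative to the portal quickstart, a zone-redundant server can also be created with the Azure CLI. The following is a minimal sketch only; the resource group, server name, location, SKU, zones, and credentials are placeholder values, and the available zones depend on the region you choose.

```azurecli
# Create a flexible server with zone-redundant high availability.
# All names, zones, and credentials below are placeholders - adjust them for your environment.
az postgres flexible-server create \
  --resource-group myResourceGroup \
  --name mypgflexserver \
  --location eastus \
  --tier GeneralPurpose \
  --sku-name Standard_D2s_v3 \
  --high-availability ZoneRedundant \
  --zone 1 \
  --standby-zone 2 \
  --admin-user myadminuser \
  --admin-password <your-secure-password>
```

Setting `--high-availability SameZone` instead provisions the standby in the same availability zone (the zonal model described earlier).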
+
+### Availability zone redeployment and migration
+
+To learn how to enable or disable high availability configuration in your flexible server in both zone-redundant and zonal deployment models, see [Manage high availability in Flexible Server](../postgresql/flexible-server/how-to-manage-high-availability-portal.md).
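If you prefer scripting over the portal steps linked above, high availability can typically also be toggled on an existing server with the Azure CLI. This is a hedged sketch; the `--high-availability` values shown are assumptions to verify against your CLI version, and the names are placeholders.

```azurecli
# Enable zone-redundant high availability on an existing server (placeholder names).
az postgres flexible-server update \
  --resource-group myResourceGroup \
  --name mypgflexserver \
  --high-availability ZoneRedundant

# Disable high availability, which removes the standby replica.
az postgres flexible-server update \
  --resource-group myResourceGroup \
  --name mypgflexserver \
  --high-availability Disabled
```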
++
+### High availability components and workflow
+
+#### Transaction completion
+
+Application transaction-triggered writes and commits are first logged to the WAL on the primary server. The log data is then streamed to the standby server using the Postgres streaming protocol. Once the logs are persisted on the standby server storage, write completion is acknowledged to the primary server. Only then are the writes confirmed to the application. This extra round-trip adds latency to your application. The percentage of impact depends on the application. This acknowledgment process does not wait for the logs to be applied to the standby server. The standby server is permanently in recovery mode until it is promoted.
+
+#### Health check
+
+Flexible server health monitoring periodically checks the health of both the primary and standby servers. If, after multiple pings, health monitoring detects that the primary server isn't reachable, the service initiates an automatic failover to the standby server. The health monitoring algorithm is based on multiple data points to avoid false positive situations.
+
+#### Failover modes
+
+Flexible server supports two failover modes, [**Planned failover**](#planned-failover) and [**Unplanned failover**](#unplanned-failover). In both modes, once replication is severed, the standby server runs recovery before being promoted to primary and opened for read/write. Because DNS entries are automatically updated with the new primary server endpoint, applications can connect to the server using the same endpoint. A new standby server is established in the background, so your application can maintain connectivity.
++
+#### High availability status
+
+The health of the primary and standby servers is continuously monitored, and appropriate actions are taken to remediate issues, including triggering a failover to the standby server. The table below lists the possible high availability statuses:
+
+| **Status** | **Description** |
+| - | |
+| **Initializing** | In the process of creating a new standby server. |
+| **Replicating Data** | After the standby is created, it is catching up with the primary. |
+| **Healthy** | Replication is in steady state and healthy. |
+| **Failing Over** | The database server is in the process of failing over to the standby. |
+| **Removing Standby** | In the process of deleting standby server. |
+| **Not Enabled** | Zone redundant high availability is not enabled. |
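If you want to check these statuses programmatically rather than in the portal, the server resource exposes its high availability configuration and state. The query path below is an assumption about the current output shape and may differ across CLI and API versions; the server names are placeholders.

```azurecli
# Show the high availability mode and current state of a server (placeholder names).
az postgres flexible-server show \
  --resource-group myResourceGroup \
  --name mypgflexserver \
  --query "{mode: highAvailability.mode, state: highAvailability.state}" \
  --output table
```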
+
+
+>[!NOTE]
+> You can enable high availability during server creation or at a later time. If you enable or disable high availability after the server is created, it's recommended that you do so when primary server activity is low.
+
+#### Steady-state operations
+
+PostgreSQL client applications are connected to the primary server using the DB server name. Application reads are served directly from the primary server. At the same time, commits and writes are confirmed to the application only after the log data is persisted on both the primary server and the standby replica. Due to this extra round-trip, applications can expect elevated latency for writes and commits. You can monitor the health of the high availability on the portal.
++
+1. Clients connect to the flexible server and perform write operations.
+2. Changes are replicated to the standby site.
+3. Primary receives an acknowledgment.
+4. Writes/commits are acknowledged.
+
+#### Point-in-time restore of high availability servers
+
+For flexible servers configured with high availability, log data is replicated in real time to the standby server. Any user errors on the primary server - such as an accidental drop of a table or incorrect data updates - are replicated to the standby replica. So, you cannot use the standby to recover from such logical errors. To recover from such errors, you have to perform a point-in-time restore from the backup. Using a flexible server's point-in-time restore capability, you can restore to a time before the error occurred. For databases configured with high availability, a new database server is restored as a single-zone flexible server with a new, user-provided server name. You can use the restored server for a few use cases:
+
+- You can use the restored server for production and optionally enable zone-redundant high availability.
+
+- If you want to restore an object, export it from the restored database server and import it to your production database server.
+- If you want to clone your database server for testing and development purposes or to restore for any other purposes, you can perform the point-in-time restore.
+
+To learn how to do a point-in-time restore of a flexible server, see [Point-in-time restore of a flexible server](/azure/postgresql/flexible-server/how-to-restore-server-portal).
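As a rough CLI illustration of the restore flow described above, the sketch below restores a new (single-zone) server from the source server's backups to a chosen point in time. The server names and timestamp are placeholders.

```azurecli
# Restore to a new server from the source server's backups at a specific point in time.
# The restored server is created without high availability; enable it afterwards if needed.
az postgres flexible-server restore \
  --resource-group myResourceGroup \
  --name mypgflexserver-restored \
  --source-server mypgflexserver \
  --restore-time "2023-08-04T13:10:00Z"
```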
+
+### Failover Support
+
+#### Planned failover
+
+Planned downtime events include Azure scheduled periodic software updates and minor version upgrades. You can also use a planned failover to return the primary server to a preferred availability zone. When configured in high availability, these operations are first applied to the standby replica while the applications continue to access the primary server. Once the standby replica is updated, primary server connections are drained, and a failover is triggered, which activates the standby replica to be the primary with the same database server name. Client applications have to reconnect with the same database server name to the new primary server and can resume their operations. A new standby server is established in the same zone as the old primary.
+
+For other user-initiated operations such as scale-compute or scale-storage, the changes are applied on the standby first, followed by the primary. Currently, the service is not failed over to the standby, and hence while the scale operation is carried out on the primary server, applications will encounter a short downtime.
+
+You can also use this feature to fail over to the standby server with reduced downtime. For example, after an unplanned failover, your primary server could be in a different availability zone than your application. In that case, you can bring the primary server back to the previous zone to colocate it with your application.
+
+When executing this feature, the standby server is first prepared to ensure it is caught up with recent transactions, allowing the application to continue performing reads/writes. The standby is then promoted, and the connections to the primary are severed. Your application can continue to write to the new primary while another standby server is established in the background. The following are the steps involved with planned failover.
+
+| **Step** | **Description** | **App downtime expected?** |
+ | - | | -- |
 | 1 | Wait for the standby server to catch up with the primary. | No |
+ | 2 | Internal monitoring system initiates the failover workflow. | No |
+ | 3 | Application writes are blocked when the standby server is close to the primary log sequence number (LSN). | Yes |
+ | 4 | Standby server is promoted to be an independent server. | Yes |
+ | 5 | DNS record is updated with the new standby server's IP address. | Yes |
 | 6 | Application reconnects and resumes its read/write operations with the new primary. | No |
+ | 7 | A new standby server in another zone is established. | No |
+ | 8 | Standby server starts to recover logs (from Azure BLOB) that it missed during its establishment. | No |
+ | 9 | A steady state between the primary and the standby server is established. | No |
+ | 10 | Planned failover process is complete. | No |
+
+Application downtime starts at step #3, and the application can resume operations after step #5. The rest of the steps happen in the background without impacting application writes and commits.
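If you want to exercise this flow yourself, for example to move the primary back to a preferred availability zone, a planned failover can be triggered on demand. The command below is a sketch with placeholder names.

```azurecli
# Trigger an on-demand planned failover to the standby replica (placeholder names).
az postgres flexible-server restart \
  --resource-group myResourceGroup \
  --name mypgflexserver \
  --failover Planned
```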
++
+>[!TIP]
+>With flexible server, you can optionally schedule Azure-initiated maintenance activities by choosing a 60-minute window on a day of your preference where the activities on the databases are expected to be low. Azure maintenance tasks such as patching or minor version upgrades would happen during that window. If you don't choose a custom window, a system-allocated one-hour window between 11 PM and 7 AM local time is selected for your server.
+
+>These Azure-initiated maintenance activities are also performed on the standby replica for flexible servers that are configured with availability zones.
++
+For a list of possible planned downtime events, see [Planned downtime events](/azure/postgresql/flexible-server/concepts-business-continuity#planned-downtime-events)
+
+#### Unplanned failover
+
+Unplanned downtimes can occur as a result of unforeseen disruptions such as underlying hardware faults, networking issues, and software bugs. If a database server configured with high availability goes down unexpectedly, then the standby replica is activated and the clients can resume their operations. If the server isn't configured with high availability (HA) and the restart attempt fails, a new database server is automatically provisioned. While unplanned downtime can't be avoided, flexible server helps mitigate the downtime by automatically performing recovery operations without requiring human intervention.
+
+For information on unplanned failovers and downtime, including possible scenarios, see [Unplanned downtime mitigation](/azure/postgresql/flexible-server/concepts-business-continuity#unplanned-downtime-mitigation).
++
+#### Failover testing (forced failover)
+
+With a forced failover, you can simulate an unplanned outage scenario while running your production workload and observe your application downtime. You can also use a forced failover when your primary server becomes unresponsive.
+
+A forced failover brings the primary server down and initiates the failover workflow in which the standby promote operation is performed. Once the standby completes the recovery process till the last committed data, it is promoted to be the primary server. DNS records are updated, and your application can connect to the promoted primary server. Your application can continue to write to the primary while a new standby server is established in the background, which doesn't impact the uptime.
+
+The following are the steps during forced failover:
+
+ | **Step** | **Description** | **App downtime expected?** |
+ | - | | -- |
+ | 1 | Primary server is stopped shortly after receiving the failover request. | Yes |
+ | 2 | Application encounters downtime as the primary server is down. | Yes |
+ | 3 | Internal monitoring system detects the failure and initiates a failover to the standby server. | Yes |
+ | 4 | Standby server enters recovery mode before being fully promoted as an independent server. | Yes |
+ | 5 | The failover process waits for the standby recovery to complete. | Yes |
+ | 6 | Once the server is up, the DNS record is updated with the same hostname but using the standby's IP address. | Yes |
+ | 7 | Application can reconnect to the new primary server and resume the operation. | No |
+ | 8 | A standby server in the preferred zone is established. | No |
+ | 9 | Standby server starts to recover logs (from Azure BLOB) that it missed during its establishment. | No |
+ | 10 | A steady state between the primary and the standby server is established. | No |
+ | 11 | Forced failover process is complete. | No |
+
+Application downtime is expected to start after step #1 and persist until step #6 is completed. The rest of the steps happen in the background without impacting the application writes and commits.
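To simulate the unplanned outage scenario above and measure downtime from the client side, a forced failover can be triggered on demand. This is a sketch with placeholder names; as noted below, wait 15-20 minutes between consecutive failovers so the new standby is fully established.

```azurecli
# Force a failover to simulate an unplanned outage (placeholder names).
az postgres flexible-server restart \
  --resource-group myResourceGroup \
  --name mypgflexserver \
  --failover Forced
```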
+
+>[!Important]
+>The end-to-end failover process includes (a) failing over to the standby server after the primary failure and (b) establishing a new standby server in a steady state. As your application incurs downtime until the failover to the standby is complete, **please measure the downtime from your application/client perspective** instead of the overall end-to-end failover process.
++
+#### Considerations while performing forced failovers
+
+* The overall end-to-end operation time may be seen as longer than the actual downtime experienced by the application.
+
+ >[!IMPORTANT]
+ > Always observe the downtime from the application perspective!
+
+* Don't perform immediate, back-to-back failovers. Wait for at least 15-20 minutes between failovers, allowing the new standby server to be fully established.
+
+* It's recommended that you perform a forced failover during a low-activity period to reduce downtime.
++
+### Zone-down experience
+
+**Zonal**. To recover from a zone-level failure, you can [perform point-in-time restore](#point-in-time-restore-of-high-availability-servers) using the backup. You can choose a custom restore point with the latest time to restore the latest data. A new flexible server is deployed in another non-impacted zone. The time taken to restore depends on the previous backup and the volume of transaction logs to recover.
+
+For more information on point-in-time restore, see [Backup and restore in Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/concepts-backup-restore).
+
+**Zone-redundant**. Flexible server is automatically failed over to the standby server within 60-120s with zero data loss.
++
+## Configurations without availability zones
+
+Although it's not recommended, you can configure your flexible server without high availability enabled. For flexible servers configured without high availability, the service provides locally redundant storage with three copies of data, zone-redundant backup (in regions where it is supported), and built-in server resiliency to automatically restart a crashed server and relocate the server to another physical node. An uptime [SLA of 99.9%](https://azure.microsoft.com/support/legal/sla/postgresql) is offered in this configuration. During planned or unplanned failover events, if the server goes down, the service maintains the availability of the servers using the following automated procedure:
+
+1. A new compute Linux VM is provisioned.
+2. The storage with the data files is mapped to the new virtual machine.
+3. PostgreSQL database engine is brought online on the new virtual machine.
+
+The picture below shows the transition between VM and storage failure.
++
+## Disaster recovery: cross-region failover
+
+In the case of a region-wide disaster, Azure can provide protection from regional or large geography disasters with disaster recovery by making use of another region. For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md).
+
+Flexible server provides features that protect data and mitigate downtime for your mission-critical databases during planned and unplanned downtime events. Built on top of the Azure infrastructure that offers robust resiliency and availability, flexible server offers business continuity features that provide fault protection, address recovery time requirements, and reduce data loss exposure. As you architect your applications, you should consider the downtime tolerance - the recovery time objective (RTO) - and the data loss exposure - the recovery point objective (RPO). For example, your business-critical database requires stricter uptime than a test database.
+
+### Cross-region disaster recovery in multi-region geography
+
+#### Geo-redundant backup and restore
+
+Geo-redundant backup and restore provides the ability to restore your server in a different region in the event of a disaster. It also provides at least 99.99999999999999 percent (16 nines) durability of backup objects over a year.
+
+Geo-redundant backup can be configured only at the time of server creation. When the server is configured with geo-redundant backup, the backup data and transaction logs are copied to the paired region asynchronously through storage replication.
+
+For more information on geo-redundant backup and restore, see [geo-redundant backup and restore](/azure/postgresql/flexible-server/concepts-backup-restore#geo-redundant-backup-and-restore).
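Because geo-redundant backup can only be chosen when the server is created, it has to be part of the initial deployment. The sketch below assumes the CLI flag name and uses placeholder values; other create parameters are omitted for brevity.

```azurecli
# Geo-redundant backup must be enabled at creation time (placeholder names, other
# create parameters omitted for brevity).
az postgres flexible-server create \
  --resource-group myResourceGroup \
  --name mypgflexserver \
  --location eastus \
  --geo-redundant-backup Enabled
```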
+
+#### Read replicas
+
+Cross-region read replicas can be deployed to protect your databases from region-level failures. Read replicas are updated asynchronously using PostgreSQL's physical replication technology, and may lag behind the primary. Read replicas are supported in the General Purpose and Memory Optimized compute tiers.
+
+For more information on read replica features and considerations, see [Read replicas](/azure/postgresql/flexible-server/concepts-read-replicas).
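A read replica can also be provisioned from the CLI. Treat the command below as a sketch: the names are placeholders, and the `--location` parameter for placing the replica in another region is an assumption to verify against your CLI version.

```azurecli
# Create a read replica of an existing server in another region (placeholder values).
az postgres flexible-server replica create \
  --resource-group myResourceGroup \
  --replica-name mypgflexserver-replica \
  --source-server mypgflexserver \
  --location westus3
```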
+
+#### Outage detection, notification, and management
+
+If your server is configured with geo-redundant backup, you can perform geo-restore in the paired region. A new server is provisioned and recovered to the last available data that was copied to this region.
+
+You can also use cross-region read replicas. In the event of a region failure, you can perform a disaster recovery operation by promoting your read replica to be a standalone read-write server. RPO is expected to be up to 5 minutes (data loss possible), except in the case of severe regional failure, when the RPO can be close to the replication lag at the time of failure.
+
+For more information on unplanned downtime mitigation as well as recovery after regional disaster, see [Unplanned downtime mitigation](/azure/postgresql/flexible-server/concepts-business-continuity#unplanned-downtime-mitigation).
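In CLI terms, the two recovery paths described above correspond roughly to a geo-restore of the geo-redundant backup or a promotion of the cross-region read replica. The commands below are a hedged sketch only; the subcommand names and flags are assumptions that should be verified against your CLI version, and all names are placeholders.

```azurecli
# Option 1: geo-restore a server with geo-redundant backup into the paired region.
az postgres flexible-server geo-restore \
  --resource-group myResourceGroup \
  --name mypgflexserver-dr \
  --source-server mypgflexserver \
  --location westus

# Option 2: promote an existing read replica to a standalone read-write server
# by stopping replication from the primary.
az postgres flexible-server replica stop-replication \
  --resource-group myResourceGroup \
  --name mypgflexserver-replica
```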
+++++
+## Next steps
+> [!div class="nextstepaction"]
+> [Azure Database for PostgreSQL documentation](/azure/postgresql/)
+
+> [!div class="nextstepaction"]
+> [Reliability in Azure](availability-zones-overview.md)
reliability Reliability Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machines.md
For migrating existing virtual machine resources to a zone redundant configurati
In the case of a region-wide disaster, Azure can provide protection from regional or large geography disasters with disaster recovery by making use of another region. For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md).
-Customers can use Cross Region to restore Azure VMs via paired regions. You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region. For more details on Cross Region restore, refer to the Cross Region table row entry in our [restore options](../backup/backup-azure-arm-restore-vms.md#restore-options).
-
+You can use Cross Region restore to restore Azure VMs via paired regions. With Cross Region restore, you can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region. For more details on Cross Region restore, refer to the Cross Region table row entry in our [restore options](../backup/backup-azure-arm-restore-vms.md#restore-options).
### Cross-region disaster recovery in multi-region geography
-While Microsoft is working diligently to restore the virtual machine service for region-wide service disruptions, customers will have to rely on other application-specific backup strategies to achieve the highest level of availability. For more information, see the section on [Data strategies for disaster recovery](/azure/architecture/reliability/disaster-recovery#disaster-recovery-plan).
+In the case of a region-wide service disruption, Microsoft works diligently to restore the virtual machine service. However, you will still have to rely on other application-specific backup strategies to achieve the highest level of availability. For more information, see the section on [Data strategies for disaster recovery](/azure/architecture/reliability/disaster-recovery#disaster-recovery-plan).
#### Outage detection, notification, and management
-When the hardware or the physical infrastructure for the virtual machine fails unexpectedly. This can include local network failures, local disk failures, or other rack level failures. When detected, the Azure platform automatically migrates (heals) your virtual machine to a healthy physical machine in the same data center. During the healing procedure, virtual machines experience downtime (reboot) and in some cases loss of the temporary drive. The attached OS and data disks are always preserved.
+Hardware or physical infrastructure for the virtual machine may fail unexpectedly. Unexpected failures can include local network failures, local disk failures, or other rack level failures. When detected, the Azure platform automatically migrates (heals) your virtual machine to a healthy physical machine in the same data center. During the healing procedure, virtual machines experience downtime (reboot) and in some cases loss of the temporary drive. The attached OS and data disks are always preserved.
For more detailed information on virtual machine service disruptions, see [disaster recovery guidance](/azure/virtual-machines/virtual-machines-disaster-recovery-guidance).
When setting up disaster recovery for virtual machines, understand what [Azure S
### Single-region geography disaster recovery
-With disaster recovery set up, Azure VMs will continuously replicate to a different target region. If an outage occurs, you can fail over VMs to the secondary region, and access them from there.
+With disaster recovery set up, Azure VMs continuously replicate to a different target region. If an outage occurs, you can fail over VMs to the secondary region, and access them from there.
When you replicate Azure VMs using [Site Recovery](../site-recovery/site-recovery-overview.md), all the VM disks are continuously replicated to the target region asynchronously. The recovery points are created every few minutes. This gives you a Recovery Point Objective (RPO) in the order of minutes. You can conduct disaster recovery drills as many times as you want, without affecting the production application or the ongoing replication. For more information, see [Run a disaster recovery drill to Azure](../site-recovery/tutorial-dr-drill-azure.md).
For deploying virtual machines, customers can use [flexible orchestration](../vi
## Next steps > [!div class="nextstepaction"] > [Resiliency in Azure](/azure/reliability/availability-zones-overview)++
remote-rendering System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/system-requirements.md
Even though the correct H265 codec might be installed, security properties on th
### Desktop Windows
-32-Bit Windows versions are not supported.
+Requirements and limitations:
-You need to install the latest [Microsoft Visual C++ Redistributable package](https://aka.ms/vs/17/release/vc_redist.x64.exe) to be able to run any Azure Remote Rendering application.
+* 32-Bit Windows versions are not supported.
+* You need to install the latest [Microsoft Visual C++ Redistributable package](https://aka.ms/vs/17/release/vc_redist.x64.exe) to be able to run any Azure Remote Rendering application.
+* No VR support. Only the [simulation](../concepts/graphics-bindings.md#simulation) graphics binding is supported.
+* Only the DX11 rendering API is supported.
It's important to use the latest HEVC codec, as newer versions have significant improvements in latency. To check which version is installed on your device:
It's important to use the latest HEVC codec, as newer versions have significant
### HoloLens 2
-> [!NOTE]
-> The [render from PV camera](/windows/mixed-reality/mixed-reality-capture-for-developers#render-from-the-pv-camera-opt-in) feature isn't supported.
+Requirements and limitations:
+
+* Both Unity's [OpenXR](https://docs.unity3d.com/Manual/com.unity.xr.openxr.html) (Unity 2020 or newer) and [Windows XR Plugin](https://docs.unity3d.com/2020.3/Documentation/Manual/com.unity.xr.windowsmr.html) (up to Unity 2020 only) are supported.
+* The [render from PV camera](/windows/mixed-reality/mixed-reality-capture-for-developers#render-from-the-pv-camera-opt-in) feature isn't supported.
+* Only the DX11 rendering API is supported.
### Quest 2 and Quest Pro
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
Microsoft Sentinel Contributor [Learn more](../sentinel/roles.md)
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/analytics/query/action | Search using new engine. | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/*/read | View log analytics data | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/savedSearches/* | |
-> | [Microsoft.OperationsManagement](resource-provider-operations.md#microsoftoperationsmanagement)/solutions/read | Get exiting OMS solution |
+> | [Microsoft.OperationsManagement](resource-provider-operations.md#microsoftoperationsmanagement)/solutions/read | Get existing OMS solution |
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/query/read | Run queries over the data in the workspace | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/query/*/read | | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/dataSources/read | Get data source under a workspace. |
Microsoft Sentinel Reader [Learn more](../sentinel/roles.md)
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/*/read | View log analytics data | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/LinkedServices/read | Get linked services under given workspace. | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/savedSearches/read | Gets a saved search query. |
-> | [Microsoft.OperationsManagement](resource-provider-operations.md#microsoftoperationsmanagement)/solutions/read | Get exiting OMS solution |
+> | [Microsoft.OperationsManagement](resource-provider-operations.md#microsoftoperationsmanagement)/solutions/read | Get existing OMS solution |
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/query/read | Run queries over the data in the workspace | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/query/*/read | | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/querypacks/*/read | |
Microsoft Sentinel Responder [Learn more](../sentinel/roles.md)
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/*/read | View log analytics data | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/dataSources/read | Get data source under a workspace. | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/savedSearches/read | Gets a saved search query. |
-> | [Microsoft.OperationsManagement](resource-provider-operations.md#microsoftoperationsmanagement)/solutions/read | Get exiting OMS solution |
+> | [Microsoft.OperationsManagement](resource-provider-operations.md#microsoftoperationsmanagement)/solutions/read | Get existing OMS solution |
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/query/read | Run queries over the data in the workspace | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/query/*/read | | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/dataSources/read | Get data source under a workspace. |
role-based-access-control Conditions Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-format.md
Previously updated : 05/09/2023 Last updated : 08/10/2023 #Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions.
The following table lists the supported environment attributes for conditions.
| Display name | Description | Attribute | Type | | | | | |
-| Subnet<sup>1</sup> | Use this attribute in conditions to restrict access from a specific subnet. | `Microsoft.Network/virtualNetworks/subnets` | [String](#string-comparison-operators) |
-| Private endpoint<sup>2</sup> | Use this attribute in conditions to restrict access over a specific private endpoint. | `Microsoft.Network/privateEndpoints` | [String](#string-comparison-operators) |
-| Is private link | Use this attribute in conditions to require access over any private link. | `isPrivateLink` | [Boolean](#boolean-comparison-operators) |
+| [Is private link](../storage/blobs/storage-auth-abac-attributes.md#is-private-link)<sup>1</sup> | Use this attribute in conditions to require access over any private link. | `isPrivateLink` | [Boolean](#boolean-comparison-operators) |
+| [Private endpoint](../storage/blobs/storage-auth-abac-attributes.md#private-endpoint)<sup>1,2</sup> | Use this attribute in conditions to restrict access over a specific private endpoint. | `Microsoft.Network/privateEndpoints` | [String](#string-comparison-operators) |
+| [Subnet](../storage/blobs/storage-auth-abac-attributes.md#subnet)<sup>1,3</sup> | Use this attribute in conditions to restrict access from a specific subnet. | `Microsoft.Network/virtualNetworks/subnets` | [String](#string-comparison-operators) |
| UTC now | Use this attribute in conditions to restrict access to objects during specific time periods. | `UtcNow` | [DateTime](#datetime-comparison-operators) |
-<sup>1</sup> You can only use the **Subnet** attribute if you currently have at least one virtual network subnet configured in your subscription.
-
-<sup>2</sup> You can only use the **Private endpoint** attribute if you currently have at least one private endpoint configured in your subscription.
+<sup>1</sup> For copy operations, the `Is private link`, `Private endpoint`, and `Subnet` attributes only apply to the destination, such as a storage account, not the source. For more information about the copy operations this applies to, select each attribute in the table to see more details.<br />
+<sup>2</sup> You can only use the `Private endpoint` attribute if you currently have at least one private endpoint configured in your subscription.<br />
+<sup>3</sup> You can only use the `Subnet` attribute if you currently have at least one virtual network subnet configured in your subscription.<br />
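As an illustration of how these environment attributes are used, the following Azure CLI sketch assigns a role with a condition that requires access over a private link. The scope, assignee, and condition shown are placeholders and only illustrate the general shape of an ABAC condition.

```azurecli-interactive
# Illustrative sketch: read access to blobs is allowed only over a private link.
# The assignee object ID and scope are placeholders.
az role assignment create \
    --role "Storage Blob Data Reader" \
    --assignee "<assignee-object-id>" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>" \
    --condition "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Environment[isPrivateLink] BoolEquals true))" \
    --condition-version "2.0"
```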
#### Principal attributes
role-based-access-control Rbac And Directory Admin Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/rbac-and-directory-admin-roles.md
na Previously updated : 08/08/2023 Last updated : 08/09/2023
The following diagram is a high-level view of how the Azure roles, Azure AD role
| Azure role | Permissions | Notes | | | | |
-| [Owner](built-in-roles.md#owner) | <ul><li>Full access to all resources</li><li>Delegate access to others</li></ul> | The Service Administrator and Co-Administrators are assigned the Owner role at the subscription scope<br>Applies to all resource types. |
-| [Contributor](built-in-roles.md#contributor) | <ul><li>Create and manage all of types of Azure resources</li><li>Can't grant access to others</li></ul> | Applies to all resource types. |
+| [Owner](built-in-roles.md#owner) | <ul><li>Grants full access to manage all resources</li><li>Assign roles in Azure RBAC</li></ul> | The Service Administrator and Co-Administrators are assigned the Owner role at the subscription scope<br>Applies to all resource types. |
+| [Contributor](built-in-roles.md#contributor) | <ul><li>Grants full access to manage all resources</li><li>Can't assign roles in Azure RBAC</li><li>Can't manage assignments in Azure Blueprints or share image galleries</li></ul> | Applies to all resource types. |
| [Reader](built-in-roles.md#reader) | <ul><li>View Azure resources</li></ul> | Applies to all resource types. |
-| [User Access Administrator](built-in-roles.md#user-access-administrator) | <ul><li>Manage user access to Azure resources</li></ul> | |
+| [User Access Administrator](built-in-roles.md#user-access-administrator) | <ul><li>Manage user access to Azure resources</li><li>Assign roles in Azure RBAC</li><li>Assign themselves or others the Owner role</li></ul> | |
The rest of the built-in roles allow management of specific Azure resources. For example, the [Virtual Machine Contributor](built-in-roles.md#virtual-machine-contributor) role allows the user to create and manage virtual machines. For a list of all the built-in roles, see [Azure built-in roles](built-in-roles.md).
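If you want to confirm exactly what these built-in roles allow, you can inspect a role definition with the Azure CLI. For example, the Contributor definition includes notActions that block creating role assignments:

```azurecli-interactive
# List the Contributor role definition, including its actions and notActions.
az role definition list --name "Contributor" --output jsonc
```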
role-based-access-control Role Assignments Steps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-steps.md
Previously updated : 05/10/2023 Last updated : 08/09/2023
Job function roles allow management of specific Azure resources. For example, th
Privileged administrator roles are roles that grant privileged administrator access, such as the ability to manage Azure resources or assign roles to other users. The following roles are considered privileged and apply to all resource types.
-| Role | Description |
+| Azure role | Permissions |
| | |
-| [Owner](built-in-roles.md#owner) | Grants full access to manage all resources, including the ability to assign roles in Azure RBAC. |
-| [Contributor](built-in-roles.md#contributor) | Grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries. |
-| [User Access Administrator](built-in-roles.md#user-access-administrator) | Lets you manage user access to Azure resources. |
+| [Owner](built-in-roles.md#owner) | <ul><li>Grants full access to manage all resources</li><li>Assign roles in Azure RBAC</li></ul> |
+| [Contributor](built-in-roles.md#contributor) | <ul><li>Grants full access to manage all resources</li><li>Can't assign roles in Azure RBAC</li><li>Can't manage assignments in Azure Blueprints or share image galleries</li></ul> |
+| [User Access Administrator](built-in-roles.md#user-access-administrator) | <ul><li>Manage user access to Azure resources</li><li>Assign roles in Azure RBAC</li><li>Assign themselves or others the Owner role</li></ul> |
It's a best practice to grant users the least privilege to get their work done. You should avoid assigning a privileged administrator role when a job function role can be assigned instead. If you must assign a privileged administrator role, use a narrow scope, such as resource group or resource, instead of a broader scope, such as management group or subscription.
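For example, a minimal Azure CLI sketch of assigning a privileged administrator role at resource group scope instead of subscription scope might look like the following; the assignee object ID and names are placeholders.

```azurecli-interactive
# Assign User Access Administrator at resource group scope only (placeholder names).
az role assignment create \
    --assignee "<assignee-object-id>" \
    --role "User Access Administrator" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
```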
route-server Expressroute Vpn Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/expressroute-vpn-support.md
Title: Azure Route Server support for ExpressRoute and Azure VPN
+ Title: Support for ExpressRoute and Azure VPN
+ description: Learn about how Azure Route Server exchanges routes between network virtual appliances (NVA), Azure ExpressRoute gateways, and Azure VPN gateways.- -- Previously updated : 06/05/2023 ++ Last updated : 08/14/2023
Azure Route Server supports not only third-party network virtual appliances (NVA) running on Azure but also seamlessly integrates with ExpressRoute and Azure VPN gateways. You don't need to configure or manage the BGP peering between the gateway and Azure Route Server. You can enable route exchange between the gateways and Azure Route Server by enabling [branch-to-branch](quickstart-configure-route-server-portal.md#configure-route-exchange) in Azure portal. If you prefer, you can use [Azure PowerShell](quickstart-configure-route-server-powershell.md#route-exchange) or [Azure CLI](quickstart-configure-route-server-cli.md#configure-route-exchange) to enable the route exchange with the Route Server.
-> [!WARNING]
-> When you create or delete an Azure Route Server in a virtual network that contains a virtual network gateway (ExpressRoute or VPN), expect downtime until the operation is complete. If you have an ExpressRoute circuit connected to the virtual network where you're creating or deleting the Route Server, the downtime doesn't affect the ExpressRoute circuit or its connections to other virtual networks.
## How does it work?
For example, in the following diagram:
:::image type="content" source="./media/expressroute-vpn-support/expressroute-with-route-server.png" alt-text="Diagram showing ExpressRoute gateway and SDWAN NVA exchanging routes through Azure Route Server.":::
-You can also replace the SDWAN appliance with Azure VPN gateway. Since Azure VPN gateway and ExpressRoute are fully managed, you only need to enable the route exchange for the two on-premises networks to talk to each other.
+You can also replace the SDWAN appliance with Azure VPN gateway. Since Azure VPN and ExpressRoute gateways are fully managed, you only need to enable the route exchange for the two on-premises networks to talk to each other.
> [!IMPORTANT]
-> Azure VPN gateway must be configured in [**active-active**](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md) mode and have the ASN set to 65515.
->
+> Azure VPN gateway must be configured in [**active-active**](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md) mode and have the ASN set to 65515. It's not necessary to have BGP enabled on the VPN gateway.
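As a reference for this requirement, an Azure CLI sketch of creating an active-active VPN gateway with ASN 65515 might look like the following. The names, SKU, and the two public IP addresses are placeholders, and a GatewaySubnet is assumed to already exist in the virtual network.

```azurecli-interactive
# Illustrative sketch: two public IPs make the gateway active-active; ASN is 65515.
az network vnet-gateway create \
    --name myVpnGateway \
    --resource-group myResourceGroup \
    --vnet myVirtualNetwork \
    --gateway-type Vpn \
    --vpn-type RouteBased \
    --sku VpnGw2 \
    --public-ip-addresses vpngw-pip-1 vpngw-pip-2 \
    --asn 65515
```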
:::image type="content" source="./media/expressroute-vpn-support/expressroute-and-vpn-with-route-server.png" alt-text="Diagram showing ExpressRoute gateway and VPN gateways exchanging routes through Azure Route Server."::: > [!IMPORTANT] > When the same route is learned over ExpressRoute, Azure VPN or an SDWAN appliance, the ExpressRoute network will be preferred.
->
## Next steps
route-server Quickstart Configure Route Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-cli.md
Title: 'Quickstart: Create and configure Route Server - Azure CLI' description: In this quickstart, you learn how to create and configure an Azure Route Server using Azure CLI.- + Previously updated : 09/01/2021-- Last updated : 08/14/2023+ ms.devlang: azurecli # Quickstart: Create and configure Route Server using Azure CLI
-This article helps you configure Azure Route Server to peer with a Network Virtual Appliance (NVA) in your virtual network using Azure PowerShell. Route Server will learn routes from your NVA and program them on the virtual machines in the virtual network. Azure Route Server will also advertise the virtual network routes to the NVA. For more information, see [Azure Route Server](overview.md).
+This article helps you configure Azure Route Server to peer with a Network Virtual Appliance (NVA) in your virtual network using the Azure CLI. Route Server learns routes from your NVA and programs them on the virtual machines in the virtual network. Azure Route Server also advertises the virtual network routes to the NVA. For more information, see [Azure Route Server](overview.md).
:::image type="content" source="media/quickstart-configure-route-server-portal/environment-diagram.png" alt-text="Diagram of Route Server deployment environment using the Azure CLI." border="false":::
This article helps you configure Azure Route Server to peer with a Network Virtu
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* [Install the latest Azure CLI](/cli/azure/install-azure-cli), or make sure you can use [Azure Cloud Shell](../cloud-shell/quickstart.md) in the portal.
-* Review the [service limits for Azure Route Server](route-server-faq.md#limitations).
-
-## Sign in to your Azure account and select your subscription.
-
-To begin your configuration, sign in to your Azure account. If you use the Cloud Shell "Try It", you're signed in automatically. Use the following examples to help you connect:
-
-```azurecli-interactive
-az login
-```
-
-Check the subscriptions for the account.
-
-```azurecli-interactive
-az account list
-```
-
-Select the subscription for which you want to create an ExpressRoute circuit.
-
-```azurecli-interactive
-az account set \
- --subscription "<subscription ID>"
-```
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- The steps in this article run the Azure CLI commands interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code, and paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal. You can also [install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands. If you run Azure CLI locally, sign in to Azure using the [az login](/cli/azure/reference-index#az-login) command.
+- Review the [service limits for Azure Route Server](route-server-faq.md#limitations).
## Create a resource group and a virtual network
az network vnet create \
### Add a dedicated subnet
-Azure Route Server requires a dedicated subnet named *RouteServerSubnet*. The subnet size has to be at least /27 or short prefix (such as /26 or /25) or you'll receive an error message when deploying the Route Server. Create a subnet configuration named **RouteServerSubnet** with [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create):
+Azure Route Server requires a dedicated subnet named *RouteServerSubnet*. The subnet size has to be at least /27 or a shorter prefix (such as /26 or /25); otherwise, you may receive an error message when deploying the Route Server. Create a subnet configuration named **RouteServerSubnet** with [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create):
1. Run the following command to add the *RouteServerSubnet* to your virtual network.
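A minimal sketch of that command, with placeholder names and an example /27 prefix, might look like the following; adjust the address prefix to fit your virtual network's address space.

```azurecli-interactive
# Illustrative sketch: create the dedicated RouteServerSubnet (placeholder names).
az network vnet subnet create \
    --name RouteServerSubnet \
    --resource-group myRouteServerRG \
    --vnet-name myVirtualNetwork \
    --address-prefixes 10.0.0.0/27
```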
az network routeserver peering create \
--resource-group myRouteServerRG ```
-To set up peering with a different NVA or another instance of the same NVA for redundancy, use the same command as above with different *PeerName*, *PeerIp*, and *PeerAsn*.
+To set up peering with a different NVA or another instance of the same NVA for redundancy, use the previous command with different *PeerName*, *PeerIp*, and *PeerAsn*.
## Complete the configuration on the NVA
If you have a virtual network gateway (ExpressRoute or VPN) in the same virtual
[!INCLUDE [VPN gateway note](../../includes/route-server-note-vpn-gateway.md)]
-1. To enable route exchange between Azure Route Server and the gateway(s), use [az network routerserver update](/cli/azure/network/routeserver#az-network-routeserver-update) with the `--allow-b2b-traffic`` flag set to **true**:
+
+1. To enable route exchange between Azure Route Server and the gateway(s), use [az network routerserver update](/cli/azure/network/routeserver#az-network-routeserver-update) with the `--allow-b2b-traffic` flag set to **true**:
```azurecli-interactive az network routeserver update \
If you have a virtual network gateway (ExpressRoute or VPN) in the same virtual
--allow-b2b-traffic true ```
-2. To disable route exchange between Azure Route Server and the gateway(s), use [az network routerserver update](/cli/azure/network/routeserver#az-network-routeserver-update) with the `--allow-b2b-traffic`` flag set to **false**:
+2. To disable route exchange between Azure Route Server and the gateway(s), use [az network routerserver update](/cli/azure/network/routeserver#az-network-routeserver-update) with the `--allow-b2b-traffic` flag set to **false**:
```azurecli-interactive az network routeserver update \
route-server Quickstart Configure Route Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-portal.md
Title: 'Quickstart: Create and configure Route Server - Azure portal' description: In this quickstart, you learn how to create and configure an Azure Route Server using the Azure portal.- + Previously updated : 07/19/2022- Last updated : 08/11/2022
If you have a virtual network gateway (ExpressRoute or VPN) in the same virtual
[!INCLUDE [VPN gateway note](../../includes/route-server-note-vpn-gateway.md)]
-1. Go to [Route Server](./overview.md) in the Azure portal and select the Route Server you want to configure.
+
+1. Go to the Route Server that you want to configure.
-1. Select **Configuration** under *Settings* in the left navigation panel.
+1. Select **Configuration** under **Settings** in the left navigation panel.
1. Select **Enable** for the **Branch-to-Branch** setting and then select **Save**.
route-server Quickstart Configure Route Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-powershell.md
Title: 'Quickstart: Create and configure Route Server - Azure PowerShell' description: In this quickstart, you learn how to create and configure an Azure Route Server using Azure PowerShell.- Previously updated : 04/06/2023- + Last updated : 08/11/2023
If you have a virtual network gateway (ExpressRoute or VPN) in the same virtual
[!INCLUDE [VPN gateway note](../../includes/route-server-note-vpn-gateway.md)] + 1. To enable route exchange between Azure Route Server and the gateway(s), use [Update-AzRouteServer](/powershell/module/az.network/update-azrouteserver) with the *-AllowBranchToBranchTraffic* flag: ```azurepowershell-interactive
route-server Route Server Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-server-faq.md
Title: Azure Route Server frequently asked questions (FAQ) description: Find answers to frequently asked questions about Azure Route Server.- -- Previously updated : 02/23/2023 -++ Last updated : 08/14/2023 # Azure Route Server frequently asked questions (FAQ)
-## What is Azure Route Server?
+## General
+
+### What is Azure Route Server?
Azure Route Server is a fully managed service that allows you to easily manage routing between your network virtual appliance (NVA) and your virtual network.
Azure Route Server is a fully managed service that allows you to easily manage r
No. Azure Route Server is a service designed with high availability. Your route server has zone-level redundancy if you deploy it in an Azure region that supports [Availability Zones](../availability-zones/az-overview.md).
-### How many route servers can I create in a virtual network?
-
-You can create only one route server in a virtual network. You must deploy the route server in a dedicated subnet called *RouteServerSubnet*.
-
-### Does Azure Route Server support virtual network peering?
-
-Yes, if you peer a virtual network hosting the Azure Route Server to another virtual network and you enable **Use the remote virtual network's gateway or Route Server** on the second virtual network, Azure Route Server learns the address spaces of the peered virtual network and send them to all the peered network virtual appliances (NVAs). It also programs the routes from the NVAs into the route table of the virtual machines in the peered virtual network.
--
-### <a name = "protocol"></a>What routing protocols does Azure Route Server support?
-
-Azure Route Server supports only Border Gateway (BGP) Protocol. Your network virtual appliance (NVA) must support multi-hop external BGP because you need to deploy the Route Server in a dedicated subnet in your virtual network. When you configure the BGP on your NVA, the ASN you choose must be different from the Route Server ASN.
-
-### Does Azure Route Server route data traffic between my NVA and my VMs?
+### Do I need to peer each NVA with both Azure Route Server instances?
-No. Azure Route Server only exchanges BGP routes with your network virtual appliance (NVA). The data traffic goes directly from the NVA to the destination virtual machine (VM) and directly from the VM to the NVA.
+Yes, to ensure that virtual network routes are successfully advertised over the target NVA connections, and to configure High Availability, we recommend peering each NVA instance with both instances of Route Server.
### Does Azure Route Server store customer data? No. Azure Route Server only exchanges BGP routes with your network virtual appliance (NVA) and then propagates them to your virtual network.
+### Does Azure Route Server support virtual network peering?
+
+Yes, if you peer a virtual network hosting the Azure Route Server to another virtual network and you enable **Use the remote virtual network's gateway or Route Server** on the second virtual network, Azure Route Server learns the address spaces of the peered virtual network and sends them to all the peered network virtual appliances (NVAs). It also programs the routes from the NVAs into the route table of the virtual machines in the peered virtual network.
### Why does Azure Route Server require a public IP address? Azure Route Server needs to ensure connectivity to the backend service that manages the Route Server configuration, which is why it needs the public IP address. This public IP address doesn't constitute a security exposure of your virtual network.
Azure Router Server needs to ensure connectivity to the backend service that man
No. We'll add IPv6 support in the future.
-### If Azure Route Server receives the same route from more than one NVA, how does it handle them?
+## Routing
-If the route has the same AS path length, Azure Route Server will program multiple copies of the route, each with a different next hop, to the virtual machines (VMs) in the virtual network. When a VM sends traffic to the destination of this route, the VM host uses Equal-Cost Multi-Path (ECMP) routing. However, if one NVA sends the route with a shorter AS path length than other NVAs, Azure Route Server will only program the route that has the next hop set to this NVA to the VMs in the virtual network.
+### Does Azure Route Server route data traffic between my NVA and my VMs?
-### Does Azure Route Server preserve the BGP AS Path of the route it receives?
+No. Azure Route Server only exchanges BGP routes with your network virtual appliance (NVA). The data traffic goes directly from the NVA to the destination virtual machine (VM) and directly from the VM to the NVA.
-Yes, Azure Route Server propagates the route with the BGP AS Path intact.
+### <a name = "protocol"></a>What routing protocols does Azure Route Server support?
-### Do I need to peer each NVA with both Azure Route Server instances?
+Azure Route Server supports only the Border Gateway Protocol (BGP). Your network virtual appliance (NVA) must support multi-hop external BGP because you need to deploy the Route Server in a dedicated subnet in your virtual network. When you configure BGP on your NVA, the ASN you choose must be different from the Route Server ASN.
-Yes, to ensure that virtual network routes are successfully advertised over the target NVA connections, and to configure High Availability, we recommend peering each NVA instance with both instances of Route Server.
+### Does Azure Route Server preserve the BGP AS Path of the route it receives?
+
+Yes, Azure Route Server propagates the route with the BGP AS Path intact.
### Does Azure Route Server preserve the BGP communities of the route it receives?
Yes, Azure Route Server propagates the route with the BGP communities as is.
Azure Route Server Keepalive timer is 60 seconds and the Hold timer is 180 seconds.
+### Can Azure Route Server filter out routes from NVAs?
+
+Azure Route Server supports the ***NO_ADVERTISE*** BGP community. If a network virtual appliance (NVA) advertises routes with this community string to the route server, the route server doesn't advertise them to other peers, including the ExpressRoute gateway. This feature can help reduce the number of routes sent from Azure Route Server to ExpressRoute.
+ ### What Autonomous System Numbers (ASNs) can I use? You can use your own public ASNs or private ASNs in your network virtual appliance (NVA). You can't use ASNs reserved by Azure or IANA.
You can use your own public ASNs or private ASNs in your network virtual applian
No, Azure Route Server supports only 16-bit (2 bytes) ASNs.
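As an illustration, a minimal Azure CLI sketch of peering an NVA that uses a 16-bit private ASN with the Route Server might look like the following; the peer IP, ASN, and names are placeholders.

```azurecli-interactive
# Illustrative sketch: peer an NVA using a 16-bit private ASN (placeholder values).
az network routeserver peering create \
    --routeserver myRouteServer \
    --resource-group myRouteServerRG \
    --name myNVA \
    --peer-asn 65001 \
    --peer-ip 10.0.1.4
```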
-### Can I associate a UDR to the *RouteServerSubnet*?
+### If Azure Route Server receives the same route from more than one NVA, how does it handle them?
-No, Azure Route Server doesn't support configuring a user defined route (UDR) on the *RouteServerSubnet*. Azure Route Server doesn't route any data traffic between network virtual appliances (NVAs) and virtual machines (VMs).
+If the route has the same AS path length, Azure Route Server will program multiple copies of the route, each with a different next hop, to the virtual machines (VMs) in the virtual network. When a VM sends traffic to the destination of this route, the VM host uses Equal-Cost Multi-Path (ECMP) routing. However, if one NVA sends the route with a shorter AS path length than other NVAs, Azure Route Server will only program the route that has the next hop set to this NVA to the VMs in the virtual network.
-### Can I associate a network security group (NSG) to the RouteServerSubnet?
+### Does Azure Route Server exchange routes by default between NVAs and the virtual network gateways (VPN or ExpressRoute)?
-No, Azure Route Server doesn't support NSG association to the RouteServerSubnet.
+No. By default, Azure Route Server doesn't propagate the routes it receives from an NVA and a virtual network gateway to each other. The Route Server exchanges these routes only after you enable **branch-to-branch** on it.
### When the same route is learned over ExpressRoute, VPN or SDWAN, which network is preferred? ExpressRoute is preferred over VPN or SDWAN.
-### Can I peer two route servers in two peered virtual networks and enable the NVAs connected to the route servers to talk to each other?
+### What are the requirements for an Azure VPN gateway to work with Azure Route Server?
+
+Azure VPN gateway must be configured in active-active mode and have the ASN set to 65515.
+
+### Do I need to enable BGP on the VPN gateway?
+
+No. It's not a requirement to have BGP enabled on the VPN gateway to communicate with the Route Server.
+
+### Can I peer two Azure Route Servers in two peered virtual networks and enable the NVAs connected to the Route Servers to talk to each other?
***Topology: NVA1 -> RouteServer1 -> (via VNet Peering) -> RouteServer2 -> NVA2***
-No, Azure Route Server doesn't forward data traffic. To enable transit connectivity through the NVA, set up a direct connection (for example, an IPsec tunnel) between the NVAs and use the route servers for dynamic route propagation.
+No, Azure Route Server doesn't forward data traffic. To enable transit connectivity through the NVA, set up a direct connection (for example, an IPsec tunnel) between the NVAs and use the Route Servers for dynamic route propagation.
### Can I use Azure Route Server to direct traffic between subnets in the same virtual network to flow inter-subnet traffic through the NVA?
No. Azure Route Server uses BGP to advertise routes. System routes for traffic r
You can still use Route Server to direct traffic between subnets in different virtual networks through the NVA. A possible design is one subnet per "spoke" virtual network, with all "spoke" virtual networks peered to a "hub" virtual network. This design is very limiting and needs to take into account scaling considerations and Azure's maximum limits on virtual networks versus subnets.
-### Can Azure Route Server filter out routes from NVAs?
-
-Azure Route Server supports ***NO_ADVERTISE*** BGP community. If a network virtual appliance (NVA) advertises routes with this community string to the route server, the route server doesn't advertise it to other peers including the ExpressRoute gateway. This feature can help reduce the number of routes sent from Azure Route Server to ExpressRoute.
- ### Can Azure Route Server provide transit between ExpressRoute and a Point-to-Site (P2S) VPN gateway connection when enabling the *branch-to-branch*? No, Azure Route Server provides transit only between ExpressRoute and Site-to-Site (S2S) VPN gateway connections (when enabling the *branch-to-branch* setting).
+## Limitations
+
+### How many Azure Route Servers can I create in a virtual network?
+
+You can create only one Route Server in a virtual network. You must deploy the route server in a dedicated subnet called *RouteServerSubnet*.
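For reference, a hedged Azure CLI sketch of creating that single Route Server in the dedicated subnet might look like the following; it assumes the *RouteServerSubnet* and a Standard public IP already exist, and all names are placeholders.

```azurecli-interactive
# Illustrative sketch: look up the dedicated subnet and create the Route Server in it.
subnet_id=$(az network vnet subnet show \
    --name RouteServerSubnet \
    --resource-group myRouteServerRG \
    --vnet-name myVirtualNetwork \
    --query id --output tsv)

az network routeserver create \
    --name myRouteServer \
    --resource-group myRouteServerRG \
    --hosted-subnet "$subnet_id" \
    --public-ip-address myRouteServerIP
```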
+
+### Can I associate a UDR to the *RouteServerSubnet*?
+
+No, Azure Route Server doesn't support configuring a user defined route (UDR) on the ***RouteServerSubnet*** subnet. Azure Route Server doesn't route any data traffic between network virtual appliances (NVAs) and virtual machines (VMs).
+
+### Can I associate a network security group (NSG) to the *RouteServerSubnet*?
+
+No, Azure Route Server doesn't support network security group association to the ***RouteServerSubnet*** subnet.
+ ### <a name = "limitations"></a>What are Azure Route Server limits? Azure Route Server has the following limits (per deployment).
sap Deploy Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-control-plane.md
It's currently not possible to perform this action from Azure DevOps.
## Deploy the control plane
-The sample Deployer configuration file `MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE` folder.
+The sample Deployer configuration file `MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE` folder.
-The sample SAP Library configuration file `MGMT-WEEU-SAP_LIBRARY.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY` folder.
+The sample SAP Library configuration file `MGMT-WEEU-SAP_LIBRARY.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY` folder.
-Running the following command creates the Deployer, the SAP Library and adds the Service Principal details to the deployment key vault. If you followed the web app setup in the step above, this command will also create the infrastructure to host the application.
+Running the following command creates the Deployer and the SAP Library, and adds the Service Principal details to the deployment key vault. If you followed the web app setup in the previous step, this command also creates the infrastructure to host the application.
# [Linux](#tab/linux)
Run the following command to deploy the control plane:
```bash az logout
-az login
-cd ~/Azure_SAP_Automated_Deployment/samples/WORKSPACES
-
- export subscriptionId="<subscriptionId>"
- export spn_id="<appId>"
- export spn_secret="<password>"
- export tenant_id="<tenantId>"
- export env_code="MGMT"
- export region_code="WEEU"
-
- export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
- export ARM_SUBSCRIPTION_ID="${subscriptionId}"
- export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES"
- export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
--
- ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
- --deployer_parameter_file DEPLOYER/${env_code}-${region_code}-DEP00-INFRASTRUCTURE/${env_code}-${region_code}-DEP00-INFRASTRUCTURE.tfvars \
- --library_parameter_file "LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars" \
- --subscription "${subscriptionId}" \
- --spn_id "${spn_id}" \
- --spn_secret "${spn_secret}" \
- --tenant_id "${tenant_id}" \
- --auto-approve
+cd ~/Azure_SAP_Automated_Deployment
+cp -Rp samples/Terraform/WORKSPACES config
+cd config/WORKSPACES
+
+export ARM_SUBSCRIPTION_ID="<subscriptionId>"
+export ARM_CLIENT_ID="<appId>"
+export ARM_CLIENT_SECRET="<password>"
+export ARM_TENANT_ID="<tenantId>"
+export env_code="MGMT"
+export region_code="WEEU"
+export vnet_code="WEEU"
+
+az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"
++
+export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+="${subscriptionId}"
+export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/config/WORKSPACES"
+export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
++
+sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
+ --deployer_parameter_file "${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars" \
+ --library_parameter_file "${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars" \
+ --subscription "${ARM_SUBSCRIPTION_ID}" \
+ --spn_id "${ARM_CLIENT_ID}" \
+ --spn_secret "${ARM_CLIENT_SECRET}" \
+ --tenant_id "${ARM_TENANT_ID}" \
+ --auto-approve
``` + # [Windows](#tab/windows)
-You can't perform this action from Windows
+You can't perform a control plane deployment from Windows.
# [Azure DevOps](#tab/devops) Open (https://dev.azure.com) and go to your Azure DevOps project.
sap Deploy System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-system.md
You can copy the sample configuration files to start testing the deployment auto
```bash cd ~/Azure_SAP_Automated_Deployment
-cp -Rp sap-automation/deploy/samples/WORKSPACES WORKSPACES
+cp -Rp sap-automation/deploy/samples/WORKSPACES config
``` ```bash+
+export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/config/WORKSPACES"
+export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+ cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X01
-${DEPLOYMENT_REPO_PATH}/deploy/scripts/installer.sh \
+${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/installer.sh \
--parameterfile DEV-WEEU-SAP01-X01.tfvars \
- --type sap_system
+ --type sap_system --auto-approve
``` # [Windows](#tab/windows)
sap Deploy Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-workload-zone.md
An [SAP application](deployment-framework.md#sap-concepts) typically has multiple development tiers. For example, you might have development, quality assurance, and production tiers. The [SAP on Azure Deployment Automation Framework](deployment-framework.md) refers to these tiers as [workload zones](deployment-framework.md#deployment-components).
-You can use workload zones in multiple Azure regions. Each workload zone then has its own Azure Virtual Network (Azure VNet)
+You can use workload zones in multiple Azure regions. Each workload zone then has its own Azure virtual network.
The following services are provided by the SAP workload zone:
The following services are provided by the SAP workload zone:
- Azure Key Vault, for system credentials. - Storage account for boot diagnostics - Storage account for cloud witnesses
+- Azure NetApp account and capacity pools (optional)
+- Azure Files NFS Shares (optional)
:::image type="content" source="./media/deployment-framework/workload-zone.png" alt-text="Diagram SAP Workload Zone."::: The workload zones are typically deployed in spokes in a hub and spoke architecture. They may be in their own subscriptions.
-Supports the Private DNS from the Control Plane.
+The workload zone supports Private DNS from the Control Plane or from a configurable source.
## Core configuration
az role assignment create --assignee <appId> \
## Deploying the SAP Workload zone
-The sample Workload Zone configuration file `DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE` folder.
+The sample Workload Zone configuration file `DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE` folder.
-Running the command below will deploy the SAP Workload Zone.
+Running the following command deploys the SAP Workload Zone.
# [Linux](#tab/linux)
You can copy the sample configuration files to start testing the deployment auto
```bash cd ~/Azure_SAP_Automated_Deployment
-cp -R sap-automation/samples/WORKSPACES WORKSPACES
+cp -R sap-automation/samples/WORKSPACES config
``` ```bash
-export subscriptionId="<subscriptionId>"
-export spn_id="<appId>"
-export spn_secret="<password>"
-export tenant_id="<tenantId>"
-export env_code="MGMT"
-export region_code="<region_code>"
+export ARM_SUBSCRIPTION_ID="<subscriptionId>"
+export ARM_CLIENT_ID="<appId>"
+export ARM_CLIENT_SECRET="<password>"
+export ARM_TENANT_ID="<tenantId>"
+export env_code="DEV"
+export region_code="<region_code>"
+export vnet_code="SAP02"
+export deployer_environment="MGMT"
-export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
-export ARM_SUBSCRIPTION_ID="${subscriptionId}"
-
-${DEPLOYMENT_REPO_PATH}/deploy/scripts/prepare_region.sh \
- --deployer_parameter_file DEPLOYER/${env_code}-${region_code}-DEP00-INFRASTRUCTURE/${env_code}-${region_code}-DEP00-INFRASTRUCTURE.tfvars \
- --library_parameter_file LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars \
- --subscription "${subscriptionId}" \
- --spn_id "${spn_id}" \
- --spn_secret "${spn_secret}" \
- --tenant_id "${tenant_id}" \
- --auto-approve
-```
-# [Windows](#tab/windows)
-
-You can copy the sample configuration files to start testing the deployment automation framework.
-
-```powershell
-
-cd C:\Azure_SAP_Automated_Deployment
+az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"
-xcopy sap-automation\samples\WORKSPACES WORKSPACES
+export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/config/WORKSPACES"
+export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+
+cd "${CONFIG_REPO_PATH}/LANDSCAPE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE"
+parameterFile="${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
+
+$SAP_AUTOMATION_REPO_PATH/deploy/scripts/install_workloadzone.sh \
+ --parameterfile "${parameterFile}" \
+ --deployer_environment "${deployer_environment}" \
+ --subscription "${ARM_SUBSCRIPTION_ID}" \
+ --spn_id "${ARM_CLIENT_ID}" \
+ --spn_secret "${ARM_CLIENT_SECRET}" \
+ --tenant_id "${ARM_TENANT_ID}" \
+ --auto-approve
+
```
+# [Windows](#tab/windows)
-
-```powershell
-$subscription="<subscriptionID>"
-$spn_id="<appID>"
-$spn_secret="<password>"
-$tenant_id="<tenant>"
-$keyvault=<keyvaultName>
-$storageaccount=<storageaccountName>
-$statefile_subscription=<statefile_subscription>
-$region_code="WEEU"
-
-cd C:\Azure_SAP_Automated_Deployment\WORKSPACES\LANDSCAPE\DEV-$region_code-SAP01-INFRASTRUCTURE
-
-New-SAPWorkloadZone -Parameterfile DEV-$region_code-SAP01-INFRASTRUCTURE.tfvars
--Subscription $subscription -SPN_id $spn_id -SPN_password $spn_secret -Tenant_id $tenant_id--State_subscription $statefile_subscription -Vault $keyvault -$StorageAccountName $storageaccount
-```
-
+It isn't possible to perform the deployment from Windows.
> [!NOTE]
Open (https://dev.azure.com) and go to your Azure DevOps Services project.
> [!NOTE] > Ensure that the 'Deployment_Configuration_Path' variable in the 'SDAF-General' variable group is set to the folder that contains your configuration files. For this example, you can use 'samples/WORKSPACES'.
-The deployment will use the configuration defined in the Terraform variable file located in the 'samples/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE' folder.
+The deployment uses the configuration defined in the Terraform variable file located in the 'samples/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE' folder.
Run the pipeline by selecting the _Deploy workload zone_ pipeline from the Pipelines section. Enter the workload zone configuration name and the deployer environment name. Use 'DEV-WEEU-SAP01-INFRASTRUCTURE' as the Workload zone configuration name and 'MGMT' as the Deployer Environment Name.
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/get-started.md
The ~/Azure_SAP_Automated_Deployment/samples folder contains a set of sample con
```bash cd ~/Azure_SAP_Automated_Deployment
-cp -Rp samples/Terraform/WORKSPACES config/WORKSPACES
+cp -Rp samples/Terraform/WORKSPACES config
```
sap Run Ansible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/run-ansible.md
This playbook downloads the installation media from the control plane to the in
# [Linux](#tab/linux) The following tasks are executed on the Central services instance virtual machine:-- Download the software
+- Download the software from the storage account and make it available for the other virtual machines
# [Windows](#tab/windows) The following tasks are executed on the Central services instance virtual machine:-- Download the software
+- Download the software from the storage account and make it available for the other virtual machines
sap Hana Vm Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-operations.md
Deploy the VMs in Azure by using:
- Azure PowerShell cmdlets. - The Azure CLI.
-You also can deploy a complete installed SAP HANA platform on the Azure VM services through the [SAP Cloud platform](https://cal.sap.com/). The installation process is described in [Deploy SAP S/4HANA or BW/4HANA on Azure](./cal-s4h.md) or with the automation released on [GitHub](https://github.com/AzureCAT-GSI/SAP-HANA-ARM).
+You also can deploy a complete installed SAP HANA platform on the Azure VM services through the [SAP Cloud platform](https://cal.sap.com/). The installation process is described in [Deploy SAP S/4HANA or BW/4HANA on Azure](./cal-s4h.md).
>[!IMPORTANT] > In order to use M208xx_v2 VMs, you need to be careful selecting your Linux image. For more information, see [Memory optimized virtual machine sizes](../../virtual-machines/mv2-series.md).
sap High Availability Guide Windows Azure Files Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-windows-azure-files-smb.md
The following screenshot shows the technical information to validate a successfu
* [Install SAP NetWeaver high availability on a Windows failover cluster and file share for SAP ASCS/SCS instances on Azure](./sap-high-availability-installation-wsfc-file-share.md) * [Azure Virtual Machines high-availability architecture and scenarios for SAP NetWeaver](./sap-high-availability-architecture-scenarios.md) * [Add a probe port in an ASCS cluster configuration](sap-high-availability-installation-wsfc-file-share.md)
-* [Installation of an (A)SCS Instance on a Failover Cluster with no Shared Disks](https://www.sap.com/documents/2017/07/f453332f-c97c-0010-82c7-eda71af511fa.html) (SAP documentation)
[16083]:https://launchpad.support.sap.com/#/notes/16083 [2273806]:https://launchpad.support.sap.com/#/notes/2273806
sap High Availability Guide Windows Netapp Files Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-windows-netapp-files-smb.md
Read the following SAP Notes and papers first:
* [Install SAP NetWeaver high availability on a Windows failover cluster and file share for SAP ASCS/SCS instances on Azure](./sap-high-availability-installation-wsfc-file-share.md) * [Azure Virtual Machines high-availability architecture and scenarios for SAP NetWeaver](./sap-high-availability-architecture-scenarios.md) * [Add probe port in ASCS cluster configuration](sap-high-availability-installation-wsfc-file-share.md)
-* [Installation of an (A)SCS Instance on a Failover Cluster](https://www.sap.com/documents/2017/07/f453332f-c97c-0010-82c7-eda71af511fa.html)
* [Create an SMB volume for Azure NetApp Files](../../azure-netapp-files/create-active-directory-connections.md#requirements-for-active-directory-connections) * [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure]
sap Sap Ascs Ha Multi Sid Wsfc File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-ascs-ha-multi-sid-wsfc-file-share.md
Install DBMS and SAP application Servers as described earlier.
[2287140]:https://launchpad.support.sap.com/#/notes/2287140 [2492395]:https://launchpad.support.sap.com/#/notes/2492395
-[sap-official-ha-file-share-document]:https://www.sap.com/documents/2017/07/f453332f-c97c-0010-82c7-eda71af511fa.html
[s2d-in-win-2016]:/windows-server/storage/storage-spaces/storage-spaces-direct-overview [sofs-overview]:https://technet.microsoft.com/library/hh831349(v=ws.11).aspx [new-in-win-2016-storage]:/windows-server/storage/whats-new-in-storage [sap-installation-guides]:http://service.sap.com/instguides
-[sap-installation-guides-file-share]:https://www.sap.com/documents/2017/07/f453332f-c97c-0010-82c7-eda71af511fa.html
[networking-limits-azure-resource-manager]:../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits [azure-resource-manager/management/azure-subscription-service-limits]:../../azure-resource-manager/management/azure-subscription-service-limits.md [azure-resource-manager/management/azure-subscription-service-limits-subscription]:../../azure-resource-manager/management/azure-subscription-service-limits.md
sap Sap High Availability Guide Wsfc File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-guide-wsfc-file-share.md
Before you begin the tasks that are described in this article, review the follow
* SAP Note [2802770](https://launchpad.support.sap.com/#/notes/2802770) has troubleshooting information for the slow-running SAP transaction AL11 on Windows 2012 and 2016. * SAP Note [1911507](https://launchpad.support.sap.com/#/notes/1911507) has information about the transparent failover feature for a file share on Windows Server with the SMB 3.0 protocol. * SAP Note [662452](https://launchpad.support.sap.com/#/notes/662452) has a recommendation (deactivating 8.3 name generation) to address poor file system performance/errors during data access.
-* [Install SAP NetWeaver high availability on a Windows failover cluster and file share for SAP ASCS/SCS instances on Azure](./sap-high-availability-installation-wsfc-file-share.md)
-* [Installation of an (A)SCS Instance on a Failover Cluster](https://www.sap.com/documents/2017/07/f453332f-c97c-0010-82c7-eda71af511fa.html)
+* [Install SAP NetWeaver high availability on a Windows failover cluster and file share for SAP ASCS/SCS instances on Azure](./sap-high-availability-installation-wsfc-file-share.md)
> [!NOTE] > Clustering SAP ASCS/SCS instances by using a file share is supported for SAP systems with SAP Kernel 7.22 (and later). For details see SAP note [2698948](https://launchpad.support.sap.com/#/notes/2698948)
sap Sap High Availability Installation Wsfc File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-installation-wsfc-file-share.md
[sap-high-availability-infrastructure-wsfc-shared-disk-install-sios-both-nodes]:sap-high-availability-infrastructure-wsfc-shared-disk.md#dd41d5a2-8083-415b-9878-839652812102 [sap-high-availability-infrastructure-wsfc-shared-disk-setup-sios]:sap-high-availability-infrastructure-wsfc-shared-disk.md#d9c1fc8e-8710-4dff-bec2-1f535db7b006
-[sap-official-ha-file-share-document]:https://www.sap.com/documents/2017/07/f453332f-c97c-0010-82c7-eda71af511fa.html
---- [sap-ha-guide-figure-1000]:./media/virtual-machines-shared-sap-high-availability-guide/1000-wsfc-for-sap-ascs-on-azure.png [sap-ha-guide-figure-1001]:./media/virtual-machines-shared-sap-high-availability-guide/1001-wsfc-on-azure-ilb.png [sap-ha-guide-figure-1002]:./media/virtual-machines-shared-sap-high-availability-guide/1002-wsfc-sios-on-azure-ilb.png
search Cognitive Search Concept Image Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-image-scenarios.md
The [Azure Search Python samples](https://github.com/Azure-Samples/azure-search-
### Passing images to custom skills
-For scenarios where you require a custom skill to work on images, you can pass images to the custom skill, and have it return text or images. The [Python sample](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/Image-Processing) image-processing demonstrates the workflow. The following skillset is from the sample.
+For scenarios where you require a custom skill to work on images, you can pass images to the custom skill, and have it return text or images. The following skillset is from a sample.
The following skillset takes the normalized image (obtained during document cracking), and outputs slices of the image.
def base64EncodeImage(image):
+ [Text merge skill](cognitive-search-skill-textmerger.md) + [How to define a skillset](cognitive-search-defining-skillset.md) + [How to map enriched fields](cognitive-search-output-field-mapping.md)
-+ [How to pass images to custom skills](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/Image-Processing)
search Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-dotnet.md
Code samples from the Cognitive Search team demonstrate features and workflows.
| [DotNetHowToSynonyms](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToSynonyms) | [Example: Add synonyms in C#](search-synonyms-tutorial-sdk.md) | Synonym lists are used for query expansion, providing matchable terms that are external to an index. | | [DotNetToIndexers](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToIndexers) | [Tutorial: Index Azure SQL data](search-indexer-tutorial.md) | Shows how to configure an Azure SQL indexer that has a schedule, field mappings, and parameters. | | [DotNetHowToEncryptionUsingCMK](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToEncryptionUsingCMK) | [How to configure customer-managed keys for data encryption](search-security-manage-encryption-keys.md) | Shows how to create objects that are encrypted with a Customer Key. |
-| [multiple-data-sources](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/master/multiple-data-sources) | [Tutorial: Index from multiple data sources](tutorial-multiple-data-sources.md). | Merges content from two data sources into one search index.
-| [Optimize-data-indexing](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/master/optimize-data-indexing) | [Tutorial: Optimize indexing with the push API](tutorial-optimize-indexing-push-api.md).| Demonstrates optimization techniques for pushing data into a search index. |
+| [multiple-data-sources](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/multiple-data-sources) | [Tutorial: Index from multiple data sources](tutorial-multiple-data-sources.md). | Merges content from two data sources into one search index.
+| [Optimize-data-indexing](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/optimize-data-indexing) | [Tutorial: Optimize indexing with the push API](tutorial-optimize-indexing-push-api.md).| Demonstrates optimization techniques for pushing data into a search index. |
| [tutorial-ai-enrichment](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/tutorial-ai-enrichment) | [Tutorial: AI-generated searchable content from Azure blobs](cognitive-search-tutorial-blob-dotnet.md) | Shows how to configure an indexer and skillset. | | [create-mvc-app](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/create-mvc-app) | [Tutorial: Add search to an ASP.NET Core (MVC) app](tutorial-csharp-create-mvc-app.md) | While most samples are console applications, this MVC sample uses a web page to front the sample Hotels index, demonstrating basic search, pagination, and other server-side behaviors.|
search Search Analyzers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-analyzers.md
private static void CreateIndex(string indexName, SearchIndexClient adminClient)
} ```
-For more examples, see [CustomAnalyzerTests.cs](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/search/Microsoft.Azure.Search/tests/Tests/CustomAnalyzerTests.cs).
- ## Next steps A detailed description of query execution can be found in [Full text search in Azure Cognitive Search](search-lucene-query-architecture.md). The article uses examples to explain behaviors that might seem counter-intuitive on the surface.
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
Previously updated : 07/17/2023 Last updated : 08/09/2023
Maximum limits on storage, workloads, and quantities of indexes and other object
+ **Basic** provides dedicated computing resources for production workloads at a smaller scale, but shares some networking infrastructure with other tenants.
-+ **Standard** runs on dedicated machines with more storage and processing capacity at every level. Standard comes in four levels: S1, S2, S3, and S3 HD. S3 High Density (S3 HD) is engineered for [multi-tenancy](search-modeling-multitenant-saas-applications.md) and large quantities of small indexes (three thousand indexes per service). S3 HD doesn't provide the [indexer feature](search-indexer-overview.md) and data ingestion must leverage APIs that push data from source to index.
++ **Standard** runs on dedicated machines with more storage and processing capacity at every level. Standard comes in four levels: S1, S2, S3, and S3 HD. S3 High Density (S3 HD) is engineered for [multi-tenancy](search-modeling-multitenant-saas-applications.md) and large quantities of small indexes (three thousand indexes per service). S3 HD doesn't provide the [indexer feature](search-indexer-overview.md) and data ingestion must use APIs that push data from source to index. + **Storage Optimized** runs on dedicated machines with more total storage, storage bandwidth, and memory than **Standard**. This tier targets large, slow-changing indexes. Storage Optimized comes in two levels: L1 and L2.
Maximum limits on storage, workloads, and quantities of indexes and other object
<sup>3</sup> An upper limit exists for elements because having a large number of them significantly increases the storage required for your index. An element of a complex collection is defined as a member of that collection. For example, assume a [Hotel document with a Rooms complex collection](search-howto-complex-data-types.md#indexing-complex-types), each room in the Rooms collection is considered an element. During indexing, the indexing engine can safely process a maximum of 3000 elements across the document as a whole. [This limit](search-api-migration.md#upgrade-to-2019-05-06) was introduced in `api-version=2019-05-06` and applies to complex collections only, and not to string collections or to complex fields.
-You might find some variation in maximum limits if your service happens to be provisioned on a more powerful cluster. The limits here represent the common denominator. Indexes built to the above specifications will be portable across equivalent service tiers in any region.
+You might find some variation in maximum limits if your service happens to be provisioned on a more powerful cluster. The limits here represent the common denominator. Indexes built to the above specifications are portable across equivalent service tiers in any region.
<a name="document-limits"></a>
When estimating document size, remember to consider only those fields that can b
## Vector index size limits
-When you index documents with vector fields, we construct internal vector indexes and use the algorithm parameters you provide. The size of these vector indexes is restricted by the memory reserved for vector search for your service's tier (or SKU).
+When you index documents with vector fields, we construct internal vector indexes using the algorithm parameters you provide. The size of these vector indexes is restricted by the memory reserved for vector search for your service's tier (or SKU).
The service enforces a vector index size quota **for every partition** in your search service. Each extra partition increases the available vector index size quota. This quota is a hard limit to ensure your service remains healthy, which means that further indexing attempts once the limit is exceeded results in failure. You may resume indexing once you free up available quota by either deleting some vector documents or by scaling up in partitions.
-The table describes the vector index size quota per partition across the service tiers (or SKU). Use the [Get Service Statistics API (GET /servicestats)](/rest/api/searchservice/get-service-statistics) to retrieve your vector index size quota.
+The table describes the vector index size quota per partition across the service tiers (or SKU). For context, it includes the [storage limits](#storage-limits) for each tier. Use the [Get Service Statistics API (GET /servicestats)](/rest/api/searchservice/get-service-statistics) to retrieve your vector index size quota.
See our [documentation on vector index size](./vector-search-index-size.md) for more details.
-### Services created prior to July 1st, 2023
+### Services created prior to July 1, 2023
| Tier | Storage quota (GB) | Vector index size quota per partition (GB) | Approx. floats per partition (assuming 15% overhead) | | -- | | | - |
See our [documentation on vector index size](./vector-search-index-size.md) for
| L1 | 1,000 | 12 | 2,800 million | | L2 | 2,000 | 36 | 8,400 million |
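As an illustrative back-of-the-envelope check (not an official sizing formula), the "approx. floats per partition" column in the table above can be reproduced from the per-partition quota by assuming 4-byte single-precision components and the stated 15% overhead:

```python
# Illustrative arithmetic only: reproduce the "approx. floats per partition"
# column from the per-partition vector quota (GB), assuming float32 components
# and ~15% index overhead as stated in the table.
OVERHEAD = 1.15
BYTES_PER_FLOAT32 = 4


def approx_floats(quota_gb: float) -> float:
    quota_bytes = quota_gb * 1024 ** 3
    return quota_bytes / OVERHEAD / BYTES_PER_FLOAT32


for tier, quota_gb in [("L1", 12), ("L2", 36)]:
    # For 1536-dimension embeddings, divide by 1536 to estimate document counts.
    print(f"{tier}: ~{approx_floats(quota_gb) / 1e6:,.0f} million floats per partition")
```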
-### Services created after July 1st, 2023 in supported regions
+### Services created after July 1, 2023 in supported regions
-Azure Cognitive Search is rolling out increased vector index size limits worldwide for **new search services**, but the team is building out infrastructure capacity in certain regions. Unfortunately, existing services cannot be migrated to the new limits.
+Azure Cognitive Search is rolling out increased vector index size limits worldwide for **new search services**, but the team is building out infrastructure capacity in certain regions. Unfortunately, existing services can't be migrated to the new limits.
The following regions **do not** support increased limits:
Maximum running times exist to provide balance and stability to the service as a
<sup>5</sup> AI enrichment and image analysis are computationally intensive and consume disproportionate amounts of available processing power. Running time for these workloads has been shortened to give other jobs in the queue more opportunity to run.
-<sup>6</sup> Indexer execution and indexer-skillset combined execution is subject to a 2-hour maximum duration. Currently, some indexers have a longer 24-hour maximum execution window, but that behavior isn't the norm. The longer window only applies if a service or its indexers can't be internally migrated to the newer runtime behavior. If more than 2 hours are needed to complete an indexer or indexer-skillset process, [schedule the indexer](search-howto-schedule-indexers.md) to run at 2-hour intervals.
+<sup>6</sup> Indexer execution and combined indexer-skillset execution is subject to a 2-hour maximum duration. Currently, some indexers have a longer 24-hour maximum execution window, but that behavior isn't the norm. The longer window only applies if a service or its indexers can't be internally migrated to the newer runtime behavior. If more than 2 hours are needed to complete an indexer or indexer-skillset process, [schedule the indexer](search-howto-schedule-indexers.md) to run at 2-hour intervals.
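As an illustrative sketch of that scheduling workaround (service, indexer, and related names below are placeholders, not values from this article), an indexer schedule can be set to a two-hour interval through the REST API:

```python
# Hedged sketch: set a 2-hour schedule on an existing indexer so long-running
# work resumes every interval. Names, key, and API version are placeholders;
# note that PUT replaces the full indexer definition.
import requests

SERVICE = "my-search-service"           # placeholder
INDEXER = "my-indexer"                  # placeholder
API_VERSION = "2020-06-30"              # any GA version that supports schedules
ADMIN_KEY = "<admin-api-key>"

url = (f"https://{SERVICE}.search.windows.net/indexers/{INDEXER}"
       f"?api-version={API_VERSION}")
body = {
    "name": INDEXER,
    "dataSourceName": "my-datasource",   # placeholder
    "targetIndexName": "my-index",       # placeholder
    "schedule": {"interval": "PT2H"}     # ISO 8601 duration: run every 2 hours
}
response = requests.put(url, json=body,
                        headers={"api-key": ADMIN_KEY, "Content-Type": "application/json"})
response.raise_for_status()
```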
> [!NOTE] > As stated in the [Index limits](#index-limits), indexers will also enforce the upper limit of 3000 elements across all complex collections per document starting with the latest GA API version that supports complex types (`2019-05-06`) onwards. This means that if you've created your indexer with a prior API version, you will not be subject to this limit. To preserve maximum compatibility, an indexer that was created with a prior API version and then updated with an API version `2019-05-06` or later, will still be **excluded** from the limits. Customers should be aware of the adverse impact of having very large complex collections (as stated previously) and we highly recommend creating any new indexers with the latest GA API version.
search Search Synapseml Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-synapseml-cognitive-services.md
Although Azure Cognitive Search has native [AI enrichment](cognitive-search-conc
You'll need the `synapseml` library and several Azure resources. If possible, use the same subscription and region for your Azure resources and put everything into one resource group for simple cleanup later. The following links are for portal installs. The sample data is imported from a public site.
-+ [SynapseML package](https://microsoft.github.io/SynapseML/docs/getting_started/installation/#python) <sup>1</sup>
++ [SynapseML package](https://microsoft.github.io/SynapseML/docs/Get%20Started/Install%20SynapseML/#python) <sup>1</sup> + [Azure Cognitive Search](search-create-service-portal.md) (any tier) <sup>2</sup> + [Azure AI services](../ai-services/multi-service-resource.md?pivots=azportal) (any tier) <sup>3</sup> + [Azure Databricks](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal?tabs=azure-portal) (any tier) <sup>4</sup>
display(df2)
Paste the following code into the third cell. No modifications are required, so run the code when you're ready.
-This code loads the [AnalyzeInvoices transformer](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#analyzeinvoices) and passes a reference to the data frame containing the invoices. It calls the pre-built [invoice model](../ai-services/document-intelligence/concept-invoice.md) of Azure AI Document Intelligence to extract information from the invoices.
+This code loads the [AnalyzeInvoices transformer](https://mmlspark.blob.core.windows.net/docs/0.11.2/pyspark/synapse.ml.cognitive.form.html#module-synapse.ml.cognitive.form.AnalyzeInvoices) and passes a reference to the data frame containing the invoices. It calls the pre-built [invoice model](../ai-services/document-intelligence/concept-invoice.md) of Azure AI Document Intelligence to extract information from the invoices.
```python from synapse.ml.cognitive import AnalyzeInvoices
Notice how this transformation recasts the nested fields into a table, which ena
Paste the following code into the fifth cell. No modifications are required, so run the code when you're ready.
-This code loads [Translate](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#translate), a transformer that calls the Azure AI Translator service in Azure AI services. The original text, which is in English in the "Description" column, is machine-translated into various languages. All of the output is consolidated into "output.translations" array.
+This code loads [Translate](https://microsoft.github.io/SynapseML/docs/Explore%20Algorithms/AI%20Services/Overview/#translator-sample), a transformer that calls the Azure AI Translator service in Azure AI services. The original text, which is in English in the "Description" column, is machine-translated into various languages. All of the output is consolidated into "output.translations" array.
```python from synapse.ml.cognitive import Translate
display(translated_df)
Paste the following code in the sixth cell and then run it. No modifications are required.
-This code loads [AzureSearchWriter](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#azuresearch). It consumes a tabular dataset and infers a search index schema that defines one field for each column. The translations structure is an array, so it's articulated in the index as a complex collection with subfields for each language translation. The generated index will have a document key and use the default values for fields created using the [Create Index REST API](/rest/api/searchservice/create-index).
+This code loads [AzureSearchWriter](https://microsoft.github.io/SynapseML/docs/Explore%20Algorithms/AI%20Services/Overview/#azure-cognitive-search-sample). It consumes a tabular dataset and infers a search index schema that defines one field for each column. The translations structure is an array, so it's articulated in the index as a complex collection with subfields for each language translation. The generated index will have a document key and use the default values for fields created using the [Create Index REST API](/rest/api/searchservice/create-index).
```python from synapse.ml.cognitive import *
You can find and manage resources in the portal, using the **All resources** or
## Next steps
-In this tutorial, you learned about the [AzureSearchWriter](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#azuresearch) transformer in SynapseML, which is a new way of creating and loading search indexes in Azure Cognitive Search. The transformer takes structured JSON as an input. The FormOntologyLearner can provide the necessary structure for output produced by the Document Intelligence transformers in SynapseML.
+In this tutorial, you learned about the [AzureSearchWriter](https://microsoft.github.io/SynapseML/docs/Explore%20Algorithms/AI%20Services/Overview/#azure-cognitive-search-sample) transformer in SynapseML, which is a new way of creating and loading search indexes in Azure Cognitive Search. The transformer takes structured JSON as an input. The FormOntologyLearner can provide the necessary structure for output produced by the Document Intelligence transformers in SynapseML.
As a next step, review the other SynapseML tutorials that produce transformed content you might want to explore through Azure Cognitive Search:
search Tutorial Multiple Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-multiple-data-sources.md
This tutorial uses [Azure.Search.Documents](/dotnet/api/overview/azure/search) t
A finished version of the code in this tutorial can be found in the following project:
-* [multiple-data-sources/v11 (GitHub)](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/master/multiple-data-sources/v11)
+* [multiple-data-sources/v11 (GitHub)](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/multiple-data-sources/v11)
-For an earlier version of the .NET SDK, see [Microsoft.Azure.Search (version 10) code sample](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/master/multiple-data-sources/v10) on GitHub.
+For an earlier version of the .NET SDK, see [Microsoft.Azure.Search (version 10) code sample](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/multiple-data-sources/v10) on GitHub.
## Prerequisites
search Tutorial Optimize Indexing Push Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-optimize-indexing-push-api.md
Azure Cognitive Search supports [two basic approaches](search-what-is-data-import.md) for importing data into a search index: *pushing* your data into the index programmatically, or pointing an [Azure Cognitive Search indexer](search-indexer-overview.md) at a supported data source to *pull* in the data.
-This tutorial describes how to efficiently index data using the [push model](search-what-is-data-import.md#pushing-data-to-an-index) by batching requests and using an exponential backoff retry strategy. You can [download and run the sample application](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/master/optimize-data-indexing). This article explains the key aspects of the application and factors to consider when indexing data.
+This tutorial describes how to efficiently index data using the [push model](search-what-is-data-import.md#pushing-data-to-an-index) by batching requests and using an exponential backoff retry strategy. You can [download and run the sample application](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/optimize-data-indexing). This article explains the key aspects of the application and factors to consider when indexing data.
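The sample application is written in C#, but the batching-with-backoff strategy itself is language-agnostic. The following minimal sketch illustrates the idea; `upload_batch` is a stand-in callable, not the sample's API:

```python
# Illustrative only: generic exponential backoff around a batch upload call.
# `upload_batch` is a stand-in for whatever client call pushes documents;
# it is not the sample application's API.
import time


def upload_with_backoff(upload_batch, batch, max_retries: int = 5) -> None:
    delay = 2.0  # seconds; doubled after each failed attempt
    for attempt in range(max_retries):
        try:
            upload_batch(batch)
            return
        except Exception:  # in practice, catch throttling errors (HTTP 429/503)
            if attempt == max_retries - 1:
                raise
            time.sleep(delay)
            delay *= 2
```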
This tutorial uses C# and the [.NET SDK](/dotnet/api/overview/azure/search) to perform the following tasks:
The following services and tools are required for this tutorial.
## Download files
-Source code for this tutorial is in the [optimzize-data-indexing/v11](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/master/optimize-data-indexing/v11) folder in the [Azure-Samples/azure-search-dotnet-samples](https://github.com/Azure-Samples/azure-search-dotnet-samples) GitHub repository.
+Source code for this tutorial is in the [optimize-data-indexing/v11](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/optimize-data-indexing/v11) folder in the [Azure-Samples/azure-search-dotnet-samples](https://github.com/Azure-Samples/azure-search-dotnet-samples) GitHub repository.
## Key considerations
search Vector Search How To Create Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-create-index.md
Previously updated : 07/31/2023 Last updated : 08/10/2023 # Add vector fields to a search index
Last updated 07/31/2023
> [!IMPORTANT] > Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme).
-In Azure Cognitive Search, vector data is indexed as *vector fields* within a [search index](search-what-is-an-index.md), using a *vector configuration* to create the embedding space.
+In Azure Cognitive Search, vector data is indexed as *vector fields* in a [search index](search-what-is-an-index.md), using a *vector configuration* to specify the embedding space.
-+ A vector field is of type `Collection(Edm.Single)` so that it can hold single-precision floating-point values. It also has a "dimensions" property and a "vectorConfiguration" property.
+Follow these steps to index vector data:
-+ A vector configuration specifies the algorithm and parameters used during indexing to create the proximity graph. Currently, only Hierarchical Navigable Small World (HNSW) is supported.
+> [!div class="checklist"]
+> + Add one or more vector fields to the index schema.
+> + Add one or more vector configurations.
+> + Load the index with vector data [as a separate step](#load-vector-data-for-indexing), after the index schema is defined.
-During indexing, HNSW determines how closely the vectors match and stores the neighborhood information among vectors in the index. You can have multiple configurations within an index if you want different HNSW parameter combinations. As long as the vector fields contain embeddings from the same model, having a different vector configuration per field has no effect on queries.
+Code samples in the [cognitive-search-vector-pr](https://github.com/Azure/cognitive-search-vector-pr) repository demonstrate end-to-end workflows that include schema definition, vectorization, indexing, and queries.
## Prerequisites
-+ Azure Cognitive Search, in any region and on any tier.
++ Azure Cognitive Search, in any region and on any tier. Most existing services support vector search. For a small subset of services created prior to January 2019, an index containing vector fields fails on creation. In this situation, a new service must be created.
- Most existing services support vector search. For a small subset of services created prior to January 2019, an index containing vector fields fails on creation. In this situation, a new service must be created.
++ Pre-existing vector embeddings in your source documents. Cognitive Search doesn't generate vectors. We recommend [Azure OpenAI embedding models](/azure/ai-services/openai/concepts/models#embeddings-models) but you can use any model for vectorization. For more information, see [Create and use embeddings for search queries and documents](vector-search-how-to-generate-embeddings.md).
-+ Pre-existing vector embeddings in your source documents. Cognitive Search doesn't generate vectors. We recommend [Azure OpenAI embedding models](/azure/ai-services/openai/concepts/models#embeddings-models) but you can use any model for vectorization.
-
-+ You should know the dimensions limit of the model used to create the embeddings and how similarity is computed. For **text-embedding-ada-002**, the length of the numerical vector is 1536. Similarity is computed using `cosine`.
-
-> [!NOTE]
-> During query execution, your workflow must call an embedding model that converts the user's query string into a vector. Be sure to use the same embedding model for both queries and indexing. For more information, see [Create and use embeddings for search queries and documents](vector-search-how-to-generate-embeddings.md).
++ You should know the dimensions limit of the model used to create the embeddings and how similarity is computed. In Azure OpenAI, for **text-embedding-ada-002**, the length of the numerical vector is 1536. Similarity is computed using `cosine`. ## Prepare documents for indexing
Prior to indexing, assemble a document payload that includes fields of vector an
Make sure your documents:
-1. Provide a field or a metadata property that uniquely identifies each document. All search indexes require a document key. To satisfy document key requirements, your documents must have one field or property that is unique in the index. This field must be mapped to type `Edm.String` and `key=true` in the search index.
+1. Provide a field or a metadata property that uniquely identifies each document. All search indexes require a document key. To satisfy document key requirements, a source document must have one field or property that can uniquely identify it in the index. This source field must be mapped to an index field of type `Edm.String` and `key=true` in the search index.
1. Provide vector data (an array of single-precision floating point numbers) in source fields.
A short example of a documents payload that includes vector and non-vector field
## Add a vector field to the fields collection
-The schema must include fields for the document key, vector fields, and any other fields that you require for hybrid search scenarios.
+The schema must include a `vectorConfiguration` section, a field for the document key, vector fields, and any other fields that you need for hybrid search scenarios.
+++ `vectorConfiguration` specifies the algorithm and parameters used during indexing to create "nearest neighbor" information among the vector nodes. Currently, only Hierarchical Navigable Small World (HNSW) is supported. +++ Vector fields are of type `Collection(Edm.Single)` and hold single-precision floating-point values. A field of this type also has a `dimensions` property and a `vectorConfiguration` property.+
+During indexing, HNSW determines how closely the vectors match and stores the neighborhood information as a proximity graph in the index. You can have multiple configurations within an index if you want different HNSW parameter combinations. As long as the vector fields contain embeddings from the same model, having a different vector configuration per field has no effect on queries.
+
+You can use the Azure portal, REST APIs, or the beta packages of the Azure SDKs to index vectors.
### [**Azure portal**](#tab/portal-add-field)
-You can use the index designer in the Azure portal to add vector field definitions. If the index doesn't have a vector configuration, you're prompted to create one when you add your first vector field to the index.
+Use the index designer in the Azure portal to add vector field definitions. If the index doesn't have a vector configuration, you're prompted to create one when you add your first vector field to the index.
-Although you can add a field definition, there's no portal support for loading vectors into fields. Use the REST APIs or an SDK for data import.
+Although you can add a field to an index, there's no portal (Import data wizard) support for loading it with vector data. Instead, use the REST APIs or an SDK for data import.
1. [Sign in to Azure portal](https://portal.azure.com) and open your search service page in a browser.
Although you can add a field definition, there's no portal support for loading v
+ "efSearch default is 500. It's the number of nearest neighbors used during search. + "Similarity metric" should be "cosine" if you're using Azure OpenAI, otherwise use the similarity metric of the embedding model. Supported values are `cosine`, `dotProduct`, `euclidean`.
- If you're familiar with HNSW parameters, you might be wondering about "k" number of nearest neighbors to return in the result. In Cognitive Search, that value is set on the query request.
+ If you're familiar with HNSW parameters, you might be wondering about how to set the "k" number of nearest neighbors to return in the result. In Cognitive Search, that value is set on the [query request](vector-search-how-to-query.md).
1. Select **Save** to save the vector configuration and the field definition. ### [**REST API**](#tab/rest-add-field)
-In the following example, "title" and "content" contain textual content used in full text search and semantic search, while "titleVector" and "contentVector" contain vector data.
+Use the **2023-07-01-Preview** REST API for vector scenarios. If you're updating an existing index to include vector fields, make sure the `allowIndexDowntime` query parameter is set to `true`.
-Updating an existing index with vector fields requires `allowIndexDowntime` query parameter to be `true`.
+In the following REST API example, "title" and "content" contain textual content used in full text search and semantic search, while "titleVector" and "contentVector" contain vector data.
1. Use the [Create or Update Index Preview REST API](/rest/api/searchservice/preview-api/create-or-update-index) to create the index.
Updating an existing index with vector fields requires `allowIndexDowntime` quer
} ```
+### [**.NET**](#tab/dotnet-add-field)
+++ Use the [**Azure.Search.Documents 11.5.0-beta.4**](https://www.nuget.org/packages/Azure.Search.Documents/11.5.0-beta.4) package for vector scenarios. +++ See the [cognitive-search-vector-pr](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet) GitHub repository for .NET code samples.+
+### [**Python**](#tab/python-add-field)
+++ Use the [**Azure.Search.Documents 11.4.0b8**](https://pypi.org/project/azure-search-documents/11.4.0b8/) package for vector scenarios. +++ See the [cognitive-search-vector-pr](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python) GitHub repository for Python code samples.+
+### [**JavaScript**](#tab/js-add-field)
+++ Use the [**@azure/search-documents 12.0.0-beta.2**](https://www.npmjs.com/package/@azure/search-documents/v/12.0.0-beta.2) package for vector scenarios. +++ See the [cognitive-search-vector-pr](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-javascript) GitHub repository for JavaScript code samples.+ ## Load vector data for indexing
-Content that you provide for indexing must conform to the index schema and include a unique string value for the document key. Vector data is loaded into one or more vector fields, which can coexist with other fields containing alphanumeric text.
+Content that you provide for indexing must conform to the index schema and include a unique string value for the document key. Vector data is loaded into one or more vector fields, which can coexist with other fields containing alphanumeric content.
-You can use either [push or pull methodologies](search-what-is-data-import.md) for data ingestion. You can't use the portal for this step.
+You can use either [push or pull methodologies](search-what-is-data-import.md) for data ingestion. You can't use the portal (Import data wizard) for this step.
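As a hedged sketch of the push approach (package version, index, and field names are assumptions, not values from this article), the beta Python SDK can upload documents whose vector fields are plain lists of floats; the tabs that follow cover each ingestion option in more detail.

```python
# Hedged sketch of pushing documents with a vector field using the beta
# Python SDK (azure-search-documents 11.4.0b8). Index and field names are
# placeholders and must match your own schema.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(endpoint="https://my-search-service.search.windows.net",
                      index_name="my-vector-index",          # placeholder
                      credential=AzureKeyCredential("<admin-api-key>"))

documents = [
    {
        "id": "1",                                  # document key (Edm.String)
        "title": "Azure Cognitive Search",
        "contentVector": [0.011, -0.023, 0.047],    # truncated; a real vector has, say, 1536 entries
    }
]
result = client.upload_documents(documents=documents)
print([r.succeeded for r in result])
```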
### [**Push APIs**](#tab/push)
Data sources provide the vectors in whatever format the data source supports (su
For validation purposes, you can query the index using Search Explorer in Azure portal or a REST API call. Because Cognitive Search can't convert a vector to human-readable text, try to return fields from the same document that provide evidence of the match. For example, if the vector query targets the "titleVector" field, you could select "title" for the search results.
-### [**Azure portal**](#tab/portal-add-field)
+Fields must be attributed as "retrievable" to be included in the results.
-You can use [Search Explorer](search-explorer.md) to query an index that contains vector fields. However, the query string in Search Explorer is plain text and isn't converted to a vector, so you can't use Search Explorer to test vector queries, but you can verify that data import occurred and that vector fields are populated with the expected numeric values.
+### [**Azure portal**](#tab/portal-check-index)
-Fields must be attributed as "retrievable" to be included in the results.
+You can use [Search Explorer](search-explorer.md) to query an index. Search Explorer has two views: Query view (default) and JSON view.
-You can issue an empty search (`search=*`) to return all fields, including vector fields. You can also `$select` specific fields for the result set.
++ [Use the JSON view for vector queries](vector-search-how-to-query.md), pasting in a JSON definition of the vector query you want to execute.
-### [**REST API**](#tab/rest-add-field)
++ Use the default Query view for a quick confirmation that the index contains vectors. The query view is for full text search. Although you can't use it for vector queries, you can send an empty search (`search=*`) to check for content. The content of all fields, including vector fields, is returned as plain text.+
+### [**REST API**](#tab/rest-check-index)
The following REST API example is a vector query, but it returns only non-vector fields (title, content, category). Only fields marked as "retrievable" can be returned in search results.
api-key: {{admin-api-key}}
As a next step, we recommend [Query vector data in a search index](vector-search-how-to-query.md).
-You might also consider reviewing the demo code for [Python](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python) or [C#](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet).
+You might also consider reviewing the demo code for [Python](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python), [C#](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet) or [JavaScript](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-javascript).
search Vector Search How To Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-query.md
Previously updated : 07/31/2023 Last updated : 08/10/2023 # Query vector data in a search index
Last updated 07/31/2023
> [!IMPORTANT] > Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme).
-In Azure Cognitive Search, if you added vector fields to a search index, this article explains how to query those fields. It also explains how to combine vector queries with full text search and semantic search for hybrid query combination scenarios.
+In Azure Cognitive Search, if you added vector fields to a search index, this article explains how to:
-Query execution in Cognitive Search doesn't include vector conversion of the input string. Encoding (text-to-vector) of the query string requires that you pass the text to an embedding model for vectorization. You would then pass the output of the call to the embedding model to the search engine for similarity search over vector fields.
+> [!div class="checklist"]
+> + [Query vector fields](#query-syntax-for-vector-search).
+> + [Combine vector, full text search, and semantic search in a hybrid query](#query-syntax-for-hybrid-search).
+> + [Query multiple vector fields at once](#query-syntax-for-vector-query-over-multiple-fields).
+> + [Run multiple vector queries in parallel](#query-syntax-for-multiple-vector-queries).
-All results are returned in plain text, including vectors. If you use Search Explorer in the Azure portal to query an index that contains vectors, the numeric vectors are returned in plain text. Because numeric vectors aren't useful in search results, choose other fields in the index as a proxy for the vector match. For example, if an index has "descriptionVector" and "descriptionText" fields, the query can match on "descriptionVector" but the search result shows "descriptionText". Use the `select` parameter to specify only human-readable fields in the results.
+Code samples in the [cognitive-search-vector-pr](https://github.com/Azure/cognitive-search-vector-pr) repository demonstrate end-to-end workflows that include schema definition, vectorization, indexing, and queries.
## Prerequisites
All results are returned in plain text, including vectors. If you use Search Exp
+ A search index containing vector fields. See [Add vector fields to a search index](vector-search-how-to-create-index.md).
-+ Use REST API version 2023-07-01-preview or Azure portal to query vector fields. You can also use [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr/tree/main).
++ Use REST API version **2023-07-01-Preview**, the [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr/tree/main), or Search Explorer in the Azure portal. + (Optional) If you want to also use [semantic search (preview)](semantic-search-overview.md) and vector search together, your search service must be Basic tier or higher, with [semantic search enabled](semantic-search-overview.md#enable-semantic-search).
+## Limitations
+
+Cognitive Search doesn't provide built-in vectorization of the query input string. Encoding (text-to-vector) of the query string requires that you pass the query string to an embedding model for vectorization. You would then pass the response to the search engine for similarity search over vector fields.
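For example, here's a hedged sketch of that text-to-vector step using the pre-1.0 `openai` Python package against an Azure OpenAI deployment; the endpoint, deployment name, and key are placeholders. The article's later REST example shows the same step over HTTP.

```python
# Hedged sketch: vectorize a query string with an Azure OpenAI embedding
# deployment before sending it to Cognitive Search. Assumes the pre-1.0
# "openai" Python package; endpoint, deployment, and key are placeholders.
import openai

openai.api_type = "azure"
openai.api_base = "https://my-openai-resource.openai.azure.com"   # placeholder
openai.api_version = "2023-05-15"
openai.api_key = "<azure-openai-key>"

response = openai.Embedding.create(
    input="what Azure services support full text search",
    engine="my-ada-002-deployment",   # your text-embedding-ada-002 deployment name
)
query_vector = response["data"][0]["embedding"]   # 1536 floats for ada-002
# Pass query_vector as the vector value in the Cognitive Search query request.
```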
+
+All results are returned in plain text, including vectors. If you use Search Explorer in the Azure portal to query an index that contains vectors, the numeric vectors are returned in plain text. Because numeric vectors aren't useful in search results, choose other fields in the index as a proxy for the vector match. For example, if an index has "descriptionVector" and "descriptionText" fields, the query can match on "descriptionVector" but the search result shows "descriptionText". Use the `select` parameter to specify only human-readable fields in the results.
+ ## Check your index for vector fields If you aren't sure whether your search index already has vector fields, look for:
You can also send an empty query (`search=*`) against the index. If the vector f
To query a vector field, the query itself must be a vector. To convert a text query string provided by a user into a vector representation, your application must call an embedding library that provides this capability. Use the same embedding library that you used to generate embeddings in the source documents.
-Here's an example of a query string submitted to a deployment of an Azure OpenAI model:
+You can find multiple instances of query string conversion in the [cognitive-search-vector-pr](https://github.com/Azure/cognitive-search-vector-pr/) repository for each of the Azure SDKs.
+
+Here's a REST API example of a query string submitted to a deployment of an Azure OpenAI model:
```http POST https://{{openai-service-name}}.openai.azure.com/openai/deployments/{{openai-deployment-name}}/embeddings?api-version={{openai-api-version}}
api-key: {{admin-api-key}}
} ```
-The expected response is 202 for a successful call to the deployed model. The body of the response provides the vector representation of the "input". The vector for the query is in the "embedding" field. For testing purposes, you would copy the value of the "embedding" array into "vector.value" in a query request, using syntax shown in the next several sections. The actual response for this call to the deployment model includes 1536 embeddings, trimmed here for brevity.
+The expected response is 202 for a successful call to the deployed model.
+The "embedding" field in the body of the response is the vector representation of the query string "input". For testing purposes, you would copy the value of the "embedding" array into "vector.value" in a query request, using syntax shown in the next several sections.
+
+The actual response for this POST call to the deployment model includes 1536 embeddings, trimmed here to just the first few vectors for readability.
```json {
The expected response is 202 for a successful call to the deployed model. The bo
## Query syntax for vector search
+You can use the Azure portal, REST APIs, or the beta packages of the Azure SDKs to query vectors.
+
+### [**Azure portal**](#tab/portal-vector-query)
+
+Be sure to use the **JSON view** and formulate the query in JSON. The search bar in **Query view** is for full text search and treats any vector input as plain text.
+
+1. Sign in to Azure portal and find your search service.
+
+1. Under **Search management** and **Indexes**, select the index.
+
+ :::image type="content" source="media/vector-search-how-to-query/select-index.png" alt-text="Screenshot of the indexes menu." border="true":::
+
+1. On Search Explorer, under **View**, select **JSON view**.
+
+ :::image type="content" source="media/vector-search-how-to-query/select-json-view.png" alt-text="Screenshot of the index list." border="true":::
+
+1. By default, the search API is **2023-07-01-Preview**. This is the correct API version for vector search.
+
+1. Paste in a JSON vector query, and then select **Search**. You can use the REST example as a template for your JSON query.
+
+ :::image type="content" source="media/vector-search-how-to-query/paste-vector-query.png" alt-text="Screenshot of the JSON query." border="true":::
+
+### [**REST API**](#tab/rest-vector-query)
+ In this vector query, which is shortened for brevity, the "value" contains the vectorized text of the query input. The "fields" property specifies which vector fields are searched. The "k" property specifies the number of nearest neighbors to return as top hits.
-The sample vector query for this article is: `"what Azure services support full text search"`. The query targets the "contentVector" field.
+In the following example, the vector is a representation of this query string: `"what Azure services support full text search"`. The query request targets the "contentVector" field. The actual vector has 1536 embeddings. It's trimmed in this example for readability.
```http POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}}
api-key: {{admin-api-key}}
The response includes 5 matches, and each result provides a search score, title, content, and category. In a similarity search, the response always includes "k" matches, even if the similarity is weak. For indexes that have fewer than "k" documents, only that number of documents is returned.
-Notice that "select" returns textual fields from the index. Although the vector field is "retrievable" in this example, its content isn't usable as a search result.
+Notice that "select" returns textual fields from the index. Although the vector field is "retrievable" in this example, its content isn't usable as a search result, so it's not included in the results.
+
+### [**.NET**](#tab/dotnet-vector-query)
+++ Use the [**Azure.Search.Documents 11.5.0-beta.4**](https://www.nuget.org/packages/Azure.Search.Documents/11.5.0-beta.4) package for vector scenarios. +++ See the [cognitive-search-vector-pr](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet) GitHub repository for .NET code samples.+
+### [**Python**](#tab/python-vector-query)
+++ Use the [**Azure.Search.Documents 11.4.0b8**](https://pypi.org/project/azure-search-documents/11.4.0b8/) package for vector scenarios. +++ See the [cognitive-search-vector-pr](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python) GitHub repository for Python code samples.+
+### [**JavaScript**](#tab/js-vector-query)
+++ Use the [**@azure/search-documents 12.0.0-beta.2**](https://www.npmjs.com/package/@azure/search-documents/v/12.0.0-beta.2) package for vector scenarios. +++ See the [cognitive-search-vector-pr](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-javascript) GitHub repository for JavaScript code samples.++ ## Query syntax for hybrid search
api-key: {{admin-api-key}}
## Query syntax for vector query over multiple fields
-You can set the "vectors.fields" property to multiple vector fields. For example, the Postman collection has vector fields named "titleVector" and "contentVector". Your vector query executes over both the "titleVector" and "contentVector" fields, which must have the same embedding space since they share the same query vector.
+You can set the "vectors.fields" property to multiple vector fields. For example, the Postman collection has vector fields named "titleVector" and "contentVector". A single vector query executes over both the "titleVector" and "contentVector" fields, which must have the same embedding space since they share the same query vector.
```http POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}}
Multiple sets are created if the query targets multiple vector fields, or if the
## Next steps
-As a next step, we recommend reviewing the demo code for [Python](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python), or [C#](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet).
+As a next step, we recommend reviewing the demo code for [Python](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python), [C#](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet) or [JavaScript](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-javascript).
search Vector Search Index Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-index-size.md
Previously updated : 07/07/2023 Last updated : 08/09/2023 # Vector index size limit
The service enforces a vector index size quota **based on the number of partitio
Each extra partition that you add to your service increases the available vector index size quota. This quota is a hard limit to ensure your service remains healthy. It also means that if vector size exceeds this limit, any further indexing requests will result in failure. You can resume indexing once you free up available quota by either deleting some vector documents or by scaling up in partitions.
+The following table repurposes information from [Search service limits](search-limits-quotas-capacity.md). The limits are for newer search services.
+
+| Tier | Storage (GB) |Partitions | Vector quota per partition (GB) | Vector quota per service (GB) |
+| -- | - | -|-- | - |
+| Basic | 2 | 1 | 1 | 1 |
+| S1 | 25 | 12 | 3 | 36 |
+| S2 | 100 | 12 |12 | 144 |
+| S3 | 200 | 12 |36 | 432 |
+| L1 | 1,000 | 12 |12 | 144 |
+| L2 | 2,000 | 12 |36 | 432 |
+
+**Key points**:
+++ Storage quota is the physical storage available to the search service for all search data. Basic has one partition sized at 2 GB that must accommodate all of the data on the service. S1 can have 12 partitions sized at 25 GB each, for a maximum limit of 300 GB for all search data. +++ Vector quotas are for the vector indexes created for each vector field, and they're enforced at the partition level. On Basic, the sum total of all vector fields can't be more than 1 GB because Basic only has one partition. On S1, which can have up to 12 partitions, the quota for vector data is 3 GB if you've only allocated one partition, or up to 36 GB if you've allocated 12 partitions. For more information about partitions and replicas, see [Estimate and manage capacity](search-capacity-planning.md).+
-+ [GET Index Statistics](/rest/api/searchservice/preview-api/get-index-statistics) returns quota and usage for a given index.
++ [GET Index Statistics](/rest/api/searchservice/preview-api/get-index-statistics) returns usage for a given index. + [GET Service Statistics](/rest/api/searchservice/preview-api/get-service-statistics) returns quota and usage for the search service all-up.
+For a visual, here's the sample response for a Basic search service that has the quickstart vector search index. `storageSize` and `vectorIndexSize` are reported in bytes. Notice that you'll need the preview API to return vector statistics.
+
+```json
+{
+ "@odata.context": "https://my-demo.search.windows.net/$metadata#Microsoft.Azure.Search.V2023_07_01_Preview.IndexStatistics",
+ "documentCount": 108,
+ "storageSize": 5853396,
+ "vectorIndexSize": 1342756
+}
+```
+
+Return service statistics to compare usage against available quota at the service level:
+
+```json
+{
+ "@odata.context": "https://my-demo.search.windows.net/$metadata#Microsoft.Azure.Search.V2023_07_01_Preview.ServiceStatistics",
+ "counters": {
+ "documentCount": {
+ "usage": 15377,
+ "quota": null
+ },
+ "indexesCount": {
+ "usage": 13,
+ "quota": 15
+ },
+ . . .
+ "storageSize": {
+ "usage": 39862913,
+ "quota": 2147483648
+ },
+ . . .
+ "vectorIndexSize": {
+ "usage": 2685436,
+ "quota": 1073741824
+ }
+ },
+ "limits": {
+ "maxFieldsPerIndex": 1000,
+ "maxFieldNestingDepthPerIndex": 10,
+ "maxComplexCollectionFieldsPerIndex": 40,
+ "maxComplexObjectsInCollectionsPerDocument": 3000
+ }
+}
+```
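To retrieve the same numbers programmatically, a hedged sketch using Python's `requests` library might look like the following; the service name, index name, and key are placeholders.

```python
# Hedged sketch: retrieve vector index size usage and quota with the preview
# REST APIs shown above. Service/index names and the admin key are placeholders.
import requests

SERVICE = "my-demo"                      # placeholder search service name
INDEX = "my-vector-index"                # placeholder index name
API_VERSION = "2023-07-01-Preview"
HEADERS = {"api-key": "<admin-api-key>"}

index_stats = requests.get(
    f"https://{SERVICE}.search.windows.net/indexes/{INDEX}/stats?api-version={API_VERSION}",
    headers=HEADERS).json()
service_stats = requests.get(
    f"https://{SERVICE}.search.windows.net/servicestats?api-version={API_VERSION}",
    headers=HEADERS).json()

print("Index vector size (bytes):", index_stats["vectorIndexSize"])
usage = service_stats["counters"]["vectorIndexSize"]
print(f"Service-wide vector usage: {usage['usage']} of {usage['quota']} bytes")
```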
+ ## Factors affecting vector index size There are three major components that affect the size of your internal vector index:
search Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md
Previously updated : 07/28/2023 Last updated : 08/10/2023 # Vector search within Azure Cognitive Search
We recommend this article for background, but if you'd rather get started, follo
> [!div class="checklist"] > + [Generate vector embeddings](vector-search-how-to-generate-embeddings.md) before you start.
-> + [Add vector fields to an index](vector-search-how-to-create-index.md) using Azure portal or the [**2023-07-01-Preview REST APIs**](/rest/api/searchservice/index-preview).
+> + [Add vector fields to an index](vector-search-how-to-create-index.md).
> + [Load vector data](search-what-is-data-import.md) into an index using push or pull methodologies.
-> + [Query vector data](vector-search-how-to-query.md) using Azure portal or the preview REST APIs.
+> + [Query vector data](vector-search-how-to-query.md) using the Azure portal, preview REST APIs, or beta SDK packages.
+
+You could also begin with the [vector quickstart](search-get-started-vector.md) or the [code samples on GitHub](https://github.com/Azure/cognitive-search-vector-pr).
+
+Support for vector search is in public preview and available through the [**2023-07-01-Preview REST APIs**](/rest/api/searchservice/index-preview), Azure portal, and the more recent beta packages of the Azure SDKs for [.NET](https://www.nuget.org/packages/Azure.Search.Documents/11.5.0-beta.4), [Python](https://pypi.org/project/azure-search-documents/11.4.0b8/), and [JavaScript](https://www.npmjs.com/package/@azure/search-documents/v/12.0.0-beta.2).
## What's vector search in Cognitive Search? Vector search is a new capability for indexing, storing, and retrieving vector embeddings from a search index. You can use it to power similarity search, multi-modal search, recommendations engines, or applications implementing the [Retrieval Augmented Generation (RAG) architecture](https://arxiv.org/abs/2005.11401).
-Support for vector search is in public preview and available through the [**2023-07-01-Preview REST APIs**](/rest/api/searchservice/index-preview). To use vector search, define a *vector field* in the index definition and index documents with vector data. Then you can issue a search request with a query vector, returning documents with the requested `k` nearest neighbors (kNN) according to the selected vector similarity metric.
+The following diagram shows the indexing and query workflows for vector search.
++
+On the indexing side, prepare source documents that contain embeddings. Cognitive Search doesn't generate embeddings, so your solution should include calls to Azure OpenAI or other models that can transform image, audio, text, and other content into vector representations. Add a *vector field* to your index definition on Cognitive Search. Load the index with a documents payload that includes the vectors. Your index is now ready to query.
+
+On the query side, in your client application, collect the query input. Add a step that converts the input into a vector, and then send the vector query to your index on Cognitive Search for a similarity search. Cognitive Search returns documents with the requested `k` nearest neighbors (kNN) in the results.
-You can index vector data as fields in documents alongside textual and other types of content. Vector queries can be issued independently or in combination with other query types, including term queries (hybrid search) and filters in the same search request.
+You can index vector data as fields in documents alongside alphanumeric content. Vector queries can be issued singly or in combination with other query types, including term queries (hybrid search) and filters and semantic re-ranking in the same search request.
## Limitations
-Azure Cognitive Search doesn't generate vector embeddings for your content. You need to provide the embeddings yourself by using a service such as Azure OpenAI. See [How to generate embeddings](./vector-search-how-to-generate-embeddings.md) to learn more.
+Azure Cognitive Search doesn't generate vector embeddings for your content. You need to provide the embeddings yourself by using a solution like Azure OpenAI. See [How to generate embeddings](vector-search-how-to-generate-embeddings.md) to learn more.
-Vector search does not support customer-managed keys (CMK) at this time. This means you will not be able to add vector fields to an index with CMK enabled.
+Vector search doesn't support customer-managed keys (CMK) at this time. This means you won't be able to add vector fields to an index with CMK enabled.
## Availability and pricing
security Recover From Identity Compromise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/recover-from-identity-compromise.md
If there was an attack, you don't want the attacker to retain access at all. Mak
For more information, see: - [Revoke user access in Azure Active Directory](../../active-directory/enterprise-users/users-revoke-access.md)-- [Invoke-MgInvalidateUserRefreshToken Microsoft Graph PowerShell docs](/powershell/module/microsoft.graph.users.actions/invoke-mginvalidateuserrefreshtoken) ### Replace your ADFS servers
In addition to the recommended actions listed above, we recommend that you consi
For more information, see: - [Revoke user access in an emergency in Azure Active Directory](../../active-directory/enterprise-users/users-revoke-access.md)
- - [Invoke-MgInvalidateUserRefreshToken Microsoft Graph PowerShell docs](/powershell/module/microsoft.graph.users.actions/invoke-mginvalidateuserrefreshtoken)
## Next steps
sentinel Deploy Dynamics 365 Finance Operations Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/dynamics-365/deploy-dynamics-365-finance-operations-solution.md
Before you begin, verify that:
:::image type="content" source="media/deploy-dynamics-365-finance-operations-solution/environment-version-information.png" alt-text="Screenshot of the Finance and Operations environment version information." lightbox="media/deploy-dynamics-365-finance-operations-solution/environment-version-information.png":::
-1. To collect your environment URL, select **Log on to environment** and save the URL in the browser to use [when you deploy the ARM template](#deploy-the-data-connector). For example: https://sentineldevc055b257489f70f5devaos.axcloud.dynamics.com.
+1. To collect your environment URL, select **Log on to environment** and save the URL in the browser to use [when you deploy the ARM template](#deploy-the-data-connector). For example: ``` https://sentineldevc055b257489f70f5devaos.axcloud.dynamics.com ```.
> [!NOTE] > The URL may look different, depending on the environment you use, for example, you could be using a sandbox, or a cloud hosted environment. Remove any trailing slashes: `/`.
sentinel Collect Sap Hana Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/collect-sap-hana-audit-logs.md
Title: Collect SAP HANA audit logs in Microsoft Sentinel | Microsoft Docs description: This article explains how to collect audit logs from your SAP HANA database.--++ Last updated 05/24/2023
sentinel Configuration File Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configuration-file-reference.md
Title: Configuration file reference | Microsoft Docs description: Configuration file reference--++ Last updated 03/02/2022
sentinel Configure Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-audit.md
Title: Enable and configure SAP auditing for Microsoft Sentinel | Microsoft Docs description: This article shows you how to enable and configure auditing for the Microsoft Sentinel solution for SAP® applications, so that you can have complete visibility into your SAP solution.--++ Last updated 04/27/2022
sentinel Reference Kickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-kickstart.md
Title: Microsoft Sentinel solution for SAP® applications container kickstart deployment script reference | Microsoft Docs description: Description of command line options available with kickstart deployment script--++ Last updated 05/24/2023
sentinel Reference Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-update.md
Title: Microsoft Sentinel solution for SAP® applications container update script reference | Microsoft Docs description: Description of command line options available with update deployment script--++ Last updated 05/24/2023
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-log-reference.md
Title: Microsoft Sentinel solution for SAP® applications - data reference description: Learn about the SAP logs, tables, and functions available from the Microsoft Sentinel solution for SAP® applications.--++ Last updated 05/24/2023
sentinel Update Sap Data Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/update-sap-data-connector.md
Title: Update Microsoft Sentinel's SAP data connector agent description: This article shows you how to update an already existing SAP data connector to its latest version.--++ Last updated 12/31/2022
service-fabric Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/whats-new.md
+
+ Title: What's new for Service Fabric
+description: Learn about what's new for Service Fabric.
+++ Last updated : 08/09/2023+++
+# What's new for Service Fabric
+
+This article describes what's new for Service Fabric managed clusters and classic.
+
+For old and new Service Fabric runtime release announcements, see [Service Fabric releases](release-notes.md).
+
+## Service Fabric managed clusters
+
+Service Fabric managed clusters are improved on an ongoing basis. To stay up to date with the most recent developments, this article provides information about:
+* The latest releases
+* Known issues
+* Bug fixes
+* Retired functionality
+
+### July 2023
+
+Service Fabric managed clusters adds support for:
+* [Using NAT gateways to route network traffic](how-to-managed-cluster-nat-gateway.md)
+* [Enabling public IPv4 on secondary node types](how-to-managed-cluster-networking.md#enable-public-ip)
+* [Using public IP prefixes to reserve ranges of public IP addresses for your endpoints in Azure](how-to-managed-cluster-public-ip-prefix.md)
+
+### January 2023
+
+Service Fabric managed clusters adds support for:
+* Creating different subnets for each node type using custom virtual networks.
+
+## Next steps
+
+For updates and announcements about Azure, see the [Microsoft Azure Blog](https://azure.microsoft.com/blog/).
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
This table summarizes support for the Azure VM OS disk, data disk, and temporary disk.
**Component** | **Support** | **Details**
--- | --- | ---
-Disk renaming | Supported | You can rename NICs in the target region based on your organization's naming conventions.
+Disk renaming | Supported |
OS disk maximum size | 4096 GB | [Learn more](../virtual-machines/managed-disks-overview.md) about VM disks.
Temporary disk | Not supported | The temporary disk is always excluded from replication.<br/><br/> Don't store any persistent data on the temporary disk. [Learn more](../virtual-machines/managed-disks-overview.md).
Data disk maximum size | 32 TB for managed disks<br></br>4 TB for unmanaged disks |
spring-apps Concept App Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concept-app-customer-responsibilities.md
+
+ Title: Version support for Java, Spring Boot, and more
+
+description: This article describes customer responsibilities developing Azure Spring Apps.
++++ Last updated : 08/10/2023++
+# Version support for Java, Spring Boot, and more
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ✔️ Enterprise
+
+This article describes the support policy for Java, Spring Boot, and Spring Cloud versions for all Azure Spring Apps plans, and versions of other SDKs and OS images for the Enterprise plan.
+
+Azure Spring Apps provides and maintains the SDKs and base OS images necessary to run your apps. To make sure your applications are compatible with such managed components, follow the version support policy for the components described in this article.
+
+## Version support for all plans
+
+The following sections describe the version support that applies to all plans.
+
+### Java runtime version
+
+You can choose any LTS Java version as the major version that's officially supported and receives regular updates.
+
+For more information, see [Java long-term support for Azure and Azure Stack](/azure/developer/java/fundamentals/java-support-on-azure).
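
For example, you can pin an LTS Java runtime explicitly when you create or deploy an app. The following Azure CLI sketch uses placeholder resource names; `--runtime-version` accepts values such as `Java_11` and `Java_17`.

```azurecli
# A minimal sketch: pin an LTS Java runtime when creating an app.
# Resource, service, and app names are placeholders.
az spring app create \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --name <app-name> \
    --runtime-version Java_17
```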
+
+### Spring Boot and Spring Cloud versions
+
+You can choose any version of Spring Boot or Spring Cloud that's compatible with the Java version you installed.
+
+For new versions, Azure Spring Apps supports the latest Spring Boot or Spring Cloud major version starting 30 days after its release. The latest minor version is supported as soon as it's released.
+
+For old versions, Azure Spring Apps doesn't require you to upgrade Spring Boot or Spring Cloud to receive support. However, with the officially supported new versions, you can get the best experience with some of the managed components - for example, Config Server and Eureka Server for the Standard consumption and dedicated plan and the Standard plan, [Tanzu components](vmware-tanzu-components.md) for the Enterprise plan, and metric collection for all plans.
+
+For more information, see the official support timeline of [Spring Boot](https://spring.io/projects/spring-boot#support) and [Spring Cloud](https://spring.io/projects/spring-cloud#overview). The Enterprise plan provides commercial support for Spring Boot, while the other plans provide OSS support.
+
+## Version support for the Enterprise plan
+
+The following sections describe the version support that applies to the Enterprise plan.
+
+### Polyglot SDKs
+
+You can deploy polyglot applications to the Enterprise plan with source code. To enjoy the best stability, use SDKs with LTS versions that are officially supported.
+
+When you deploy your polyglot applications to the Enterprise plan, assign specific LTS versions for the SDKs; otherwise, the default SDK version might change during the regular upgrades for builder components. A version-pinning sketch follows the table below. For more information about deploying polyglot apps, see [How to deploy polyglot apps in the Azure Spring Apps Enterprise plan](how-to-enterprise-deploy-polyglot-apps.md).
+
+| Type | Support policy |
+|--|-|
+| Java | [Java support on Azure](/azure/developer/java/fundamentals/java-support-on-azure) |
+| Tomcat | [Tomcat versions](https://tomcat.apache.org/whichversion.html) |
+| .NET | [.NET and .NET core support policy](https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core) |
+| Python | [Status of Python versions](https://devguide.python.org/versions/) |
+| Go | [Go release history](https://go.dev/doc/devel/release) |
+| NodeJS | [Nodejs releases](https://nodejs.dev/en/about/releases/) |
+| PHP | [PHP supported versions](https://www.php.net/supported-versions.php) |
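
As a hedged illustration of pinning an SDK version at deployment time, the following sketch deploys source code with a Paketo buildpack environment variable. The resource names are placeholders, and `BP_JVM_VERSION` is one example of a build environment key you might set.

```azurecli
# A sketch for the Enterprise plan: deploy source code and pin the JVM major version
# through a build environment variable so regular builder upgrades don't change it.
# All names are placeholders.
az spring app deploy \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --name <app-name> \
    --source-path . \
    --build-env "BP_JVM_VERSION=17.*"
```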
+
+### Stack image support
+
+You can choose any stack image during builder configuration. We recommend using an LTS image that's officially supported. For more information, see [The Ubuntu lifecycle and release cadence](https://ubuntu.com/about/release-cycle#ubuntu).
+
+## Keep track of version upgrade
+
+Prepare early for the deprecation of any major component LTS version that your applications rely on. You'll receive notification from Microsoft one month prior to the end of support on Azure Spring Apps.
+
+For regular upgrades, you can find specific information in your activity log after the upgrade is complete.
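
To review those upgrade records, you can query the activity log for the resource group that contains your Azure Spring Apps instance. The following sketch assumes a placeholder resource group name and simply lists recent entries; filter further as needed.

```azurecli
# A sketch: list recent activity log entries for the resource group that contains
# your Azure Spring Apps instance (placeholder name), covering the last 7 days.
az monitor activity-log list \
    --resource-group <resource-group-name> \
    --offset 7d \
    --output table
```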
spring-apps How To Config Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-config-server.md
The following table lists the configurable properties that you can use to set up
| `password` | No | The password or personal access token used to access the Git repository server. Required when the Git repository server supports HTTP basic authentication. | > [!NOTE]
-> Many Git repository servers support the use of tokens rather than passwords for HTTP basic authentication. Some repositories allow tokens to persist indefinitely. However, some Git repository servers, including Azure DevOps Server, force tokens to expire in a few hours. Repositories that cause tokens to expire shouldn't use token-based authentication with Azure Spring Apps.
+> Many Git repository servers support the use of tokens rather than passwords for HTTP basic authentication. Some repositories allow tokens to persist indefinitely. However, some Git repository servers, including Azure DevOps Server, force tokens to expire in a few hours. Repositories that cause tokens to expire shouldn't use token-based authentication with Azure Spring Apps. If you use such a token, remember to update it before it expires.
> > GitHub has removed support for password authentication, so you need to use a personal access token instead of password authentication for GitHub. For more information, see [Token authentication requirements for Git operations](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/).
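
For example, the Git repository settings described above can also be set from the Azure CLI. The following sketch assumes the `az spring config-server git set` command with placeholder values, using a personal access token in place of a password:

```azurecli
# A sketch: point Config Server at a private Git repository by using a
# personal access token (PAT) for HTTP basic authentication. Values are placeholders.
az spring config-server git set \
    --resource-group <resource-group-name> \
    --name <service-instance-name> \
    --uri https://github.com/<your-org>/<your-config-repo> \
    --label main \
    --username <git-username> \
    --password <personal-access-token>
```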
Use the steps in this section to enter repository information for a public or pr
:::image type="content" source="media/how-to-config-server/basic-auth.png" lightbox="media/how-to-config-server/basic-auth.png" alt-text="Screenshot of the Default repository section showing authentication settings for Basic authentication."::: > [!NOTE]
- > Many Git repository servers support the use of tokens rather than passwords for HTTP basic authentication. Some repositories allow tokens to persist indefinitely. However, some Git repository servers, including Azure DevOps Server, force tokens to expire in a few hours. Repositories that cause tokens to expire shouldn't use token-based authentication with Azure Spring Apps.
+ > Many Git repository servers support the use of tokens rather than passwords for HTTP basic authentication. Some repositories allow tokens to persist indefinitely. However, some Git repository servers, including Azure DevOps Server, force tokens to expire in a few hours. Repositories that cause tokens to expire shouldn't use token-based authentication with Azure Spring Apps. If you use such a token, remember to update it before it expires.
> > GitHub has removed support for password authentication, so you need to use a personal access token instead of password authentication for GitHub. For more information, see [Token authentication requirements for Git operations](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/).
spring-apps How To Configure Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-enterprise-spring-cloud-gateway.md
Title: How to configure VMware Spring Cloud Gateway with the Azure Spring Apps Enterprise plan
-description: Shows you how to configure VMware Spring Cloud Gateway with the Azure Spring Apps Enterprise plan.
+ Title: Configure VMware Spring Cloud Gateway
+description: Learn how to configure VMware Spring Cloud Gateway with the Azure Spring Apps Enterprise plan.
**This article applies to:** ❌ Basic/Standard ✔️ Enterprise
-This article shows you how to configure Spring Cloud Gateway for VMware Tanzu with the Azure Spring Apps Enterprise plan.
+This article shows you how to configure VMware Spring Cloud Gateway for VMware Tanzu with the Azure Spring Apps Enterprise plan.
-[VMware Spring Cloud Gateway](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/https://docsupdatetracker.net/index.html) is a commercial VMware Tanzu component based on the open-source Spring Cloud Gateway project. Spring Cloud Gateway for Tanzu handles the cross-cutting concerns for API development teams, such as single sign-on (SSO), access control, rate-limiting, resiliency, security, and more. You can accelerate API delivery using modern cloud native patterns using your choice of programming language for API development.
+[VMware Spring Cloud Gateway](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/https://docsupdatetracker.net/index.html) is a commercial VMware Tanzu component based on the open-source Spring Cloud Gateway project. VMware Spring Cloud Gateway for Tanzu handles the cross-cutting concerns for API development teams, such as single sign-on (SSO), access control, rate limiting, resiliency, and security. You can accelerate API delivery by using modern cloud-native patterns in your choice of programming language for API development.
-A Spring Cloud Gateway instance routes traffic according to rules. Both *scale in/out* and *up/down* are supported to meet a dynamic traffic load.
+A VMware Spring Cloud Gateway instance routes traffic according to rules. It supports scaling *in/out* and *up/down* to meet a dynamic traffic load.
VMware Spring Cloud Gateway includes the following features: -- Dynamic routing configuration, independent of individual applications, that you can apply and change without recompiling.-- Commercial API route filters for transporting authorized JSON Web Token (JWT) claims to application services.-- Client certificate authorization.-- Rate-limiting approaches.-- Circuit breaker configuration.-- Support for accessing application services via HTTP Basic Authentication credentials.
+- Dynamic routing configuration, independent of individual applications, that you can apply and change without recompiling
+- Commercial API route filters for transporting authorized JSON Web Token (JWT) claims to application services
+- Client certificate authorization
+- Rate-limiting approaches
+- Circuit breaker configuration
+- Support for accessing application services via HTTP Basic Authentication credentials
-To integrate with API portal for VMware Tanzu, VMware Spring Cloud Gateway automatically generates OpenAPI version 3 documentation after any route configuration additions or changes. For more information, see [Use API portal for VMware Tanzu®](./how-to-use-enterprise-api-portal.md).
+To integrate with API portal for VMware Tanzu, VMware Spring Cloud Gateway automatically generates OpenAPI version 3 documentation after any additions or changes to route configuration. For more information, see [Use API portal for VMware Tanzu](./how-to-use-enterprise-api-portal.md).
## Prerequisites
To integrate with API portal for VMware Tanzu, VMware Spring Cloud Gateway autom
- Azure CLI version 2.0.67 or later. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
-## Enable/disable Spring Cloud Gateway after service creation
+## Enable or disable VMware Spring Cloud Gateway
-You can enable and disable Spring Cloud Gateway after service creation using the Azure portal or Azure CLI. Before disabling Spring Cloud Gateway, you're required to unassign its endpoint and remove all route configs.
+You can enable or disable VMware Spring Cloud Gateway after creation of the service instance by using the Azure portal or the Azure CLI. Before you disable VMware Spring Cloud Gateway, you must unassign its endpoint and remove all route configurations.
### [Azure portal](#tab/Azure-portal)
-Use the following steps to enable or disable Spring Cloud Gateway using the Azure portal:
+Use the following steps to enable or disable VMware Spring Cloud Gateway by using the Azure portal:
-1. Navigate to your service resource, and then select **Spring Cloud Gateway**.
+1. Go to your service resource, and then select **Spring Cloud Gateway**.
1. Select **Manage**.
-1. Select or unselect the **Enable Spring Cloud Gateway**, and then select **Save**.
-1. You can now view the state of Spring Cloud Gateway on the **Spring Cloud Gateway** page.
+1. Select or clear the **Enable Spring Cloud Gateway** checkbox, and then select **Save**.
+You can now view the state of the Spring Cloud Gateway on the **Spring Cloud Gateway** page.
+ ### [Azure CLI](#tab/Azure-CLI)
-Use the following Azure CLI commands to enable or disable Spring Cloud Gateway:
+Use the following Azure CLI commands to enable or disable VMware Spring Cloud Gateway:
```azurecli az spring spring-cloud-gateway create \
az spring spring-cloud-gateway delete \
-## Restart Spring Cloud Gateway
+## Restart VMware Spring Cloud Gateway
-After the restart action, gateway instances are rolling restarted.
+After you complete the restart action, VMware Spring Cloud Gateway instances are restarted on a rolling basis.
### [Azure portal](#tab/Azure-portal)
-Use the following steps to restart Spring Cloud Gateway using the Azure portal:
+Use the following steps to restart VMware Spring Cloud Gateway by using the Azure portal:
-1. Navigate to your service resource, and then select **Spring Cloud Gateway**.
+1. Go to your service resource, and then select **Spring Cloud Gateway**.
1. Select **Restart**. 1. Select **OK** to confirm the restart. ### [Azure CLI](#tab/Azure-CLI)
az spring spring-cloud-gateway restart \
-## Configure Spring Cloud Gateway
+## Assign a public endpoint to VMware Spring Cloud Gateway
-This section describes how to assign a public endpoint to Spring Cloud Gateway and configure its properties.
+This section describes how to assign a public endpoint to VMware Spring Cloud Gateway and configure its properties.
#### [Azure portal](#tab/Azure-portal) To assign an endpoint in the Azure portal, use the following steps: 1. Open your Azure Spring Apps instance.
-1. Select **Spring Cloud Gateway** in the navigation pane, and then select **Overview**.
+1. Select **Spring Cloud Gateway** on the navigation pane, and then select **Overview**.
1. Set **Assign endpoint** to **Yes**. After a few minutes, **URL** shows the configured endpoint URL. Save the URL to use later. #### [Azure CLI](#tab/Azure-CLI)
-Use the following command to assign the endpoint.
+Use the following command to assign the endpoint:
```azurecli az spring gateway update \
az spring gateway update \
## Configure VMware Spring Cloud Gateway metadata
-You can configure VMware Spring Cloud Gateway metadata, which automatically generates OpenAPI version 3 documentation, to display route groups in API portal for VMware Tanzu. For more information, see [Use API portal for VMware Tanzu](./how-to-use-enterprise-api-portal.md).
+VMware Spring Cloud Gateway metadata automatically generates OpenAPI version 3 documentation. You can configure VMware Spring Cloud Gateway metadata to display route groups in API portal for VMware Tanzu. For more information, see [Use API portal for VMware Tanzu](./how-to-use-enterprise-api-portal.md).
The following table describes the available metadata options:
-| Property | Description |
-||--|
-| title | A title that describes the context of the APIs available on the Gateway instance. The default value is `Spring Cloud Gateway for K8S`. |
-| description | A detailed description of the APIs available on the Gateway instance. The default value is `Generated OpenAPI 3 document that describes the API routes configured for '[Gateway instance name]' Spring Cloud Gateway instance deployed under '[namespace]' namespace.*.` |
-| documentation | The location of API documentation that's available on the Gateway instance. |
-| version | The version of APIs available on this Gateway instance. The default value is `unspecified`. |
-| serverUrl | The base URL to access APIs on the Gateway instance. |
-
-> [!NOTE]
-> The `serverUrl` property is mandatory if you want to integrate with [API portal](./how-to-use-enterprise-api-portal.md).
+| Property | Description |
+|--|-|
+| `title` | A title that describes the context of the APIs available on the VMware Spring Cloud Gateway instance. The default value is `Spring Cloud Gateway for K8S`. |
+| `description` | A detailed description of the APIs available on the VMware Spring Cloud Gateway instance. The default value is `Generated OpenAPI 3 document that describes the API routes configured for '[Gateway instance name]' Spring Cloud Gateway instance deployed under '[namespace]' namespace.*.` |
+| `documentation` | The location of API documentation that's available on the VMware Spring Cloud Gateway instance. |
+| `version` | The version of APIs available on this VMware Spring Cloud Gateway instance. The default value is `unspecified`. |
+| `serverUrl` | The base URL to access APIs on the VMware Spring Cloud Gateway instance. This property is mandatory if you want to integrate with the [API portal](./how-to-use-enterprise-api-portal.md). |
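
As a hedged sketch, these metadata properties map to parameters on `az spring gateway update`. The parameter names are assumed from the current CLI extension, and the resource names and values are placeholders:

```azurecli
# A sketch: set gateway API metadata so the generated OpenAPI document and
# API portal display meaningful information. Values are placeholders.
az spring gateway update \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --api-title "Example API gateway" \
    --api-description "Routes for the example microservices" \
    --api-version v1 \
    --server-url https://<gateway-endpoint-url>
```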
-You can use the Azure portal and the Azure CLI to edit metadata properties.
+You can use the Azure portal or the Azure CLI to edit metadata properties.
#### [Azure portal](#tab/Azure-portal) To edit metadata in the Azure portal, use the following steps: 1. Open your Azure Spring Apps instance.
-1. Select **Spring Cloud Gateway** in the navigation pane, and then select **Configuration**.
+1. Select **Spring Cloud Gateway** on the navigation pane, and then select **Configuration**.
1. Specify values for the properties listed for **API**. 1. Select **Save**. #### [Azure CLI](#tab/Azure-CLI)
-Use the following command to configure VMware Spring Cloud Gateway metadata properties. You need the endpoint URL obtained from the [Configure Spring Cloud Gateway](#configure-spring-cloud-gateway) section.
+Use the following command to configure metadata properties for VMware Spring Cloud Gateway. You need the endpoint URL that you obtained when you completed the [Assign a public endpoint to VMware Spring Cloud Gateway](#assign-a-public-endpoint-to-vmware-spring-cloud-gateway) section.
```azurecli az spring gateway update \
az spring gateway update \
-## Configure single sign-on (SSO)
+## Configure single sign-on
-VMware Spring Cloud Gateway supports authentication and authorization using single sign-on (SSO) with an OpenID identity provider, which supports the OpenID Connect Discovery protocol.
+VMware Spring Cloud Gateway supports authentication and authorization through single sign-on (SSO) with an OpenID identity provider. The provider supports the OpenID Connect Discovery protocol. The following table describes the SSO properties:
-| Property | Required? | Description |
-|-|--|-|
-| `issuerUri` | Yes | The URI that is asserted as its Issuer Identifier. For example, if the `issuer-uri` is `https://example.com`, then an OpenID Provider Configuration Request is made to `https://example.com/.well-known/openid-configuration`. The result is expected to be an OpenID Provider Configuration Response. |
-| `clientId` | Yes | The OpenID Connect client ID provided by your identity provider. |
-| `clientSecret` | Yes | The OpenID Connect client secret provided by your identity provider. |
-| `scope` | Yes | A list of scopes to include in JWT identity tokens. This list should be based on the scopes allowed by your identity provider. |
+| Property | Required? | Description |
+|-|--|-|
+| `issuerUri` | Yes | The URI that's asserted as its issuer identifier. For example, if `issuer-uri` is `https://example.com`, an OpenID Provider Configuration Request is made to `https://example.com/.well-known/openid-configuration`. The result is expected to be an OpenID Provider Configuration Response. |
+| `clientId` | Yes | The OpenID Connect client ID from your identity provider. |
+| `clientSecret` | Yes | The OpenID Connect client secret from your identity provider. |
+| `scope` | Yes | A list of scopes to include in JWT identity tokens. This list should be based on the scopes that your identity provider allows. |
-To set up SSO with Azure AD, see [How to set up single sign-on with Azure Active Directory for Spring Cloud Gateway and API portal](./how-to-set-up-sso-with-azure-ad.md).
+To set up SSO with Azure Active Directory, see [Set up single sign-on using Azure Active Directory for Spring Cloud Gateway and API Portal](./how-to-set-up-sso-with-azure-ad.md).
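
As a sketch of the CLI form, the SSO properties in the preceding table map to parameters on `az spring gateway update`. The values are placeholders; confirm the parameter names against your CLI version:

```azurecli
# A sketch: configure SSO for the gateway with values from your OpenID identity provider.
# All values are placeholders.
az spring gateway update \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --client-id <client-id> \
    --client-secret <client-secret> \
    --issuer-uri https://<your-identity-provider>/<tenant>/v2.0 \
    --scope "openid,profile,email"
```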
-You can use the Azure portal and the Azure CLI to edit SSO properties.
+You can use the Azure portal or the Azure CLI to edit SSO properties.
#### [Azure portal](#tab/Azure-portal) To edit SSO properties in the Azure portal, use the following steps: 1. Open your Azure Spring Apps instance.
-1. Select **Spring Cloud Gateway** in the navigation pane, and then select **Configuration**.
+1. Select **Spring Cloud Gateway** on the navigation pane, and then select **Configuration**.
1. Specify values for the properties listed for **SSO**. 1. Select **Save**. #### [Azure CLI](#tab/Azure-CLI)
-Use the following command to configure SSO properties for VMware Spring Cloud Gateway.
+Use the following command to configure SSO properties for VMware Spring Cloud Gateway:
```azurecli az spring gateway update \
az spring gateway update \
-> [!NOTE]
-> VMware Spring Cloud Gateway supports only the authorization servers that support OpenID Connect Discovery protocol. Also, be sure to configure the external authorization server to allow redirects back to the gateway. Refer to your authorization server's documentation and add `https://<gateway-external-url>/login/oauth2/code/sso` to the list of allowed redirect URIs.
->
-> If you configure the wrong SSO property, such as the wrong password, you should remove the entire SSO property and re-add the correct configuration.
->
-> After configuring SSO, remember to set `ssoEnabled: true` for the Spring Cloud Gateway routes.
+VMware Spring Cloud Gateway supports only the authorization servers that support the OpenID Connect Discovery protocol. Also, be sure to configure the external authorization server to allow redirects back to the gateway. Refer to your authorization server's documentation and add `https://<gateway-external-url>/login/oauth2/code/sso` to the list of allowed redirect URIs.
-## Configure single sign-on (SSO) logout
+If you configure the wrong SSO property, such as the wrong password, you should remove the entire SSO property and then add the correct configuration.
+
+After you configure SSO, remember to set `ssoEnabled: true` for the VMware Spring Cloud Gateway routes.
+
+## Configure SSO logout
VMware Spring Cloud Gateway service instances provide a default API endpoint to log out of the current SSO session. The path to this endpoint is `/scg-logout`. The logout results in one of the following outcomes, depending on how you call the logout endpoint: -- Logout of session and redirect to the identity provider (IdP) logout.-- Logout the service instance session.
+- Log out of the session and redirect to the identity provider (IdP) logout.
+- Log out of the service instance session.
-### Logout of IdP and SSO session
+### Log out of the IdP and SSO session
-If you send a `GET` request to the `/scg-logout` endpoint, then the endpoint sends a `302` redirect response to the IdP logout URL. To get the endpoint to return the user back to a path on the gateway service instance, add a redirect parameter to the `GET` request with the `/scg-logout` endpoint. For example, `${server-url}/scg-logout?redirect=/home`.
+If you send a `GET` request to the `/scg-logout` endpoint, the endpoint sends a `302` redirect response to the IdP logout URL. To get the endpoint to return the user to a path on the gateway service instance, add a redirect parameter to the `GET` request with the `/scg-logout` endpoint. For example, you can use `${server-url}/scg-logout?redirect=/home`.
-The following steps describe an example of how to implement the function in your micro
+The value of the redirect parameter must be a valid path on the VMware Spring Cloud Gateway service instance. You can't redirect to an external URL.
-1. Get a route config to route the logout request to your application. For example, see the Animal Rescue UI pages route config in the [animal-rescue](https://github.com/Azure-Samples/animal-rescue/blob/0e343a27f44cc4a4bfbf699280476b0517854d7b/frontend/azure/api-route-config.json#L32) repository on GitHub.
+The following steps describe an example of how to implement the function in your micro
-1. Add whatever logout logic you need to the application. At the end, you need to a `GET` request to the gateway's `/scg-logout` endpoint as shown in the `return` value for the `getActionButton` method in the [animal-rescue](https://github.com/Azure-Samples/animal-rescue/blob/0e343a27f44cc4a4bfbf699280476b0517854d7b/frontend/src/App.js#L84) repository.
+1. Get a route configuration to route the logout request to your application. For an example, see the route configuration in the [animal-rescue](https://github.com/Azure-Samples/animal-rescue/blob/0e343a27f44cc4a4bfbf699280476b0517854d7b/frontend/azure/api-route-config.json#L32) repository on GitHub.
-> [!NOTE]
-> The value of the redirect parameter must be a valid path on the gateway service instance. You can't redirect to an external URL.
+1. Add whatever logout logic you need to the application. At the end, you need a `GET` request to the gateway's `/scg-logout` endpoint, as shown in the `return` value for the `getActionButton` method in the [animal-rescue](https://github.com/Azure-Samples/animal-rescue/blob/0e343a27f44cc4a4bfbf699280476b0517854d7b/frontend/src/App.js#L84) repository.
-### Log out just the SSO session
+### Log out of just the SSO session
-If you send the `GET` request to the `/scg-logout` endpoint using a `XMLHttpRequest` (XHR), then the `302` redirect could be swallowed and not handled in the response handler. In this case, the user would only be logged out of the SSO session on the gateway service instance and would still have a valid IdP session. The behavior typically seen is that if the user attempts to log in again, they're automatically sent back to the gateway as authenticated from IdP.
+If you send the `GET` request to the `/scg-logout` endpoint by using `XMLHttpRequest`, the `302` redirect could be swallowed and not handled in the response handler. In this case, the user would only be logged out of the SSO session on the VMware Spring Cloud Gateway service instance. The user would still have a valid IdP session. Typically, if the user tries to log in again, they're automatically sent back to the gateway as authenticated from IdP.
You need to have a route configuration to route the logout request to your application, as shown in the following example. This code logs the user out of only the gateway's SSO session.
req.open("GET", "/scg-logout");
req.send(); ```
-## Configure cross-origin resource sharing (CORS)
+## Configure cross-origin resource sharing
-Cross-origin resource sharing (CORS) allows restricted resources on a web page to be requested from another domain outside the domain from which the first resource was served. The available CORS configuration options are described in the following table.
+Cross-origin resource sharing (CORS) allows restricted resources on a webpage to be requested from another domain outside the domain from which the first resource was served. The following table describes the available CORS configuration options.
-| Property | Description |
-|-|-|
-| `allowedOrigins` | Allowed origins to make cross-site requests. |
-| `allowedOriginPatterns` | Allowed origin patterns to make cross-site requests. |
-| `allowedMethods` | Allowed HTTP methods on cross-site requests. |
-| `allowedHeaders` | Allowed headers in cross-site request. |
-| `maxAge` | How long, in seconds, clients cache the response from a preflight request. |
-| `allowCredentials` | Whether user credentials are supported on cross-site requests. |
-| `exposedHeaders` | HTTP response headers to expose for cross-site requests. |
+| Property | Description |
+|-||
+| `allowedOrigins` | Allowed origins to make cross-site requests |
+| `allowedOriginPatterns` | Allowed origin patterns to make cross-site requests |
+| `allowedMethods` | Allowed HTTP methods on cross-site requests |
+| `allowedHeaders` | Allowed headers in cross-site requests |
+| `maxAge` | How long, in seconds, clients cache the response from a preflight request |
+| `allowCredentials` | Whether user credentials are supported on cross-site requests |
+| `exposedHeaders` | HTTP response headers to expose for cross-site requests |
-> [!NOTE]
-> Be sure you have the correct CORS configuration if you want to integrate with API portal. For more information, see the [Configure Spring Cloud Gateway](#configure-spring-cloud-gateway) section.
+Be sure that you have the correct CORS configuration if you want to integrate with the API portal. For more information, see the [Assign a public endpoint to VMware Spring Cloud Gateway](#assign-a-public-endpoint-to-vmware-spring-cloud-gateway) section.
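
As an illustration, the CORS options can be set through `az spring gateway update`. The following sketch uses placeholder values and covers only a subset of the options listed above:

```azurecli
# A sketch: allow a single web origin to call APIs through the gateway
# with common methods and headers. Values are placeholders.
az spring gateway update \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --allowed-origins "https://www.example.com" \
    --allowed-methods "GET,PUT,DELETE" \
    --allowed-headers "*"
```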
## Use service scaling
-You can customize resource allocation for Spring Cloud Gateway instances, including vCpu, memory, and instance count.
+You can customize resource allocation for VMware Spring Cloud Gateway instances, including vCPU, memory, and instance count.
-> [!NOTE]
-> For high availability, a single replica is not recommended.
+For high availability, we don't recommend using a single replica.
The following table describes the default resource usage. | Component name | Instance count | vCPU per instance | Memory per instance | |-|-|-||
-| VMware Spring Cloud Gateway | 2 | 1 core | 2Gi |
-| VMware Spring Cloud Gateway operator | 2 | 1 core | 2Gi |
+| VMware Spring Cloud Gateway | 2 | 1 core | 2 GiB |
+| VMware Spring Cloud Gateway operator | 2 | 1 core | 2 GiB |
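
For example, you can adjust these allocations with `az spring gateway update`. The following sketch uses placeholder names and illustrative values:

```azurecli
# A sketch: scale the gateway up and out. Values are illustrative placeholders;
# keep at least two instances for high availability.
az spring gateway update \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --cpu 2 \
    --memory 4Gi \
    --instance-count 3
```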
-## Configure TLS between gateway and applications
+## Configure TLS between the gateway and applications
-To enhance security and protect sensitive information from interception by unauthorized parties, you can enable Transport Layer Security (TLS) between Spring Cloud Gateway and your applications. This section explains how to configure TLS between a gateway and applications.
+To enhance security and help protect sensitive information from interception by unauthorized parties, you can enable Transport Layer Security (TLS) between VMware Spring Cloud Gateway and your applications.
-Before configuring TLS, you need to have a TLS-enabled application and a TLS certificate. To prepare a TLS certificate, generate a certificate from a trusted certificate authority (CA). The certificate verifies the identity of the server and establishes a secure connection.
+Before you configure TLS, you need to have a TLS-enabled application and a TLS certificate. To prepare a TLS certificate, generate a certificate from a trusted certificate authority (CA). The certificate verifies the identity of the server and establishes a secure connection.
After you have a TLS-enabled application running in Azure Spring Apps, upload the certificate to Azure Spring Apps. For more information, see the [Import a certificate](how-to-use-tls-certificate.md#import-a-certificate) section of [Use TLS/SSL certificates in your application in Azure Spring Apps](how-to-use-tls-certificate.md).
-With the certificate updated to Azure Spring Apps, you can now configure the TLS certificate for the gateway and enable certificate verification. You can configure the certification in the Azure portal or by using the Azure CLI.
+With the certificate updated to Azure Spring Apps, you can configure the TLS certificate for the gateway and enable certificate verification. You can configure the certificate in the Azure portal or by using the Azure CLI.
#### [Azure portal](#tab/Azure-portal) Use the following steps to configure the certificate in the Azure portal:
-1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** in the navigation pane.
+1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** on the navigation pane.
1. On the **Spring Cloud Gateway** page, select **Certificate management**. 1. Select **Enable cert verification**. 1. Select the TLS certificate in **Certificates**.
Updating the configuration can take a few minutes. You should get a notification
#### [Azure CLI](#tab/Azure-CLI)
-Use the following command to enable or disable certificate verification using the Azure CLI. Be sure to replace the *`<value>`* placeholder with *true* to enable or *false* to disable verification.
+Use the following command to enable or disable certificate verification by using the Azure CLI. Replace the `<value>` placeholder with `true` to enable or `false` to disable verification.
```azurecli az spring gateway update \
az spring gateway update \
### Prepare the route configuration
-You must specify the protocol as HTTPS in the route configuration. The following JSON object instructs the gateway to use the HTTPS protocol for all traffic between the gateway and the app.
+You must specify the protocol as HTTPS in the route configuration. The following JSON object instructs VMware Spring Cloud Gateway to use the HTTPS protocol for all traffic between the gateway and the app.
-1. Create a file named *test-tls-route.json* with the following content.
+1. Create a file named *test-tls-route.json* with the following content:
```json {
You can now test whether the application is TLS enabled with the endpoint of the
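
To apply a route configuration file such as *test-tls-route.json* to the gateway, one possible approach is the `az spring gateway route-config` command, sketched below with placeholder names. Confirm the exact parameters against your CLI version:

```azurecli
# A sketch: create a gateway route configuration from a JSON file and bind it to an app.
# Names are placeholders.
az spring gateway route-config create \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --name test-tls-route \
    --app-name <app-name> \
    --routes-file test-tls-route.json
```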
### Rotate certificates
-As certificates expire, you need to rotate certificates in Spring Cloud Gateway by using the following steps:
+As certificates expire, you need to rotate certificates in VMware Spring Cloud Gateway by using the following steps:
1. Generate new certificates from a trusted CA. 1. Import the certificates into Azure Spring Apps. For more information, see the [Import a certificate](how-to-use-tls-certificate.md#import-a-certificate) section of [Use TLS/SSL certificates in your application in Azure Spring Apps](how-to-use-tls-certificate.md).
-1. Synchronize the certificates, using the Azure portal or the Azure CLI.
+1. Synchronize the certificates by using the Azure portal or the Azure CLI.
-The gateway restarts accordingly to ensure that the gateway uses the new certificate for all connections.
+VMware Spring Cloud Gateway restarts to ensure that the gateway uses the new certificate for all connections.
#### [Azure portal](#tab/Azure-portal)
-Use the following steps to synchronize certificates.
+Use the following steps to synchronize certificates:
-1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** in the navigation pane.
+1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** on the navigation pane.
1. On the **Spring Cloud Gateway** page, select **Restart**, and then confirm the operation. #### [Azure CLI](#tab/Azure-CLI)
-Use the following restart command to synchronize a certificate for Spring Cloud Gateway.
+Use the following restart command to synchronize a certificate for VMware Spring Cloud Gateway:
```azurecli az spring gateway restart \
az spring gateway restart \
-### Set up autoscale settings for VMware Spring Cloud Gateway in Azure CLI
+### Set up autoscale settings
-You can set autoscale modes using the Azure CLI. The following commands create an autoscale setting and an autoscale rule.
+You can set autoscale modes for VMware Spring Cloud Gateway by using the Azure CLI.
-- Use the following command to create an autoscale setting:
+Use the following command to create an autoscale setting:
- ```azurecli
- az monitor autoscale create \
- --resource-group <resource-group-name> \
- --name <autoscale-setting-name> \
- --resource /subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.AppPlatform/Spring/<service-instance-name>/gateways/default \
- --min-count 1 \
- --max-count 5 \
- --count 1
- ```
+```azurecli
+az monitor autoscale create \
+ --resource-group <resource-group-name> \
+ --name <autoscale-setting-name> \
+ --resource /subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.AppPlatform/Spring/<service-instance-name>/gateways/default \
+ --min-count 1 \
+ --max-count 5 \
+ --count 1
+```
-- Use the following command to create an autoscale rule:
+Use the following command to create an autoscale rule:
- ```azurecli
- az monitor autoscale rule create \
- --resource-group <resource-group-name> \
- --autoscale-name <autoscale-setting-name> \
- --scale out 1 \
- --cooldown 1 \
- --condition "GatewayHttpServerRequestsSecondsCount > 100 avg 1m"
- ```
+```azurecli
+az monitor autoscale rule create \
+ --resource-group <resource-group-name> \
+ --autoscale-name <autoscale-setting-name> \
+ --scale out 1 \
+ --cooldown 1 \
+ --condition "GatewayHttpServerRequestsSecondsCount > 100 avg 1m"
+```
For information on the available metrics, see the [User metrics options](./concept-metrics.md#user-metrics-options) section of [Metrics for Azure Spring Apps](./concept-metrics.md).
For information on the available metrics, see the [User metrics options](./conce
## Configure environment variables
-The Azure Spring Apps service manages and tunes Spring Cloud Gateway. Except for the use cases that configure application performance monitoring and the log level, you don't normally need to configure it with environment variables. But if you do have requirements that you can't fulfill by other configurations described in this article, you can try to configure the environment variables shown in the [Common application properties](https://cloud.spring.io/spring-cloud-gateway/reference/html/appendix.html#common-application-properties) list. Be sure to verify your configuration in your test environment before applying it to your production environment.
+The Azure Spring Apps service manages and tunes VMware Spring Cloud Gateway. Except for the use cases that configure application performance monitoring (APM) and the log level, you don't normally need to configure VMware Spring Cloud Gateway with environment variables.
+
+If you have requirements that you can't fulfill by other configurations described in this article, you can try to configure the environment variables shown in the [Common application properties](https://cloud.spring.io/spring-cloud-gateway/reference/html/appendix.html#common-application-properties) list. Be sure to verify your configuration in your test environment before applying it to your production environment.
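
As a sketch, non-sensitive settings go in `--properties` and sensitive ones in `--secrets`. The property shown here is an illustrative example from the common application properties list, and the names and values are placeholders:

```azurecli
# A sketch: set one plain environment property and one secret on the gateway.
# The property and secret shown are illustrative placeholders.
az spring gateway update \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --properties spring.cloud.gateway.httpclient.connect-timeout=2000 \
    --secrets MY_API_KEY=<secret-value>
```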
#### [Azure portal](#tab/Azure-portal) To configure environment variables in the Azure portal, use the following steps:
-1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** in the navigation pane, and then select **Configuration**.
-1. Fill in the key-value pairs for the environment variables in the **Properties** or **Secrets** sections. You can include variables with sensitive information in the **Secrets** section.
-1. When you've provided all the configurations, select **Save** to save your changes.
+1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** on the navigation pane, and then select **Configuration**.
+1. Fill in the key/value pairs for the environment variables in the **Properties** and **Secrets** sections. You can include variables with sensitive information in the **Secrets** section.
+1. Select **Save** to save your changes.
#### [Azure CLI](#tab/Azure-CLI)
-Use the following command to configure environment variables using the Azure CLI. You can include variables with sensitive information by using the `--secrets` parameter.
+Use the following command to configure environment variables by using the Azure CLI. You can include variables with sensitive information by using the `--secrets` parameter.
```azurecli az spring gateway update \
az spring gateway update \
-> [!NOTE]
-> When the environment variables are updated, Spring Cloud Gateway is restarted.
+After you update environment variables, VMware Spring Cloud Gateway restarts.
### Configure application performance monitoring
-To monitor Spring Cloud Gateway, you can configure application performance monitoring (APM). The following table lists the five types of APM Java agents provided by Spring Cloud Gateway and their required environment variables.
+To monitor VMware Spring Cloud Gateway, you can configure APM. The following table lists the five types of APM Java agents that VMware Spring Cloud Gateway provides, along with their required environment variables.
-| Java Agent | Required environment variables |
+| Java agent | Required environment variables |
|-|| | Application Insights | `APPLICATIONINSIGHTS_CONNECTION_STRING` | | Dynatrace | `DT_TENANT`<br>`DT_TENANTTOKEN`<br>`DT_CONNECTION_POINT` | | New Relic | `NEW_RELIC_LICENSE_KEY`<br>`NEW_RELIC_APP_NAME` | | AppDynamics | `APPDYNAMICS_AGENT_APPLICATION_NAME`<br>`APPDYNAMICS_AGENT_TIER_NAME`<br>`APPDYNAMICS_AGENT_NODE_NAME`<br> `APPDYNAMICS_AGENT_ACCOUNT_NAME`<br>`APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY`<br>`APPDYNAMICS_CONTROLLER_HOST_NAME`<br>`APPDYNAMICS_CONTROLLER_SSL_ENABLED`<br>`APPDYNAMICS_CONTROLLER_PORT` |
-| ElasticAPM | `ELASTIC_APM_SERVICE_NAME`<br>`ELASTIC_APM_APPLICATION_PACKAGES`<br>`ELASTIC_APM_SERVER_URL` |
+| Elastic APM | `ELASTIC_APM_SERVICE_NAME`<br>`ELASTIC_APM_APPLICATION_PACKAGES`<br>`ELASTIC_APM_SERVER_URL` |
For other supported environment variables, see the following sources: -- [Application Insights public document](../azure-monitor/app/app-insights-overview.md?tabs=net)-- [Dynatrace Environment Variables](https://www.dynatrace.com/support/help/setup-and-configuration/setup-on-cloud-platforms/microsoft-azure-services/azure-integrations/azure-spring#envvar)-- [New Relic Environment Variables](https://docs.newrelic.com/docs/apm/agents/java-agent/configuration/java-agent-configuration-config-file/#Environment_Variables)-- [AppDynamics Environment Variables](https://docs.appdynamics.com/21.11/en/application-monitoring/install-app-server-agents/java-agent/monitor-azure-spring-cloud-with-java-agent#MonitorAzureSpringCloudwithJavaAgent-ConfigureUsingtheEnvironmentVariablesorSystemProperties)-- [Elastic Environment Variables](https://www.elastic.co/guide/en/apm/agent/java/master/configuration.html).
+- [Application Insights overview](../azure-monitor/app/app-insights-overview.md?tabs=net)
+- [Dynatrace environment variables](https://www.dynatrace.com/support/help/setup-and-configuration/setup-on-cloud-platforms/microsoft-azure-services/azure-integrations/azure-spring#envvar)
+- [New Relic environment variables](https://docs.newrelic.com/docs/apm/agents/java-agent/configuration/java-agent-configuration-config-file/#Environment_Variables)
+- [AppDynamics environment variables](https://docs.appdynamics.com/21.11/en/application-monitoring/install-app-server-agents/java-agent/monitor-azure-spring-cloud-with-java-agent#MonitorAzureSpringCloudwithJavaAgent-ConfigureUsingtheEnvironmentVariablesorSystemProperties)
+- [Elastic environment variables](https://www.elastic.co/guide/en/apm/agent/java/master/configuration.html)
-#### Manage APM in Spring Cloud Gateway
+#### Manage APM in VMware Spring Cloud Gateway
-You can use the Azure portal or the Azure CLI to set up application performance monitoring (APM) in Spring Cloud Gateway. You can also specify the types of APM Java agents to use and the corresponding APM environment variables they support.
+You can use the Azure portal or the Azure CLI to set up APM in VMware Spring Cloud Gateway. You can also specify the types of APM Java agents to use and the corresponding APM environment variables that they support.
##### [Azure portal](#tab/Azure-portal)
-Use the following steps to set up APM using the Azure portal:
+Use the following steps to set up APM by using the Azure portal:
-1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** in the navigation page and then select **Configuration**.
+1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** on the navigation pane, and then select **Configuration**.
1. Choose the APM type in the **APM** list to monitor a gateway.
-1. Fill in the key-value pairs for the APM environment variables in the **Properties** or **Secrets** sections. You can put variables with sensitive information in **Secrets**.
-1. When you've provided all the configurations, select **Save** to save your changes.
+1. Fill in the key/value pairs for the APM environment variables in the **Properties** and **Secrets** sections. You can put variables with sensitive information in **Secrets**.
+1. Select **Save** to save your changes.
Updating the configuration can take a few minutes. You should get a notification when the configuration is complete. ##### [Azure CLI](#tab/Azure-CLI)
-Use the following command to set up APM using Azure CLI:
+Use the following command to set up APM by using the Azure CLI:
```azurecli az spring gateway update \
az spring gateway update \
--secrets <key=value> ```
-The allowed values for `--apm-types` are `ApplicationInsights`, `AppDynamics`, `Dynatrace`, `NewRelic`, and `ElasticAPM`. The following command shows the usage using Application Insights as an example.
+The allowed values for `--apm-types` are `ApplicationInsights`, `AppDynamics`, `Dynatrace`, `NewRelic`, and `ElasticAPM`. The following command shows the usage for Application Insights as an example:
```azurecli az spring gateway update \
az spring gateway update \
--properties APPLICATIONINSIGHTS_CONNECTION_STRING=<your-Application-Insights-connection-string> APPLICATIONINSIGHTS_SAMPLE_RATE=10 ```
-You can also put environment variables in the `--secrets` parameter instead of `--properties`, which makes the environment variable more secure in network transmission and data storage in the backend.
+You can also put environment variables in the `--secrets` parameter instead of `--properties`. Doing so makes the environment variables more secure in network transmission and data storage in the back end.
> [!NOTE]
-> Azure Spring Apps upgrades the APM agent and deployed apps with the same cadence to keep compatibility of agents between Spring Cloud Gateway and Spring apps.
+> Azure Spring Apps upgrades the APM agent and deployed apps with the same cadence to keep compatibility of agents between VMware Spring Cloud Gateway and Azure Spring Apps.
+>
+> By default, Azure Spring Apps prints the logs of the APM Java agent to `STDOUT`. These logs are included with the VMware Spring Cloud Gateway logs. You can check the version of the APM agent used in the logs. You can query these logs in Log Analytics to troubleshoot.
>
-> By default, Azure Spring Apps prints the logs of the APM Java agent to `STDOUT`. These logs are included with the Spring Cloud Gateway logs. You can check the version of the APM agent used in the logs. You can query these logs in Log Analytics to troubleshoot.
-> To make the APM agents work correctly, increase the CPU and memory of Spring Cloud Gateway.
+> To make the APM agents work correctly, increase the CPU and memory of VMware Spring Cloud Gateway.
### Configure log levels
-You can configure the log levels of Spring Cloud Gateway in the following ways to get more details or to reduce logs:
+You can configure the log levels of VMware Spring Cloud Gateway in the following ways to get more details or to reduce logs:
-- The default log level for Spring Cloud Gateway is `INFO`.-- You can set log levels to `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`, `OFF`.
+- You can set log levels to `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`, or `OFF`. The default log level for VMware Spring Cloud Gateway is `INFO`.
- You can turn off logs by setting log levels to `OFF`.-- When log levels are set to `WARN`, `ERROR`, `OFF`, you may be required to adjust it to `INFO` when requesting support from the Azure Spring Apps team. This change causes a restart of Spring Cloud Gateway.-- When log levels are set to `TRACE` or `DEBUG`, it may affect the performance of Spring Cloud Gateway. Try avoid these settings in your production environment.-- You can set log levels for the `root` logger or specific loggers like `io.pivotal.spring.cloud.gateway`.
+- When the log level is set to `WARN`, `ERROR`, or `OFF`, you might be required to adjust it to `INFO` when requesting support from the Azure Spring Apps team. This change causes a restart of VMware Spring Cloud Gateway.
+- When the log level is set to `TRACE` or `DEBUG`, it might affect the performance of VMware Spring Cloud Gateway. Try to avoid these settings in your production environment.
+- You can set log levels for the `root` logger or for specific loggers like `io.pivotal.spring.cloud.gateway`.
-The following loggers may contain valuable troubleshooting information at the `TRACE` and `DEBUG` levels:
+The following loggers might contain valuable troubleshooting information at the `TRACE` and `DEBUG` levels:
-| Logger | Description |
-|-||
-| `io.pivotal.spring.cloud.gateway` | Filters and predicates, including custom extensions. |
-| `org.springframework.cloud.gateway` | API gateway. |
-| `org.springframework.http.server.reactive` | HTTP server interactions. |
-| `org.springframework.web.reactive` | API gateway reactive flows. |
-| `org.springframework.boot.autoconfigure.web` | API gateway autoconfiguration. |
-| `org.springframework.security.web` | Authentication and Authorization information. |
-| `reactor.netty` | Reactor Netty. |
+| Logger | Description |
+|-|--|
+| `io.pivotal.spring.cloud.gateway` | Filters and predicates, including custom extensions |
+| `org.springframework.cloud.gateway` | API gateway |
+| `org.springframework.http.server.reactive` | HTTP server interactions |
+| `org.springframework.web.reactive` | API gateway reactive flows |
+| `org.springframework.boot.autoconfigure.web` | API gateway autoconfiguration |
+| `org.springframework.security.web` | Authentication and authorization information |
+| `reactor.netty` | Reactor Netty |
-To get environment variable keys, add the `logging.level.` prefix, and then set the log level by configuring environment `logging.level.{loggerName}={logLevel}`. Examples with Azure portal and Azure CLI:
+To get environment variable keys, add the `logging.level.` prefix, and then set the log level by configuring environment `logging.level.{loggerName}={logLevel}`. The following examples show the steps for the Azure portal and the Azure CLI.
#### [Azure portal](#tab/Azure-portal) To configure log levels in the Azure portal, use the following steps:
-1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** in the navigation pane, and then select **Configuration**.
-1. Fill in the key-value pairs for the log level environment variables in the **Properties** or **Secrets** sections. If the log level is sensitive information in your case, you can include it using the **Secrets** section.
-1. When you've provided all the configurations, select **Save** to save your changes.
+1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** on the navigation pane, and then select **Configuration**.
+1. Fill in the key/value pairs for the log levels' environment variables in the **Properties** and **Secrets** sections. If the log level is sensitive information in your case, you can include it by using the **Secrets** section.
+1. Select **Save** to save your changes.
#### [Azure CLI](#tab/Azure-CLI)
-For a general CLI command for specifying environment variables, see the [Configure environment variables](#configure-environment-variables) section. The following example shows you how to configure log levels using the Azure CLI:
+For a general CLI command for specifying environment variables, see the [Configure environment variables](#configure-environment-variables) section. The following example shows you how to configure log levels by using the Azure CLI:
```azurecli az spring gateway update \
If the log level is sensitive information in your case, you can include it by us
-## Configure addon configuration
+## Update add-on configuration
-The addon configuration feature enables you to customize certain properties of Spring Cloud Gateway using a JSON format string. The feature is useful when you need to configure properties that aren't exposed through the REST API.
+The add-on configuration feature enables you to customize certain properties of VMware Spring Cloud Gateway by using a JSON format string. The feature is useful when you need to configure properties that aren't exposed through the REST API.
-The addon configuration is a JSON object with key-value pairs representing the desired configuration. The following example shows the structure of the JSON format:
+The add-on configuration is a JSON object with key/value pairs that represent the desired configuration. The following example shows the structure of the JSON format:
```json {
The addon configuration is a JSON object with key-value pairs representing the d
} ```
-The following list shows the supported addon configurations for the addon key names and value types. This list is subject to change as we upgrade the Spring Cloud Gateway version.
+The following list shows the supported add-on configurations for the add-on key names and value types. This list is subject to change as we upgrade the VMware Spring Cloud Gateway version.
-- Single sign-on (SSO) configuration
+- SSO configuration:
- Key name: `sso` - Value type: Object - Properties:
- - `RolesAttributeName` (String): Specifies the name of the attribute that contains the roles associated with the single sign-on session.
- - `InactiveSessionExpirationInMinutes` (Integer): Specifies the expiration time in minutes for inactive single sign-on sessions. A value of *0* means never expire.
+ - `RolesAttributeName` (String): Specifies the name of the attribute that contains the roles associated with the SSO session.
+ - `InactiveSessionExpirationInMinutes` (Integer): Specifies the expiration time, in minutes, for inactive SSO sessions. A value of `0` means the session never expires.
- Example: ```json
The following list shows the supported addon configurations for the addon key na
} ``` -- Metadata configuration
+- Metadata configuration:
- Key name: `api` - Value type: Object - Properties
- - `groupId` (String): A unique identifier for the group of APIs available on the Gateway instance. The value can only contain lowercase letters and numbers.
+ - `groupId` (String): A unique identifier for the group of APIs available on the VMware Spring Cloud Gateway instance. The value can contain only lowercase letters and numbers.
- Example: ```json
The following list shows the supported addon configurations for the addon key na
} ```
-Use the following steps to update the addon configuration.
+Use the following steps to update the add-on configuration.
### [Azure portal](#tab/Azure-portal)
-1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** in the navigation pane, and then select **Configuration**.
+1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** on the navigation pane, and then select **Configuration**.
1. Specify the JSON value for **Addon Configs**. 1. Select **Save**. ### [Azure CLI](#tab/Azure-CLI)
-1. Prepare the addon configs JSON file *\<file-name-of-addon-configs-json\>.json* with the following content:
+1. Prepare the JSON file for add-on configurations (*\<file-name-of-addon-configs-json\>.json*) with the following content:
```json {
Use the following steps to update the addon configuration.
} ```
-1. Use the following command to update the addon configs for Spring Cloud Gateway:
+1. Use the following command to update the add-on configurations for VMware Spring Cloud Gateway:
```azurecli az spring gateway update \
Use the following steps to update the addon configuration.
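For reference, a complete version of these steps might look like the following sketch. It assumes the file is named *addon-configs.json*, that the `--addon-configs-file` argument is available in your `spring` CLI extension version, and that the resource names and values shown are placeholders.

```azurecli
# Sketch: write an add-on configuration file and apply it to the gateway.
# The sso and api key names match the supported add-on configurations listed earlier;
# the values, file name, and resource names are placeholders.
cat > addon-configs.json <<'EOF'
{
    "sso": {
        "RolesAttributeName": "roles",
        "InactiveSessionExpirationInMinutes": 30
    },
    "api": {
        "groupId": "demogroup"
    }
}
EOF

az spring gateway update \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --addon-configs-file addon-configs.json
```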
## Next steps -- [How to Use Spring Cloud Gateway](how-to-use-enterprise-spring-cloud-gateway.md)
+- [Use Spring Cloud Gateway](how-to-use-enterprise-spring-cloud-gateway.md)
spring-apps How To Enterprise Deploy App At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-app-at-scale.md
Spring Cloud Gateway for VMware Tanzu handles the cross-cutting concerns for API
The performance of the gateway is closely related to the number of routes. In general, we recommend that you don't exceed 500 routes. Exceeding that count can stress a gateway instance's performance, so the gateway might no longer handle some requests with reasonably low latency or without errors.
-Spring Cloud Gateway is able to handle a high volume of traffic. To support the traffic, you should consider increasing the memory requested for API gateway instances so that each pod can handle more requests per second. To configure autoscale rules for the gateway to perform best when demand changes, see the [Set up autoscale settings for VMware Spring Cloud Gateway](how-to-configure-enterprise-spring-cloud-gateway.md#set-up-autoscale-settings-for-vmware-spring-cloud-gateway-in-azure-cli) section in [Configure VMware Spring Cloud Gateway](how-to-configure-enterprise-spring-cloud-gateway.md).
+Spring Cloud Gateway is able to handle a high volume of traffic. To support the traffic, you should consider increasing the memory requested for API gateway instances so that each pod can handle more requests per second. To configure autoscale rules for the gateway to perform best when demand changes, see the [Set up autoscale settings for VMware Spring Cloud Gateway](how-to-configure-enterprise-spring-cloud-gateway.md#set-up-autoscale-settings) section in [Configure VMware Spring Cloud Gateway](how-to-configure-enterprise-spring-cloud-gateway.md).
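If you decide to scale the gateway manually instead, the following hedged sketch raises the CPU, memory, and instance count; the resource names are placeholders, and the values should be tuned to your traffic profile.

```azurecli
# Sketch: manually scale up VMware Spring Cloud Gateway.
# Resource names are placeholders; adjust the values to your workload.
az spring gateway update \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --cpu 2 \
    --memory 4Gi \
    --instance-count 3
```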
Spring Cloud Gateway supports rolling restarts to ensure zero downtime and no disruption. However, the current version of the gateway has a limitation: during a rolling restart, it might take longer to synchronize a large number of routes, which can cause incomplete route updates during the process. We're actively working on fixing this limitation and will provide an update through our documentation.
spring-apps How To Enterprise Marketplace Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-marketplace-offer.md
The following list shows the errors you might encounter when you create an Azure
- `Failed to purchase on Azure Marketplace due to signature verification on Marketplace legal agreement.`
- You haven't accepted the marketplace legal terms and privacy statements while provisioning the tier. Use the following command to accept the terms:
+ You haven't accepted the marketplace legal terms and privacy statements while provisioning the plan. Use the following command to accept the terms:
```azurecli az term accept \
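# For reference, the complete command looks like the following example; confirm that
# the publisher, product, and plan values match your offer before running it.
az provider register --namespace Microsoft.SaaS
az term accept \
    --publisher vmware-inc \
    --product azure-spring-cloud-vmware-tanzu-2 \
    --plan asa-ent-hr-mtr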
spring-apps How To Map Dns Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-map-dns-virtual-network.md
Repeat these steps as needed to add a DNS record for other applications.
## DNS mapping with a custom domain
-Using this approach, you only need to add a DNS record for each Azure Spring Apps instance, but you must configure the custom domain for each application. For a core understanding of this process, see [Tutorial: Map an existing custom domain to Azure Spring Apps](tutorial-custom-domain.md).
+Using this approach, you only need to add a DNS record for each Azure Spring Apps instance, but you must configure the custom domain for each application. For a core understanding of this process, see [Map an existing custom domain to Azure Spring Apps](how-to-custom-domain.md).
This example reuses the private DNS zone `private.azuremicroservices.io` to add a custom domain related DNS record. The private FQDN has the format `<app-name>.<service-name>.private.azuremicroservices.io`.
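As an illustration, and assuming a private DNS zone named `private.azuremicroservices.io` that's already linked to your virtual network, the following sketch adds an A record for the private FQDN and then binds the custom domain to the app. The record name, IP address, and resource names are placeholders.

```azurecli
# Sketch: add an A record to the private DNS zone, then bind the custom domain to the app.
# All names and the IP address are placeholders for illustration.
az network private-dns record-set a add-record \
    --resource-group <resource-group-name> \
    --zone-name private.azuremicroservices.io \
    --record-set-name <app-name>.<service-name> \
    --ipv4-address <service-runtime-ip-address>

az spring app custom-domain bind \
    --resource-group <resource-group-name> \
    --service <service-name> \
    --app <app-name> \
    --domain-name <app-name>.<service-name>.private.azuremicroservices.io
```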
When the custom domain is successfully mapped to the app, it appears in the cust
### Add TLS/SSL binding
-Before doing this step, make sure you've prepared your certificates and imported them into Azure Spring Apps. For more information, see [Tutorial: Map an existing custom domain to Azure Spring Apps](tutorial-custom-domain.md).
+Before doing this step, make sure you've prepared your certificates and imported them into Azure Spring Apps. For more information, see [Map an existing custom domain to Azure Spring Apps](how-to-custom-domain.md).
#### [Azure portal](#tab/Azure-portal)
spring-apps How To Troubleshoot Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-troubleshoot-enterprise-spring-cloud-gateway.md
You can now configure the alert rule details.
## Restart Gateway
-For some errors, a restart might help solve the issue. For more information, see the [Restart Spring Cloud Gateway](./how-to-configure-enterprise-spring-cloud-gateway.md#restart-spring-cloud-gateway) section of [Configure VMware Spring Cloud Gateway](./how-to-configure-enterprise-spring-cloud-gateway.md).
+For some errors, a restart might help solve the issue. For more information, see the [Restart Spring Cloud Gateway](./how-to-configure-enterprise-spring-cloud-gateway.md#restart-vmware-spring-cloud-gateway) section of [Configure VMware Spring Cloud Gateway](./how-to-configure-enterprise-spring-cloud-gateway.md).
## Next steps
spring-apps How To Use Accelerator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-accelerator.md
az spring application-accelerator customized-accelerator sync-cert \
You can enable App Accelerator under an existing Azure Spring Apps Enterprise plan instance using the Azure portal or Azure CLI.
-If a Dev tools public endpoint has already been exposed, you can enable App Accelerator, and then use <kbd>Ctrl</kbd>+<kbd>F5</kdb> to deactivate the browser cache to view it on the Dev Tools Portal.
- ### [Azure portal](#tab/Portal)
+If a Dev tools public endpoint has already been exposed, you can enable App Accelerator, and then use <kbd>Ctrl</kbd>+<kbd>F5</kbd> to deactivate the browser cache to view it on the Dev Tools Portal.
+ Use the following steps to enable App Accelerator under an existing Azure Spring Apps Enterprise plan instance using the Azure portal: 1. Navigate to your service resource, and then select **Developer Tools**.
spring-apps How To Use Application Live View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-application-live-view.md
az spring dev-tool create \
-## Use Application Live View in VS Code
+## Use Application Live View in VS Code
-You can access Application Live View directly in VS Code to monitor your apps in Azure Spring Apps Enterprise tier.
+You can access Application Live View directly in VS Code to monitor your apps in the Azure Spring Apps Enterprise plan.
### Prerequisites
If you try to open Application Live View for a service instance or an app that h
:::image type="content" source="media/how-to-use-application-live-view/visual-studio-code-not-enabled.png" alt-text="Screenshot of the error message showing Application Live View not enabled and public endpoint not accessible." lightbox="media/how-to-use-application-live-view/visual-studio-code-not-enabled.png":::
-To enable Application Live View and expose public endpoint, use either the Azure portal or the Azure CLI. For more information, see the [Manage Application Live View in existing Enterprise tier instances](#manage-application-live-view-in-existing-enterprise-plan-instances) section.
+To enable Application Live View and expose a public endpoint, use either the Azure portal or the Azure CLI. For more information, see the [Manage Application Live View in existing Enterprise plan instances](#manage-application-live-view-in-existing-enterprise-plan-instances) section.
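As a hedged sketch, and assuming the `--assign-endpoint` argument is available in your `spring` CLI extension version, the following command exposes the Dev Tools public endpoint so that Application Live View is reachable; the resource names are placeholders.

```azurecli
# Sketch: expose the Dev Tools public endpoint so Application Live View is reachable.
# Resource names are placeholders; argument names assume a recent spring CLI extension.
az spring dev-tool create \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --assign-endpoint
```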
## Next steps
spring-apps How To Use Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-enterprise-spring-cloud-gateway.md
The following table lists the route definitions. All the properties are optional
| title | A title to apply to methods in the generated OpenAPI documentation. | | description | A description to apply to methods in the generated OpenAPI documentation. | | uri | The full URI, which overrides the name of app that the requests route to. |
-| ssoEnabled | A value that indicates whether to enable SSO validation. See [Configure single sign-on](./how-to-configure-enterprise-spring-cloud-gateway.md#configure-single-sign-on-sso). |
+| ssoEnabled | A value that indicates whether to enable SSO validation. See [Configure single sign-on](./how-to-configure-enterprise-spring-cloud-gateway.md#configure-single-sign-on). |
| tokenRelay | Passes the currently authenticated user's identity token to the application. | | predicates | A list of predicates. See [Available Predicates](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.2/scg-k8s/GUID-configuring-routes.html#available-predicates). | | filters | A list of filters. See [Available Filters](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.2/scg-k8s/GUID-configuring-routes.html#available-filters). |
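To illustrate how these properties fit together, here's a hedged sketch that defines a single route in a JSON file and applies it with `az spring gateway route-config`. The route-config name, app name, file name, and predicate and filter values are placeholders, and the exact CLI arguments depend on your `spring` extension version.

```azurecli
# Sketch: define one route that uses several of the properties described above, then apply it.
# All names and values are placeholders for illustration.
cat > sample-routes.json <<'EOF'
[
    {
        "title": "Customer API",
        "description": "Routes customer requests to the customers app",
        "predicates": [ "Path=/api/customers/**" ],
        "filters": [ "StripPrefix=2" ],
        "ssoEnabled": true,
        "tokenRelay": true
    }
]
EOF

az spring gateway route-config create \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name customers-routes \
    --app-name <app-name> \
    --routes-file sample-routes.json
```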
spring-apps Quickstart Deploy Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-web-app.md
This article provides the following options for deploying to Azure Spring Apps:
::: zone pivot="sc-enterprise" -- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [View Azure Spring Apps Enterprise tier offering in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
::: zone-end
spring-apps Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart.md
Previously updated : 06/21/2023 Last updated : 08/09/2023 zone_pivot_groups: spring-apps-plan-selection
zone_pivot_groups: spring-apps-plan-selection
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ✔️ Enterprise
- This article explains how to deploy a small application to run on Azure Spring Apps. The application code used in this tutorial is a simple app. When you've completed this example, the application is accessible online, and you can manage it through the Azure portal.
-This quickstart explains how to:
-> [!div class="checklist"]
-
-> - Generate a basic Spring project.
-> - Provision a service instance.
-> - Build and deploy an app with a public endpoint.
-> - Clean up the resources.
-
-At the end of this quickstart, you have a working Spring app running on Azure Spring Apps.
-
-## Prerequisites
--- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. --- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). ::: zone-end --- [Apache Maven](https://maven.apache.org/download.cgi)-- [Azure CLI](/cli/azure/install-azure-cli). Install the Azure CLI extension for Azure Spring Apps Standard consumption and dedicated plan by using the following command:-
- ```azurecli-interactive
- az extension remove --name spring && \
- az extension add --name spring
- ```
--- Use the following commands to install the Azure Container Apps extension for the Azure CLI and register these namespaces: `Microsoft.App`, `Microsoft.OperationalInsights`, and `Microsoft.AppPlatform`:-
- ```azurecli-interactive
- az extension add --name containerapp --upgrade
- az provider register --namespace Microsoft.App
- az provider register --namespace Microsoft.OperationalInsights
- az provider register --namespace Microsoft.AppPlatform
- ```
-
-## Provision an instance of Azure Spring Apps
-
-Use the following steps to create an Azure Spring Apps service instance.
-
-1. Select **Open Cloudshell** and sign in to your Azure account in [Azure Cloud Shell](../cloud-shell/overview.md).
-
- ```azurecli-interactive
- az account show
- ```
-
-1. Azure Cloud Shell workspaces are temporary. When first started, the shell prompts you to associate an Azure Storage instance with your subscription to persist files across sessions. For more information, see [Introduction to Azure Storage](../storage/common/storage-introduction.md).
-
- :::image type="content" source="media/quickstart/azure-storage-subscription.png" alt-text="Screenshot of an Azure portal alert that no storage is mounted in the Azure Cloud Shell." lightbox="media/quickstart/azure-storage-subscription.png":::
-
-1. After you sign in successfully, use the following command to display a list of your subscriptions:
-
- ```azurecli-interactive
- az account list --output table
- ```
-
-1. Use the following command to set your default subscription:
-
- ```azurecli-interactive
- az account set --subscription <subscription-ID>
- ```
-
-1. Use the following commands to define variables for this quickstart with the names of your resources and desired settings:
-
- ```azurecli-interactive
- export LOCATION="<region>"
- export RESOURCE_GROUP="<resource-group-name>"
- export MANAGED_ENVIRONMENT="<Azure-Container-Apps-environment-name>"
- export SERVICE_NAME="<Azure-Spring-Apps-instance-name>"
- export APP_NAME="<Spring-app-name>"
- ```
-
-1. Use the following command to create a resource group:
-
- ```azurecli-interactive
- az group create \
- --resource-group ${RESOURCE_GROUP} \
- --location ${LOCATION}
- ```
-
-1. An Azure Container Apps environment creates a secure boundary around a group of applications. Apps deployed to the same environment are deployed in the same virtual network and write logs to the same log analytics workspace. For more information, see [Log Analytics workspace overview](../azure-monitor/logs/log-analytics-workspace-overview.md). Use the following command to create the environment:
-
- ```azurecli-interactive
- az containerapp env create \
- --resource-group ${RESOURCE_GROUP} \
- --name ${MANAGED_ENVIRONMENT} \
- --location ${LOCATION} \
- --enable-workload-profiles
- ```
-
-1. Use the following command to create a variable to store the environment resource ID:
-
- ```azurecli-interactive
- export MANAGED_ENV_RESOURCE_ID=$(az containerapp env show \
- --resource-group ${RESOURCE_GROUP} \
- --name ${MANAGED_ENVIRONMENT} \
- --query id \
- --output tsv)
- ```
-
-1. Use the following command to create an Azure Spring Apps service instance. An instance of the Azure Spring Apps Standard consumption and dedicated plan is built on top of the Azure Container Apps environment. Create your Azure Spring Apps instance by specifying the resource ID of the environment you created.
-
- ```azurecli-interactive
- az spring create \
- --resource-group ${RESOURCE_GROUP} \
- --name ${SERVICE_NAME} \
- --managed-environment ${MANAGED_ENV_RESOURCE_ID} \
- --sku standardGen2 \
- --location ${LOCATION}
- ```
-
-## Create an app in your Azure Spring Apps instance
-
-An *App* is an abstraction of one business app. For more information, see [App and deployment in Azure Spring Apps](concept-understand-app-and-deployment.md). Apps run in an Azure Spring Apps service instance, as shown in the following diagram.
--
-You can create an app in either standard consumption or dedicated workload profiles.
-
-> [!IMPORTANT]
-> The consumption workload profile has a pay-as-you-go billing model with no starting cost. You're billed for the dedicated workload profile based on the provisioned resources. For more information, see [Workload profiles in Consumption + Dedicated plan structure environments in Azure Container Apps (preview)](../container-apps/workload-profiles-overview.md) and [Azure Spring Apps pricing](https://azure.microsoft.com/pricing/details/spring-apps/).
-
-### Create an app with consumption workload profile
-
-Use the following command to specify the app name on Azure Spring Apps and to allocate required resources:
-
-```azurecli-interactive
-az spring app create \
- --resource-group ${RESOURCE_GROUP} \
- --service ${SERVICE_NAME} \
- --name ${APP_NAME} \
- --cpu 1 \
- --memory 2Gi \
- --min-replicas 2 \
- --max-replicas 2 \
- --assign-endpoint true
-```
-
-Azure Spring Apps creates an empty welcome application and provides its URL in the field named `properties.url`.
--
-### Create an app with dedicated workload profile
-
-Dedicated workload profiles support running apps with customized hardware and increased cost predictability.
-
-Use the following command to create a dedicated workload profile:
-
-```azurecli-interactive
-az containerapp env workload-profile set \
- --resource-group ${RESOURCE_GROUP} \
- --name ${MANAGED_ENVIRONMENT} \
- --workload-profile-name my-wlp \
- --workload-profile-type D4 \
- --min-nodes 1 \
- --max-nodes 2
-```
-
-Then, use the following command to create an app with the dedicated workload profile:
-
-```azurecli-interactive
-az spring app create \
- --resource-group ${RESOURCE_GROUP} \
- --service ${SERVICE_NAME} \
- --name ${APP_NAME} \
- --cpu 1 \
- --memory 2Gi \
- --min-replicas 2 \
- --max-replicas 2 \
- --assign-endpoint true \
- --workload-profile my-wlp
-```
-
-## Clone and build the Spring Boot sample project
-
-Use the following steps to clone the Spring Boot sample project.
-
-1. Use the following command to clone the [Spring Boot sample project](https://github.com/spring-guides/gs-spring-boot.git) from GitHub.
-
- ```azurecli-interactive
- git clone -b boot-2.7 https://github.com/spring-guides/gs-spring-boot.git
- ```
-
-1. Use the following command to move to the project folder:
-
- ```azurecli-interactive
- cd gs-spring-boot/complete
- ```
+## 1. Prerequisites
-1. Use the following [Maven](https://maven.apache.org/what-is-maven.html) command to build the project.
-
- ```azurecli-interactive
- mvn clean package -DskipTests
- ```
-
-## Deploy the local app to Azure Spring Apps
-
-Use the following command to deploy the *.jar* file for the app:
-
-```azurecli-interactive
-az spring app deploy \
- --resource-group ${RESOURCE_GROUP} \
- --service ${SERVICE_NAME} \
- --name ${APP_NAME} \
- --artifact-path target/spring-boot-complete-0.0.1-SNAPSHOT.jar \
- --env testEnvKey=testEnvValue \
- --runtime-version Java_11 \
- --jvm-options '-Xms1024m -Xmx2048m'
-```
-Deploying the application can take a few minutes.
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.
::: zone-end ::: zone pivot="sc-standard"
-## [Azure CLI](#tab/Azure-CLI)
--- [Apache Maven](https://maven.apache.org/download.cgi)-- [Azure CLI](/cli/azure/install-azure-cli). Install the Azure Spring Apps extension with the following command: `az extension add --name spring`-
-## Provision an instance of Azure Spring Apps
-
-Use the following steps to create an Azure Spring Apps service instance.
-
-1. Select **Open Cloudshell** and sign in to your Azure account in [Azure Cloud Shell](../cloud-shell/overview.md).
-
- ```azurecli-interactive
- az account show
- ```
-
-1. Azure Cloud Shell workspaces are temporary. When first started, the shell prompts you to associate an Azure Storage instance with your subscription to persist files across sessions. For more information, see [Introduction to Azure Storage](../storage/common/storage-introduction.md).
-
- :::image type="content" source="media/quickstart/azure-storage-subscription.png" alt-text="Screenshot of an Azure portal alert that no storage is mounted in the Azure Cloud Shell." lightbox="media/quickstart/azure-storage-subscription.png":::
-
-1. After you sign in successfully, use the following command to display a list of your subscriptions:
-
- ```azurecli-interactive
- az account list --output table
- ```
-
-1. Use the following command to set your default subscription:
-
- ```azurecli-interactive
- az account set --subscription <subscription-ID>
- ```
-
-1. Use the following command to create a resource group:
-
- ```azurecli-interactive
- az group create \
- --resource-group <name-of-resource-group> \
- --location eastus
- ```
-
-1. Use the following command to create an Azure Spring Apps service instance:
-
- ```azurecli-interactive
- az spring create \
- --resource-group <name-of-resource-group> \
- --name <Azure-Spring-Apps-instance-name>
- ```
-
-1. Select **Y** to install the Azure Spring Apps extension and run it.
-
-## Create an app in your Azure Spring Apps instance
-
-An *App* is an abstraction of one business app. For more information, see [App and deployment in Azure Spring Apps](concept-understand-app-and-deployment.md). Apps run in an Azure Spring Apps service instance, as shown in the following diagram.
--
-Use the following command to specify the app name on Azure Spring Apps as `hellospring`:
-
-```azurecli-interactive
-az spring app create \
- --resource-group <name-of-resource-group> \
- --service <Azure-Spring-Apps-instance-name> \
- --name hellospring \
- --assign-endpoint true
-```
-
-## Clone and build the Spring Boot sample project
-
-Use the following steps to clone the Spring Boot sample project.
-
-1. Use the following command to clone the [Spring Boot sample project](https://github.com/spring-guides/gs-spring-boot.git) from GitHub.
+### [Azure portal](#tab/Azure-portal)
- ```azurecli-interactive
- git clone -b boot-2.7 https://github.com/spring-guides/gs-spring-boot.git
- ```
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
-1. Use the following command to move to the project folder:
+### [Azure Developer CLI](#tab/Azure-Developer-CLI)
- ```azurecli-interactive
- cd gs-spring-boot/complete
- ```
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+- [Azure Developer CLI (AZD)](https://aka.ms/azd-install), version 1.0.2 or higher.
-1. Use the following [Maven](https://maven.apache.org/what-is-maven.html) command to build the project.
-
- ```azurecli-interactive
- mvn clean package -DskipTests
- ```
+
-## Deploy the local app to Azure Spring Apps
-Use the following command to deploy the *.jar* file for the app (*target/spring-boot-complete-0.0.1-SNAPSHOT.jar* on Windows):
-```azurecli-interactive
-az spring app deploy \
- --resource-group <name-of-resource-group> \
- --service <Azure-Spring-Apps-instance-name> \
- --name hellospring \
- --artifact-path target/spring-boot-complete-0.0.1-SNAPSHOT.jar
-```
+## [Azure CLI](#tab/Azure-CLI)
-Deploying the application can take a few minutes. After deployment, you can access the app at `https://<service-instance-name>-hellospring.azuremicroservices.io/`.
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.
## [IntelliJ](#tab/IntelliJ)
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
- [IntelliJ IDEA](https://www.jetbrains.com/idea/). - [Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/install-toolkit).
-## Generate a Spring project
-
-Use the following steps to create the project:
-
-1. Use [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.10&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client) to generate a sample project with recommended dependencies for Azure Spring Apps. The following URL provides default settings for you.
-
- ```url
- https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.10&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client
- ```
-
- The following image shows the recommended Initializr settings for the `hellospring` sample project.
-
- This example uses Java version 11. To use a different Java version, change the Java version setting under **Project Metadata**.
-
- :::image type="content" source="media/quickstart/initializr-page.png" alt-text="Screenshot of Spring Initializr settings with Java options highlighted." lightbox="media/quickstart/initializr-page.png":::
-
-1. When all dependencies are set, select **Generate**.
-1. Download and unpack the package, and then create a web controller for your web application by adding the file *src/main/java/com/example/hellospring/HelloController.java* with the following contents:
-
- ```java
- package com.example.hellospring;
-
- import org.springframework.web.bind.annotation.RestController;
- import org.springframework.web.bind.annotation.RequestMapping;
-
- @RestController
- public class HelloController {
-
- @RequestMapping("/")
- public String index() {
- return "Greetings from Azure Spring Apps!";
- }
- }
- ```
-
-## Create an instance of Azure Spring Apps
-
-Use the following steps to create an instance of Azure Spring Apps using the Azure portal.
-
-1. In a new tab, open the [Azure portal](https://portal.azure.com/).
-
-1. From the top search box, search for **Azure Spring Apps**.
-
-1. Select **Azure Spring Apps** from the results.
-
- :::image type="content" source="media/quickstart/spring-apps-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps service in search results." lightbox="media/quickstart/spring-apps-start.png":::
-
-1. On the Azure Spring Apps page, select **Create**.
-
- :::image type="content" source="media/quickstart/spring-apps-create.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps resource with Create button highlighted." lightbox="media/quickstart/spring-apps-create.png":::
-
-1. Fill out the form on the Azure Spring Apps **Create** page. Consider the following guidelines:
-
- - **Subscription**: Select the subscription you want to be billed for this resource.
- - **Resource group**: Creating new resource groups for new resources is a best practice.
- - **Name**: Specify the service instance name.
- - **Plan**: Select the *Standard* plan for your service instance.
- - **Region**: Select the region for your service instance.
- - **Zone Redundant**: Select the zone redundant checkout to create your Azure Spring Apps service in an Azure availability zone.
-
- :::image type="content" source="media/quickstart/portal-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Create page." lightbox="media/quickstart/portal-start.png":::
-
-1. Select **Review and Create** to review your selections. Select **Create** to provision the Azure Spring Apps instance.
-
-## Import the project
-
-Use the following steps to import the project.
-
-1. Open IntelliJ IDEA, and then select **Open**.
-1. In the **Open File or Project** dialog box, select the *hellospring* folder.
-
- :::image type="content" source="media/quickstart/intellij-new-project.png" alt-text="Screenshot of IntelliJ IDEA showing Open File or Project dialog box." lightbox="media/quickstart/intellij-new-project.png":::
-
-## Build and deploy your app
-
-> [!NOTE]
-> To run the project locally, add `spring.config.import=optional:configserver:` to the project's *application.properties* file.
-
-Use the following steps to build and deploy your app.
-
-1. If you haven't already installed the Azure Toolkit for IntelliJ, follow the steps in [Install the Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/install-toolkit).
-
-1. Right-click your project in the IntelliJ Project window, and then select **Azure** -> **Deploy to Azure Spring Apps**.
-
- :::image type="content" source="media/quickstart/intellij-deploy-azure.png" alt-text="Screenshot of IntelliJ IDEA menu showing Deploy to Azure Spring Apps option." lightbox="media/quickstart/intellij-deploy-azure.png":::
-
-1. Accept the name for the app in the **Name** field. **Name** refers to the configuration, not the app name. You don't usually need to change it.
-1. In the **Artifact** textbox, select **Maven:com.example:hellospring-0.0.1-SNAPSHOT**.
-1. In the **Subscription** textbox, verify that your subscription is correct.
-1. In the **Service** textbox, select the instance of Azure Spring Apps that you created in the [Provision an instance of Azure Spring Apps](#provision-an-instance-of-azure-spring-apps-1) section.
-1. In the **App** textbox, select the plus sign (**+**) to create a new app.
-
- :::image type="content" source="media/quickstart/intellij-create-new-app.png" alt-text="Screenshot of IntelliJ IDEA showing Deploy Azure Spring Apps dialog box." lightbox="media/quickstart/intellij-create-new-app.png":::
-
-1. In the **App name:** textbox under **App Basics**, enter *hellospring*, and then select the **More settings** check box.
-1. Select the **Enable** button next to **Public endpoint**. The button changes to **Disable \<to be enabled\>**.
-1. If you're using Java 11, select **Java 11** for the **Runtime** option.
-1. Select **OK**.
-
- :::image type="content" source="media/quickstart/intellij-more-settings.png" alt-text="Screenshot of IntelliJ IDEA Create Azure Spring Apps dialog box with public endpoint Disable button highlighted." lightbox="media/quickstart/intellij-more-settings.png":::
-
-1. Under **Before launch**, select **Run Maven Goal 'hellospring:package'**, and then select the pencil icon to edit the command line.
-
- :::image type="content" source="media/quickstart/intellij-edit-maven-goal.png" alt-text="Screenshot of IntelliJ IDEA Create Azure Spring Apps dialog box with Maven Goal edit button highlighted." lightbox="media/quickstart/intellij-edit-maven-goal.png":::
-
-1. In the **Command line** textbox, enter *-DskipTests* after *package*, and then select **OK**.
-
- :::image type="content" source="media/quickstart/intellij-maven-goal-command-line.png" alt-text="Screenshot of IntelliJ IDEA Select Maven Goal dialog box with Command Line value highlighted." lightbox="media/quickstart/intellij-maven-goal-command-line.png":::
-
-1. To start the deployment, select the **Run** button at the bottom of the **Deploy Azure Spring Apps app** dialog box. The plug-in runs the command `mvn package -DskipTests` on the `hellospring` app and deploys the *.jar* file generated by the `package` command.
-
-Deploying the application can take a few minutes. After deployment, you can access the app at `https://<service-instance-name>-hellospring.azuremicroservices.io/`.
- ## [Visual Studio Code](#tab/visual-studio-code)
-> [!NOTE]
-> To deploy a Spring Boot web app to Azure Spring Apps by using Visual Studio Code, follow the steps in [Java on Azure Spring Apps](https://code.visualstudio.com/docs/java/java-spring-apps).
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+- [Visual Studio Code](https://code.visualstudio.com/).
::: zone-end -
-## [Azure CLI](#tab/Azure-CLI)
--- [Apache Maven](https://maven.apache.org/download.cgi)-- [Azure CLI](/cli/azure/install-azure-cli). Install the Azure Spring Apps extension with the following command: `az extension add --name spring`-
-## Provision an instance of Azure Spring Apps
-
-Use the following steps to create an Azure Spring Apps service instance.
-
-1. Select **Open Cloudshell** and sign in to your Azure account in [Azure Cloud Shell](../cloud-shell/overview.md).
-
- ```azurecli-interactive
- az account show
- ```
-
-1. Azure Cloud Shell workspaces are temporary. When first started, the shell prompts you to associate an Azure Storage instance with your subscription to persist files across sessions. For more information, see [Introduction to Azure Storage](../storage/common/storage-introduction.md).
-
- :::image type="content" source="media/quickstart/azure-storage-subscription.png" alt-text="Screenshot of an Azure portal alert that no storage is mounted in the Azure Cloud Shell." lightbox="media/quickstart/azure-storage-subscription.png":::
-
-1. After you sign in successfully, use the following command to display a list of your subscriptions:
-
- ```azurecli-interactive
- az account list --output table
- ```
-1. Use the following command to set your default subscription:
- ```azurecli-interactive
- az account set --subscription <subscription-ID>
- ```
-1. Use the following command to create a resource group:
- ```azurecli-interactive
- az group create \
- --resource-group <name-of-resource-group> \
- --location eastus
- ```
-1. Use the following commands to accept the legal terms and privacy statements for the Enterprise plan. This step is necessary only if your subscription has never been used to create an Enterprise plan instance of Azure Spring Apps.
- ```azurecli-interactive
- az provider register --namespace Microsoft.SaaS
- az term accept \
- --publisher vmware-inc \
- --product azure-spring-cloud-vmware-tanzu-2 \
- --plan asa-ent-hr-mtr
- ```
-1. Use the following command to create an Azure Spring Apps service instance:
- ```azurecli-interactive
- az spring create \
- --resource-group <name-of-resource-group> \
- --name <Azure-Spring-Apps-instance-name> \
- --sku Enterprise
- ```
-## Create an app in your Azure Spring Apps instance
+## 5. Validate the app
-An *App* is an abstraction of one business app. For more information, see [App and deployment in Azure Spring Apps](concept-understand-app-and-deployment.md). Apps run in an Azure Spring Apps service instance, as shown in the following diagram.
+After deployment, you can access the app at `https://<your-Azure-Spring-Apps-instance-name>-demo.azuremicroservices.io`. When you open the app, you get the response `Hello World`.
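To check the endpoint from a terminal, you can issue a plain HTTP request; the instance name below is a placeholder.

```azurecli
# Quick check from a terminal; expect "Hello World" in the response body.
curl https://<your-Azure-Spring-Apps-instance-name>-demo.azuremicroservices.io
```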
-Use the following command to specify the app name on Azure Spring Apps as `hellospring`:
+Use the following command to check the app's log to investigate any deployment issue:
-```azurecli-interactive
-az spring app create \
- --resource-group <name-of-resource-group> \
- --service <Azure-Spring-Apps-instance-name> \
- --name hellospring \
- --assign-endpoint true
+```azurecli
+az spring app logs \
+ --service ${SERVICE_NAME} \
+ --name ${APP_NAME}
```
-## Clone and build the Spring Boot sample project
-
-Use the following steps to clone the Spring Boot sample project.
-
-1. Use the following command to clone the [Spring Boot sample project](https://github.com/spring-guides/gs-spring-boot.git) from GitHub.
- ```azurecli-interactive
- git clone -b boot-2.7 https://github.com/spring-guides/gs-spring-boot.git
- ```
-1. Use the following command to move to the project folder:
+From the navigation pane of the Azure Spring Apps instance overview page, select **Logs** to check the app's logs.
- ```azurecli-interactive
- cd gs-spring-boot/complete
- ```
-1. Use the following [Maven](https://maven.apache.org/what-is-maven.html) command to build the project.
- ```azurecli-interactive
- mvn clean package -DskipTests
- ```
-## Deploy the local app to Azure Spring Apps
+## [Azure CLI](#tab/Azure-CLI)
-Use the following command to deploy the *.jar* file for the app (*target/spring-boot-complete-0.0.1-SNAPSHOT.jar* on Windows):
+Use the following command to check the app's log to investigate any deployment issue:
-```azurecli-interactive
-az spring app deploy \
- --resource-group <name-of-resource-group> \
- --service <Azure-Spring-Apps-instance-name> \
- --name hellospring \
- --artifact-path target/spring-boot-complete-0.0.1-SNAPSHOT.jar
+```azurecli
+az spring app logs \
+ --service ${SERVICE_NAME} \
+ --name ${APP_NAME}
```
-Deploying the application can take a few minutes. After deployment, you can access the app at `https://<service-instance-name>-hellospring.azuremicroservices.io/`.
- ## [IntelliJ](#tab/IntelliJ) -- [IntelliJ IDEA](https://www.jetbrains.com/idea/).-- [Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/install-toolkit).-
-## Generate a Spring project
-
-Use the following steps to create the project:
-
-1. Use [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.10&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client) to generate a sample project with recommended dependencies for Azure Spring Apps. The following URL provides default settings for you.
-
- ```url
- https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.10&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client
- ```
-
- The following image shows the recommended Initializr settings for the `hellospring` sample project.
-
- This example uses Java version 11. To use a different Java version, change the Java version setting under **Project Metadata**.
-
- :::image type="content" source="media/quickstart/initializr-page.png" alt-text="Screenshot of Spring Initializr settings with Java options highlighted." lightbox="media/quickstart/initializr-page.png":::
-
-1. When all dependencies are set, select **Generate**.
-1. Download and unpack the package, and then create a web controller for your web application by adding the file *src/main/java/com/example/hellospring/HelloController.java* with the following contents:
-
- ```java
- package com.example.hellospring;
-
- import org.springframework.web.bind.annotation.RestController;
- import org.springframework.web.bind.annotation.RequestMapping;
-
- @RestController
- public class HelloController {
-
- @RequestMapping("/")
- public String index() {
- return "Greetings from Azure Spring Apps!";
- }
- }
- ```
-
-## Create an instance of Azure Spring Apps
-
-Use the following steps to create an instance of Azure Spring Apps using the Azure portal.
-
-1. In a new tab, open the [Azure portal](https://portal.azure.com/).
-
-1. From the top search box, search for **Azure Spring Apps**.
-
-1. Select **Azure Spring Apps** from the results.
-
- :::image type="content" source="media/quickstart/spring-apps-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps service in search results." lightbox="media/quickstart/spring-apps-start.png":::
-
-1. On the Azure Spring Apps page, select **Create**.
-
- :::image type="content" source="media/quickstart/spring-apps-create.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps resource with Create button highlighted." lightbox="media/quickstart/spring-apps-create.png":::
-
-1. Fill out the form on the Azure Spring Apps **Create** page. Consider the following guidelines:
-
- - **Subscription**: Select the subscription you want to be billed for this resource.
- - **Resource group**: Creating new resource groups for new resources is a best practice.
- - **Name**: Specify the service instance name.
- - **Plan**: Select the *Enterprise* plan for your service instance.
- - **Region**: Select the region for your service instance.
- - **Zone Redundant**: Select the zone redundant checkout to create your Azure Spring Apps service in an Azure availability zone.
- - **Plan**: Pay as you go with Azure Spring Apps.
- - **Terms**: It's required to select the agreement checkbox associated with [Marketplace offering](https://aka.ms/ascmpoffer).
-
- :::image type="content" source="media/quickstart/enterprise-plan-creation.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Create with enterprise plan page." lightbox="media/quickstart/enterprise-plan-creation.png":::
-
-1. Select **Review and Create** to review your selections. Select **Create** to provision the Azure Spring Apps instance.
-
-## Import the project
-
-Use the following steps to import the project.
-
-1. Open IntelliJ IDEA, and then select **Open**.
-1. In the **Open File or Project** dialog box, select the *hellospring* folder.
+Use the following steps to stream your application logs:
- :::image type="content" source="media/quickstart/intellij-new-project.png" alt-text="Screenshot of IntelliJ IDEA showing Open File or Project dialog box." lightbox="media/quickstart/intellij-new-project.png":::
+1. Open the **Azure Explorer** window, expand the node **Azure**, expand the service node **Azure Spring Apps**, expand the Azure Spring Apps instance you created, and then select the *demo* instance of the app you created.
+2. Right-click and select **Start Streaming Logs**, then select **OK** to see real-time application logs.
-## Build and deploy your app
-
-> [!NOTE]
-> To run the project locally, add `spring.config.import=optional:configserver:` to the project's *application.properties* file.
-
-Use the following steps to build and deploy your app.
-
-1. If you haven't already installed the Azure Toolkit for IntelliJ, follow the steps in [Install the Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/install-toolkit).
-
-1. Right-click your project in the IntelliJ Project window, and then select **Azure** -> **Deploy to Azure Spring Apps**.
-
- :::image type="content" source="media/quickstart/intellij-deploy-azure.png" alt-text="Screenshot of IntelliJ IDEA menu showing Deploy to Azure Spring Apps option." lightbox="media/quickstart/intellij-deploy-azure.png":::
-
-1. Accept the name for the app in the **Name** field. **Name** refers to the configuration, not the app name. You don't usually need to change it.
-1. In the **Artifact** textbox, select **Maven:com.example:hellospring-0.0.1-SNAPSHOT**.
-1. In the **Subscription** textbox, verify that your subscription is correct.
-1. In the **Service** textbox, select the instance of Azure Spring Apps that you created in the [Provision an instance of Azure Spring Apps](#provision-an-instance-of-azure-spring-apps-1) section.
-1. In the **App** textbox, select the plus sign (**+**) to create a new app.
-
- :::image type="content" source="media/quickstart/intellij-create-new-app.png" alt-text="Screenshot of IntelliJ IDEA showing Deploy Azure Spring Apps dialog box." lightbox="media/quickstart/intellij-create-new-app.png":::
-
-1. In the **App name:** textbox under **App Basics**, enter *hellospring*, and then select the **More settings** check box.
-1. Select the **Enable** button next to **Public endpoint**. The button changes to **Disable \<to be enabled\>**.
-1. If you're using Java 11, select **Java 11** for the **Runtime** option.
-1. Select **OK**.
-
- :::image type="content" source="media/quickstart/intellij-more-settings.png" alt-text="Screenshot of IntelliJ IDEA Create Azure Spring Apps dialog box with public endpoint Disable button highlighted." lightbox="media/quickstart/intellij-more-settings.png":::
-
-1. Under **Before launch**, select **Run Maven Goal 'hellospring:package'**, and then select the pencil icon to edit the command line.
-
- :::image type="content" source="media/quickstart/intellij-edit-maven-goal.png" alt-text="Screenshot of IntelliJ IDEA Create Azure Spring Apps dialog box with Maven Goal edit button highlighted." lightbox="media/quickstart/intellij-edit-maven-goal.png":::
-
-1. In the **Command line** textbox, enter *-DskipTests* after *package*, and then select **OK**.
-
- :::image type="content" source="media/quickstart/intellij-maven-goal-command-line.png" alt-text="Screenshot of IntelliJ IDEA Select Maven Goal dialog box with Command Line value highlighted." lightbox="media/quickstart/intellij-maven-goal-command-line.png":::
-
-1. To start the deployment, select the **Run** button at the bottom of the **Deploy Azure Spring Apps app** dialog box. The plug-in runs the command `mvn package -DskipTests` on the `hellospring` app and deploys the *.jar* file generated by the `package` command.
-
-Deploying the application can take a few minutes. After deployment, you can access the app at `https://<service-instance-name>-hellospring.azuremicroservices.io/`.
+ :::image type="content" source="media/quickstart/app-stream-log.png" alt-text="Screenshot of IntelliJ that shows the Azure Streaming Log." lightbox="media/quickstart/app-stream-log.png":::
## [Visual Studio Code](#tab/visual-studio-code)
-> [!NOTE]
-> To deploy a Spring Boot web app to Azure Spring Apps by using Visual Studio Code, follow the steps in [Java on Azure Spring Apps](https://code.visualstudio.com/docs/java/java-spring-apps).
+To stream your application logs, follow the steps in the [Stream your application logs](https://code.visualstudio.com/docs/java/java-spring-apps#_stream-your-application-logs) section of [Java on Azure Spring Apps](https://code.visualstudio.com/docs/java/java-spring-apps).
::: zone-end
-## Clean up resources
-If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When you no longer need the resources, delete them by deleting the resource group. Use the following commands to delete the resource group:
+## 7. Next steps
-```azurecli-interactive
-echo "Enter the Resource Group name:" &&
-read resourceGroupName &&
-az group delete --name $resourceGroupName &&
-echo "Press [ENTER] to continue ..."
-```
+> [!div class="nextstepaction"]
+> [Structured application log for Azure Spring Apps](./structured-app-log.md)
+
+> [!div class="nextstepaction"]
+> [Map an existing custom domain to Azure Spring Apps](./how-to-custom-domain.md)
+
+> [!div class="nextstepaction"]
+> [Use Azure Spring Apps CI/CD with GitHub Actions](./how-to-github-actions.md)
-## Next steps
+> [!div class="nextstepaction"]
+> [Automate application deployments to Azure Spring Apps](./how-to-cicd.md)
-In this quickstart, you learned how to build and deploy a Spring app in an Azure Spring Apps service instance. You also learned how to deploy an app with a public endpoint, and how to clean up resources.
+> [!div class="nextstepaction"]
+> [Use managed identities for applications in Azure Spring Apps](./how-to-use-managed-identities.md)
-You have access to powerful logs, metrics, and distributed tracing capability from the Azure portal. For more information, see [Quickstart: Monitoring Azure Spring Apps apps with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md).
+> [!div class="nextstepaction"]
+> [Create a service connection in Azure Spring Apps with the Azure CLI](../service-connector/quickstart-cli-spring-cloud-connection.md)
-To learn how to use more Azure Spring capabilities, advance to the quickstart series that deploys a sample application to Azure Spring Apps:
> [!div class="nextstepaction"] > [Introduction to the sample app](./quickstart-sample-app-introduction.md)
-To learn how to create a Standard consumption and dedicated plan in Azure Spring Apps for app deployment, advance to the Standard consumption and dedicated quickstart series:
> [!div class="nextstepaction"]
-> [Provision an Azure Spring Apps Standard consumption and dedicated plan service instance](./quickstart-provision-standard-consumption-service-instance.md)
+> [Introduction to the Fitness Store sample app](./quickstart-sample-app-acme-fitness-store-introduction.md)
::: zone-end
-For a packaged app template with Azure Spring Apps infrastructure provisioned using Bicep, see [Spring Boot PetClinic Microservices Application Deployed to Azure Spring Apps](https://github.com/Azure-Samples/apptemplates-microservices-spring-app-on-AzureSpringApps).
+For more information, see the following articles:
-More samples are available on GitHub: [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).
+- [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples)
+- [Spring on Azure](/azure/developer/java/spring/)
+- [Spring Cloud Azure](/azure/developer/java/spring-framework/)
spring-apps Tutorial Circuit Breaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-circuit-breaker.md
mvn clean package -DskipTests -f hystrix-turbine/pom.xml
## Provision your Azure Spring Apps instance
-Follow the steps in the [Provision an instance of Azure Spring Apps](./quickstart.md#provision-an-instance-of-azure-spring-apps) section of [Quickstart: Deploy your first application to Azure Spring Apps](quickstart.md).
+Follow the steps in the [Provision an instance of Azure Spring Apps](./quickstart.md#32-create-an-azure-spring-apps-instance) section of [Quickstart: Deploy your first application to Azure Spring Apps](quickstart.md).
## Deploy your applications to Azure Spring Apps
As a web app, Hystrix dashboard should be working on `test-endpoint`. If it isn'
## Next steps
-* [Provision an instance of Azure Spring Apps](./quickstart.md#provision-an-instance-of-azure-spring-apps) section of [Quickstart: Deploy your first application to Azure Spring Apps](quickstart.md).
+* [Provision an instance of Azure Spring Apps](./quickstart.md#32-create-an-azure-spring-apps-instance) section of [Quickstart: Deploy your first application to Azure Spring Apps](quickstart.md).
* [Prepare a Java Spring application for deployment in Azure Spring Apps](how-to-prepare-app-deployment.md)
spring-apps Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/whats-new.md
The following updates are now available in the Enterprise plan:
- **High Availability support for App Accelerator and App Live View**: App Accelerator and App Live View now support multiple replicas to offer high availability. For more information, see [Configure Tanzu Dev Tools in the Azure Spring Apps Enterprise plan](how-to-use-dev-tool-portal.md). -- **Spring Cloud Gateway auto scaling**: Spring Cloud Gateway now supports auto scaling to better serve the elastic traffic without the hassle of manual scaling. For more information, see the [Set up autoscale settings for VMware Spring Cloud Gateway in Azure CLI](how-to-configure-enterprise-spring-cloud-gateway.md?tabs=Azure-portal#set-up-autoscale-settings-for-vmware-spring-cloud-gateway-in-azure-cli) section of [Configure VMware Spring Cloud Gateway](how-to-configure-enterprise-spring-cloud-gateway.md).
+- **Spring Cloud Gateway auto scaling**: Spring Cloud Gateway now supports auto scaling to better serve the elastic traffic without the hassle of manual scaling. For more information, see the [Set up autoscale settings](how-to-configure-enterprise-spring-cloud-gateway.md?tabs=Azure-portal#set-up-autoscale-settings) section of [Configure VMware Spring Cloud Gateway](how-to-configure-enterprise-spring-cloud-gateway.md).
- **Application Configuration Service - polyglot support**: This update enables you to use Application Configuration Service to manage external configurations for any polyglot app, such as .NET, Go, and so on. For more information, see the [Polyglot support](how-to-enterprise-application-configuration-service.md#polyglot-support) section of [Use Application Configuration Service for Tanzu](how-to-enterprise-application-configuration-service.md).
static-web-apps Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/getting-started.md
Previously updated : 06/28/2022 Last updated : 08/10/2023
If you don't already have the [Azure Static Web Apps extension for Visual Studio
3. Enter **Create static web app** in the command box.
-4. Select *Azure Static Web Apps: Create static web app...* and select **Enter**.
+4. Select *Azure Static Web Apps: Create static web app...*.
# [No Framework](#tab/vanilla-javascript)
If you don't already have the [Azure Static Web Apps extension for Visual Studio
| | | | Name | Enter **my-first-static-web-app** | | Region | Select the region closest to you. |
- | Build preset | Select **Custom**. |
+ | Framework | Select **Custom**. |
# [Angular](#tab/angular)
If you don't already have the [Azure Static Web Apps extension for Visual Studio
| | | | Name | Enter **my-first-static-web-app** | | Region | Select the region closest to you. |
- | Build preset | Select **Angular**. |
+ | Framework | Select **Angular**. |
# [Blazor](#tab/blazor)
If you don't already have the [Azure Static Web Apps extension for Visual Studio
| | | | Name | Enter **my-first-static-web-app** | | Region | Select the region closest to you. |
- | Build preset | Select **Blazor**. |
+ | Framework | Select **Blazor**. |
# [React](#tab/react)
If you don't already have the [Azure Static Web Apps extension for Visual Studio
| | | | Name | Enter **my-first-static-web-app** | | Region | Select the region closest to you. |
- | Build preset | Select **React**. |
+ | Framework | Select **React**. |
# [Vue](#tab/vue)
If you don't already have the [Azure Static Web Apps extension for Visual Studio
| | | | Name | Enter **my-first-static-web-app** | | Region | Select the region closest to you. |
- | Build preset | Select **Vue.js**. |
+ | Framework | Select **Vue.js**. |
static-web-apps Publish Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-azure-resource-manager.md
This article uses a GitHub template repository to make it easy for you to get st
> [!NOTE] > Azure Static Web Apps requires at least one HTML file to create a web app. The repository you create in this step includes a single _index.html_ file.
-1. Select **Create repository from template**.
+1. Select **Create repository**.
- :::image type="content" source="./media/getting-started/create-template.png" alt-text="Create repository from template":::
+ :::image type="content" source="./media/getting-started/create-template.png" alt-text="Screenshot of the Create repository button.":::
## Create the ARM Template
storage Access Tiers Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-best-practices.md
description: Learn about best practice guidelines that help you use access tiers
Previously updated : 05/30/2023 Last updated : 08/10/2023
To identify the most optimal access tier, try to estimate what percentage of the
To model and analyze the cost of using cool or cold versus archive storage, see [Archive versus cold and cool](archive-cost-estimation.md#archive-versus-cold-and-cool). You can apply similar modeling techniques to compare the cost of hot to cool, cold or archive.
-> [!IMPORTANT]
-> The cold tier is currently in PREVIEW. To learn more, see [Cold tier (preview)](access-tiers-overview.md#cold-tier-preview).
- ## Migrate data directly to the most cost-efficient access tiers Choosing the most optimal tier up front can reduce costs. If you change the tier of a block blob that you've already uploaded, then you'll pay the cost of writing to the initial tier when you first upload the blob, and then pay the cost of writing to the desired tier. If you change tiers by using a lifecycle management policy, then that policy will require a day to take effect and a day to complete execution. You'll also incur the capacity cost of storing data in the initial tier prior to the tier change.
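As a hedged illustration of uploading data directly into the intended tier (the account, container, blob, and file names are placeholders), the Azure CLI lets you set the tier at upload time so the initial write already lands in the most cost-efficient tier:

```azurecli
# Upload a block blob straight into the cool tier so you pay for only one
# write instead of an upload followed by a tier change. Names are placeholders.
az storage blob upload \
    --account-name <storage-account-name> \
    --container-name <container-name> \
    --name <blob-name> \
    --file <local-file-path> \
    --tier Cool \
    --auth-mode login
```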
storage Access Tiers Online Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-online-manage.md
description: Learn how to specify a blob's access tier when you upload it, or ho
Previously updated : 06/22/2023 Last updated : 08/10/2023
This article describes how to manage a blob in an online access tier. For more i
For more information about access tiers for blobs, see [Access tiers for blob data](access-tiers-overview.md).
-> [!IMPORTANT]
-> The cold tier is currently in PREVIEW and is available in the all public regions.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-> To enroll, see [Cold tier (preview)](access-tiers-overview.md#cold-tier-preview).
- ## Set the default access tier for a storage account The default access tier setting for a general-purpose v2 storage account determines in which online tier a new blob is created by default. You can set the default access tier for a general-purpose v2 storage account at the time that you create the account or by updating an existing account's configuration.
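A minimal sketch, assuming placeholder account and resource group names, of updating an existing general-purpose v2 account's default access tier with the Azure CLI:

```azurecli
# Change the default access tier of an existing storage account to Cool.
# New blobs uploaded without an explicit tier then default to Cool.
az storage account update \
    --name <storage-account-name> \
    --resource-group <resource-group-name> \
    --access-tier Cool
```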
To upload a blob or set of blobs to a specific tier from the Azure portal, follo
4. Expand the **Advanced** section, and set the **Access tier** to *Hot* or *Cool*.
- > [!NOTE]
- > The cold tier is in preview and appears as an option if the storage account is in a region that supports the preview.
- 5. Select the **Upload** button. :::image type="content" source="media/access-tiers-online-manage/upload-blob-to-online-tier-portal.png" alt-text="Screenshot showing how to upload blobs to an online tier in the Azure portal.":::
az storage blob set-tier \
#### [AzCopy](#tab/azcopy)
-To change a blob's tier to a cooler tier, use the [azcopy set-properties](..\common\storage-ref-azcopy-set-properties.md) command and set the `-block-blob-tier` parameter.
+To change a blob's tier to a cooler tier, use the [azcopy set-properties](..\common\storage-ref-azcopy-set-properties.md) command and set the `--block-blob-tier` parameter.
> [!IMPORTANT] > Using AzCopy to change a blob's access tier is currently in PREVIEW.
To change a blob's tier to a cooler tier, use the [azcopy set-properties](..\com
azcopy set-properties 'https://<storage-account-name>.blob.core.windows.net/<container-name>/<blob-name>' --block-blob-tier=<tier> ```
-> [!NOTE]
-> Setting the `--block-blob-tier` parameter to `cold` is not yet supported. If you want to change a blob's tier to the `cold` tier, [enroll](https://forms.office.com/r/788B1gr3Nq) in the cold tier preview, and then change the blob's tier to cold by using the Azure portal, PowerShell, or the Azure CLI.
- To change the access tier for all blobs in a virtual directory, refer to the virtual directory name instead of the blob name, and then append `--recursive=true` to the command. ```azcopy
storage Access Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md
description: Azure storage offers different access tiers so that you can store y
Previously updated : 07/13/2023 Last updated : 08/10/2023
Data stored in the cloud grows at an exponential pace. To manage costs for your
- **Cold tier** - An online tier optimized for storing data that is infrequently accessed or modified. Data in the cold tier should be stored for a minimum of **90** days. The cold tier has lower storage costs and higher access costs compared to the cool tier. - **Archive tier** - An offline tier optimized for storing data that is rarely accessed, and that has flexible latency requirements, on the order of hours. Data in the archive tier should be stored for a minimum of 180 days.
-> [!IMPORTANT]
-> The cold tier is currently in PREVIEW and is available in all public regions.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-> To enroll, see [Cold tier (preview)](#cold-tier-preview).
- Azure storage capacity limits are set at the account level, rather than according to access tier. You can choose to maximize your capacity usage in one tier, or to distribute capacity across two or more tiers. > [!NOTE]
Blob storage lifecycle management offers a rule-based policy that you can use to
The following table summarizes the features of the hot, cool, cold, and archive access tiers.
-| | **Hot tier** | **Cool tier** | **Cold tier (preview)** |**Archive tier** |
+| | **Hot tier** | **Cool tier** | **Cold tier** |**Archive tier** |
|--|--|--|--|--| | **Availability** | 99.9% | 99% | 99% | 99% | | **Availability** <br> **(RA-GRS reads)** | 99.99% | 99.9% | 99.9% | 99.9% |
The following table summarizes how tier changes are billed.
Changing the access tier for a blob when versioning is enabled, or if the blob has snapshots, might result in more charges. For information about blobs with versioning enabled, see [Pricing and billing](versioning-overview.md#pricing-and-billing) in the blob versioning documentation. For information about blobs with snapshots, see [Pricing and billing](snapshots-overview.md#pricing-and-billing) in the blob snapshots documentation.
-## Cold tier (preview)
-
-The cold tier is currently in PREVIEW and is available in all public regions except Poland Central and Qatar Central.
-
-### Enrolling in the preview
+## Cold tier
-You can validate cold tier on a general-purpose v2 storage account from any subscription in Azure public cloud. It's still recommended to share your scenario in the [preview form](https://forms.office.com/r/788B1gr3Nq).
+The cold tier is now generally available in all public and Azure Government regions except Poland Central and Qatar Central.
### Limitations and known issues
storage Anonymous Read Access Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-configure.md
description: Learn how to allow or disallow anonymous access to blob data for the storage account. Set the container public access setting to make containers and blobs available for anonymous access. -+ Last updated 11/09/2022
storage Anonymous Read Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-overview.md
description: Learn how to remediate anonymous public read access to blob data for both Azure Resource Manager and classic storage accounts. -+ Last updated 11/09/2022
storage Anonymous Read Access Prevent Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-prevent-classic.md
description: Learn how to prevent anonymous requests against a classic storage account by disabling anonymous public access to containers. -+ Last updated 11/09/2022
storage Anonymous Read Access Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-prevent.md
description: Learn how to analyze anonymous requests against a storage account and how to prevent anonymous access for the entire storage account or for an individual container. -+ Last updated 05/23/2023
storage Archive Cost Estimation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-cost-estimation.md
description: Learn how to calculate the cost of storing and maintaining data in
Previously updated : 05/30/2023 Last updated : 08/10/2023
This scenario assumes a monthly ingest of 200,000 files totaling 10,240 GB in si
Archive storage is the lowest cost tier. However, it can take up to 15 hours to rehydrate 10 GiB files. To learn more, see [Blob rehydration from the Archive tier](archive-rehydrate-overview.md). The archive tier might not be the best fit if your workloads must read data quickly. The cool tier offers a near real-time read latency with a lower price than the hot tier. Understanding your access requirements will help you to choose between the cool, cold, and archive tiers. The following table compares the cost of archive storage with the cost of cool and cold storage by using the [Sample prices](#sample-prices) that appear in this article. This scenario assumes a monthly ingest of 200,000 files totaling 10,240 GB in size to archive. It also assumes one read each month of about 10% of stored capacity (1,024 GB), and 10% of total transactions (20,000).-
-> [!IMPORTANT]
-> The cold tier is currently in PREVIEW. To learn more, see [Cold tier (preview)](access-tiers-overview.md#cold-tier-preview).
- <br><br> <table>
storage Assign Azure Role Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/assign-azure-role-data-access.md
description: Learn how to assign permissions for blob data to an Azure Active Directory security principal with Azure role-based access control (Azure RBAC). Azure Storage supports built-in and Azure custom roles for authentication and authorization via Azure AD. -+ Last updated 04/19/2022
storage Authorize Data Operations Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-portal.md
description: When you access blob data using the Azure portal, the portal makes requests to Azure Storage under the covers. These requests to Azure Storage can be authenticated and authorized using either your Azure AD account or the storage account access key. -+ Last updated 12/10/2021
storage Authorize Data Operations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-powershell.md
description: PowerShell supports signing in with Azure AD credentials to run commands on blob data in Azure Storage. An access token is provided for the session and used to authorize calling operations. Permissions depend on the Azure role assigned to the Azure AD security principal. -+ Last updated 05/12/2022
storage Blob Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-cli.md
description: Manage blobs with Azure CLI
-+ Last updated 03/02/2022 ms.devlang: azurecli
storage Blob Containers Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-containers-cli.md
description: Learn how to manage Azure storage containers using Azure CLI
-+ Last updated 02/05/2022
storage Blob Containers Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-containers-portal.md
description: Learn how to manage Azure storage containers using the Azure portal
-+ Last updated 07/18/2022
storage Blob Containers Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-containers-powershell.md
description: Learn how to manage Azure storage containers using Azure PowerShell
-+ Last updated 10/03/2022
storage Blob Inventory Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory-faq.md
- Title: Azure Storage blob inventory FAQ
-description: In this article, learn about frequently asked questions about Azure Storage blob inventory
---- Previously updated : 08/01/2023-----
-# Azure Storage blob inventory frequently asked questions
-
-This article provides answers to some of the most common questions about Azure Storage blob inventory.
-
-## Multiple inventory file output
-
-Blob Inventory report produces three types of files. See [Inventory files](blob-inventory.md#inventory-files). Existing customers using blob inventory might see a change in the number of inventory files, from one file to multiple files. Today, we already have manifest file that provides the list of files. This behavior remains unchanged, so these files are listed in the manifest file.
-
-### Why was the change made?
-
-The change was implemented to enhance blob inventory performance, particularly for large storage accounts containing over five million objects. Now, results are written in parallel to multiple files, eliminating the bottleneck of using a single inventory file. This change was prompted by customer feedback, as they reported difficulties in opening and working with the excessively large single inventory file.
-
-### How does this change affect me as a user?
-
-As a user, this change has a positive impact on your experience with blob inventory runs. It's expected to enhance performance and reduce the overall running time. However, to fully benefit from this improvement, you must ensure that your code is updated to process multiple results files instead of just one. This adjustment aligns your code with the new approach and optimizes the handling of inventory data.
-
-### Is my existing data affected?
-
-No, existing data isn't affected. Only new blob inventory results have multiple inventory files.
-
-### Will there be any downtime or service interruptions?
-
-No, the change happens seamlessly.
-
-### Is there anything I need to do differently now?
-
-Your required actions depend on how you currently process blob inventory results:
--- If your current processing assumes a single inventory results file, then you need to modify your code to accommodate multiple inventory results files.--- However, if your current processing involves reading the list of results files from the manifest file, there's no need to make any changes to how you process the results. The existing approach continues to work seamlessly with the updated feature.-
-### Can I revert to the previous behavior if I don't like the change?
-
-This isn't recommended, but it's possible. Please work through your support channels to ask to turn off this feature.
-
-### How can I provide feedback or report issues related to the changes?
-
-Please work through your current account team and support channels.
-
-### When will this change take effect?
-
-This change will start gradual rollout starting September 1, 2023.
-
-## Next steps
--- [Azure Storage blob inventory](blob-inventory.md)--- [Enable Azure Storage blob inventory reports](blob-inventory-how-to.md)--- [Calculate the count and total size of blobs per container](calculate-blob-count-size.md)--- [Tutorial: Analyze blob inventory reports](storage-blob-inventory-report-analytics.md)--- [Manage the Azure Blob Storage lifecycle](./lifecycle-management-overview.md)
storage Blob Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md
Each inventory rule generates a set of files in the specified inventory destinat
Each inventory run for a rule generates the following files: -- **Inventory file:** An inventory run for a rule generates one or more CSV or Apache Parquet formatted files. If the matched object count is large, then multiple files are generated instead of a single file. Each such file contains matched objects and their metadata.
+- **Inventory file:** An inventory run for a rule generates multiple CSV or Apache Parquet formatted files. Each such file contains matched objects and their metadata.
- > [!NOTE]
- > Reports in the Apache Parquet format present dates in the following format: `timestamp_millis [number of milliseconds since 1970-01-01 00:00:00 UTC`].
-
- For a CSV formatted file, the first row is always the schema row. The following image shows an inventory CSV file opened in Microsoft Excel.
+ > [!IMPORTANT]
+ > Until September 8, 2023, runs can produce a single inventory file in cases where the matched object count is small. After September 8, 2023, all runs will produce multiple files regardless of the matched object count. To learn more, see [Multiple inventory file output FAQ](storage-blob-faq.yml#multiple-inventory-file-output).
+
+ Reports in the Apache Parquet format present dates in the following format: `timestamp_millis [number of milliseconds since 1970-01-01 00:00:00 UTC]`. For a CSV formatted file, the first row is always the schema row. The following image shows an inventory CSV file opened in Microsoft Excel.
:::image type="content" source="./media/blob-inventory/csv-file-excel.png" alt-text="Screenshot of an inventory CSV file opened in Microsoft Excel":::
An inventory job can take a longer amount of time in these cases:
An object replication policy can prevent an inventory job from writing inventory reports to the destination container. Some other scenarios can archive the reports or make the reports immutable when they're partially completed, which can cause inventory jobs to fail.
-## Blob Inventory ΓÇô Multiple results files FAQ
-
-### What is the feature that has changed? What specific change was made?
-
-Blob Inventory report produces three types of files. See [Inventory files](#inventory-files). Existing customers using blob inventory might see a change in the number of inventory files, from one file to multiple files. Today, we already have manifest file which provides the list of files. This behavior remains unchanged so these files will be listed in the manifest file.
-
-### Why was the change made?
-
-The change was implemented to enhance blob inventory performance, particularly for large storage accounts containing over five million objects. Now, results are written in parallel to multiple files, eliminating the bottleneck of using a single inventory file. This change was prompted by customer feedback, as they reported difficulties in opening and working with the excessively large single inventory file.
-
-### How will this change affect me as a user?
-
-As a user, this change will have a positive impact on your experience with blob inventory runs. It is expected to enhance performance and reduce the overall running time. However, to fully benefit from this improvement, you must ensure that your code is updated to process multiple results files instead of just one. This adjustment will align your code with the new approach and optimize the handling of inventory data.
-
-### Will my existing data be affected?
-
-No, existing data is not affected. Only new blob inventory results will have multiple inventory files.
-
-### Will there be any downtime or service interruptions?
-
-No, the change will happen seamlessly.
-
-### Is there anything I need to do differently now?
-
-Your required actions depend on how you currently process blob inventory results:
-
-1. If your current processing assumes a single inventory results file, then you will need to modify your code to accommodate multiple inventory results files.
-
-2. However, if your current processing involves reading the list of results files from the manifest file, there is no need to make any changes to how you process the results. The existing approach will continue to work seamlessly with the updated feature.
-
-### Can I revert to the previous behavior if I don't like the change?
-
-This is not recommended, but it is possible. Please work through your support channels to ask to turn this feature off.
-
-### How can I provide feedback or report issues related to the changes?
-
-Please work through your current account team and support channels.
-
-### When will this change take effect?
-
-This change will start gradual rollout starting September 1st 2023.
- ## Next steps - [Enable Azure Storage blob inventory reports](blob-inventory-how-to.md)
storage Blob Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-powershell.md
description: Manage blobs with PowerShell -+ Last updated 05/02/2023 ms.devlang: powershell
storage Blob Upload Function Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-upload-function-trigger.md
Title: Upload and analyze a file with Azure Functions and Blob Storage
description: Learn how to upload an image to Azure Blob Storage and analyze its content using Azure Functions and Azure AI services -+ Last updated 3/11/2022 ms.devlang: csharp
storage Blobfuse2 Commands Completion Bash https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-bash.md
Title: How to use the 'blobfuse2 completion bash' command to generate the autoco
description: Learn how to use the completion bash command to generate the autocompletion script for BlobFuse2. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Completion Fish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-fish.md
Title: How to use the 'blobfuse2 completion fish' command to generate the autoco
description: Learn how to use the 'blobfuse2 completion fish' command to generate the autocompletion script for BlobFuse2. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Completion Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-powershell.md
Title: How to use the 'blobfuse2 completion powershell' command to generate the
description: Learn how to use the 'blobfuse2 completion powershell' command to generate the autocompletion script for BlobFuse2. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Completion Zsh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-zsh.md
Title: How to use the 'blobfuse2 completion zsh' command to generate the autocom
description: Learn how to use the 'blobfuse2 completion zsh' command to generate the autocompletion script for BlobFuse2. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Completion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion.md
Title: How to use the 'blobfuse2 completion' command to generate the autocomplet
description: Learn how to use the 'blobfuse2 completion' command to generate the autocompletion script for BlobFuse2. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-help.md
Title: How to use 'blobfuse2 help' to get help info for the BlobFuse2 command an
description: Learn how to use 'blobfuse2 help' to get help info for the BlobFuse2 command and subcommands. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Mount All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount-all.md
Title: How to use the 'blobfuse2 mount all' command to mount all blob containers
description: Learn how to use the 'blobfuse2 mount all' all command to mount all blob containers in a storage account as a Linux file system. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Mount List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount-list.md
Title: How to use the 'blobfuse2 mount list' command to display all BlobFuse2 mo
description: Learn how to use the 'blobfuse2 mount list' command to display all BlobFuse2 mount points. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount.md
Title: How to use the 'blobfuse2 mount' command to mount a Blob Storage containe
description: Learn how to use the 'blobfuse2 mount' command to mount a Blob Storage container as a file system in Linux, or to display and manage existing mount points. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Mountv1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mountv1.md
Title: How to generate a configuration file for BlobFuse2 from a BlobFuse v1 con
description: How to generate a configuration file for BlobFuse2 from a BlobFuse v1 configuration file. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Secure Decrypt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-decrypt.md
Title: How to use the `blobfuse2 secure decrypt` command to decrypt a BlobFuse2
description: Learn how to use the `blobfuse2 secure decrypt` command to decrypt a BlobFuse2 configuration file. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Secure Encrypt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-encrypt.md
Title: How to use the `blobfuse2 secure encrypt` command to encrypt a BlobFuse2
description: Learn how to use the `blobfuse2 secure encrypt` command to encrypt a BlobFuse2 configuration file. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Secure Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-get.md
Title: How to use the 'blobfuse2 secure get' command to display the value of a p
description: Learn how to use the 'blobfuse2 secure get' command to display the value of a parameter from an encrypted BlobFuse2 configuration file -+ Last updated 12/02/2022
storage Blobfuse2 Commands Secure Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-set.md
Title: How to use the 'blobfuse2 secure set' command to change the value of a pa
description: Learn how to use the 'blobfuse2 secure set' command to change the value of a parameter in an encrypted BlobFuse2 configuration file -+ Last updated 12/02/2022
storage Blobfuse2 Commands Secure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure.md
Title: How to use the 'blobfuse2 secure' command to encrypt, decrypt, or access
description: Learn how to use the 'blobfuse2 secure' command to encrypt, decrypt, or access settings in a BlobFuse2 configuration file. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Unmount All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-unmount-all.md
Title: How to use the 'blobfuse2 unmount all' command to unmount all blob contai
description: Learn how to use the 'blobfuse2 unmount all' command to unmount all blob containers in a storage account as a Linux file system. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Unmount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-unmount.md
Title: How to use the 'blobfuse2 unmount' command to unmount an existing mount p
description: How to use the 'blobfuse2 unmount' command to unmount an existing mount point. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-version.md
Title: How to use the 'blobfuse2 version' command to get the current version and
description: Learn how to use the 'blobfuse2 version' command to get the current version and optionally check for a newer one. -+ Last updated 12/02/2022
storage Blobfuse2 Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands.md
Title: How to use the BlobFuse2 command set
description: Learn how to use the BlobFuse2 command set to mount blob storage containers as file systems on Linux, and manage them. -+ Last updated 12/02/2022
storage Blobfuse2 Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-configuration.md
description: Learn how to configure settings for BlobFuse2.
-+ Last updated 12/02/2022
storage Blobfuse2 Health Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-health-monitor.md
description: Learn how to Use Health Monitor to gain insights into BlobFuse2 mou
-+ Last updated 12/02/2022
storage Blobfuse2 How To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-how-to-deploy.md
description: Learn how to mount an Azure Blob Storage container on Linux with Bl
-+ Last updated 01/26/2023
storage Blobfuse2 Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-troubleshooting.md
description: Learn how to troubleshoot issues in BlobFuse2.
-+ Last updated 12/02/2022
storage Blobfuse2 What Is https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-what-is.md
description: An overview of how to use BlobFuse to mount an Azure Blob Storage c
-+ Last updated 12/02/2022
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
description: Use Azure Storage lifecycle management policies to create automated
Previously updated : 08/03/2023 Last updated : 08/10/2023
With the lifecycle management policy, you can:
- Define rules to be run once per day at the storage account level. - Apply rules to containers or to a subset of blobs, using name prefixes or [blob index tags](storage-manage-find-blobs.md) as filters.
-> [!IMPORTANT]
-> The cold tier is currently in PREVIEW and is available in all public regions.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-> To enroll, see [Cold tier (preview)](access-tiers-overview.md#cold-tier-preview).
- Consider a scenario where data is frequently accessed during the early stages of the lifecycle, but only occasionally after two weeks. Beyond the first month, the data set is rarely accessed. In this scenario, hot storage is best during the early stages. Cool storage is most appropriate for occasional access. Archive storage is the best tier option after the data ages over a month. By moving data to the appropriate storage tier based on its age with lifecycle management policy rules, you can design the least expensive solution for your needs. Lifecycle management policies are supported for block blobs and append blobs in general-purpose v2, premium block blob, and Blob Storage accounts. Lifecycle management doesn't affect system containers such as the `$logs` or `$web` containers.
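For the scenario above, a minimal, hedged sketch of such a policy follows; the rule thresholds, file name, and account names are placeholders rather than a definitive configuration:

```azurecli
# Hypothetical rule: move block blobs to cool 14 days after modification
# and to archive after 30 days. Thresholds and names are placeholders.
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "age-based-tiering",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 14 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 30 }
          }
        },
        "filters": { "blobTypes": [ "blockBlob" ] }
      }
    }
  ]
}
EOF

# Apply the policy to the storage account.
az storage account management-policy create \
    --account-name <storage-account-name> \
    --resource-group <resource-group-name> \
    --policy @policy.json
```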
Each update to a blob's last access time is billed under the [other operations](
For more information about pricing, see [Block Blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
-## FAQ
-
-### I created a new policy. Why do the actions not run immediately?
-
-The platform runs the lifecycle policy once a day. Once you configure a policy, it can take up to 24 hours to go into effect. Once the policy is in effect, it could take up to 24 hours for some actions to run for the first time.
-
-### If I update an existing policy, how long does it take for the actions to run?
-
-The updated policy takes up to 24 hours to go into effect. Once the policy is in effect, it could take up to 24 hours for the actions to run. Therefore, the policy actions may take up to 48 hours to complete. If the update is to disable or delete a rule, and enableAutoTierToHotFromCool was used, auto-tiering to Hot tier will still happen. For example, set a rule including enableAutoTierToHotFromCool based on last access. If the rule is disabled/deleted, and a blob is currently in cool or cold and then accessed, it will move back to Hot as that is applied on access outside of lifecycle management. The blob won't then move from hot to cool or cold given the lifecycle management rule is disabled/deleted. The only way to prevent autoTierToHotFromCool is to turn off last access time tracking.
-
-### The run completes but doesn't move or delete some blobs
-
-Depending on the size and the number of objects that are in a storage account, more than one run might be required to process all of the objects. You can also check the storage resource logs to see if the operations are being performed by the lifecycle management policy.
-
-### I don't see capacity changes even though the policy is executing and deleting the blobs
-
-Check to see if data protection features such as soft delete or versioning are enabled on the storage account. Even if the policy is deleting the blobs, those blobs might still exist in a soft deleted state or as an older version depending on how these features are configured.
-
-### I rehydrated an archived blob. How do I prevent it from being moved back to the Archive tier temporarily?
-
-If there's a lifecycle management policy in effect for the storage account, then rehydrating a blob by changing its tier can result in a scenario where the lifecycle policy moves the blob back to the archive tier. This can happen if the last modified time, creation time, or last access time is beyond the threshold set for the policy. There are three ways to prevent this from happening:
--- Add the `daysAfterLastTierChangeGreaterThan` condition to the tierToArchive action of the policy. This condition applies only to the last modified time. See [Use lifecycle management policies to archive blobs](archive-blob.md#use-lifecycle-management-policies-to-archive-blobs).--- Disable the rule that affects this blob temporarily to prevent it from being archived again. Re-enable the rule when the blob can be safely moved back to archive tier. --- If the blob needs to stay in the hot, cool, or cold tier permanently, copy the blob to another location where the lifecycle manage policy isn't in effect.-
-### The blob prefix match string didn't apply the policy to the expected blobs
-
-The blob prefix match field of a policy is a full or partial blob path, which is used to match the blobs you want the policy actions to apply to. The path must start with the container name. If no prefix match is specified, then the policy will apply to all the blobs in the storage account. The format of the prefix match string is `[container name]/[blob name]`.
-
-Keep in mind the following points about the prefix match string:
--- A prefix match string like *container1/* applies to all blobs in the container named *container1*. A prefix match string of *container1*, without the trailing forward slash character (/), applies to all blobs in all containers where the container name begins with the string *container1*. The prefix will match containers named *container11*, *container1234*, *container1ab*, and so on.-- A prefix match string of *container1/sub1/* applies to all blobs in the container named *container1* that begin with the string *sub1/*. For example, the prefix will match blobs named *container1/sub1/test.txt* or *container1/sub1/sub2/test.txt*.-- The asterisk character `*` is a valid character in a blob name. If the asterisk character is used in a prefix, then the prefix will match blobs with an asterisk in their names. The asterisk doesn't function as a wildcard character.-- The question mark character `?` is a valid character in a blob name. If the question mark character is used in a prefix, then the prefix will match blobs with a question mark in their names. The question mark doesn't function as a wildcard character.-- The prefix match considers only positive (=) logical comparisons. Negative (!=) logical comparisons are ignored.-
-### Is there a way to identify the time at which the policy will be executing?
+## Frequently asked questions (FAQ)
-Unfortunately, there's no way to track the time at which the policy will be executing, as it's a background scheduling process. However, the platform will run the policy once per day.
+See [Lifecycle management FAQ](storage-blob-faq.yml#lifecycle-management-policies).
## Next steps
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
The following table lists some example scenarios to monitor and the proper metri
[!INCLUDE [Blob Storage feature support in Azure Storage accounts](../../../includes/azure-storage-feature-support.md)]
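As a hedged sketch (the subscription, resource group, and account names in the resource ID are placeholders), a metric such as blob service availability can also be retrieved with the Azure CLI:

```azurecli
# Retrieve hourly availability for the blob service of a storage account.
# The resource ID segments below are placeholders.
az monitor metrics list \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>/blobServices/default" \
    --metric "Availability" \
    --interval PT1H
```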
-## FAQ
+## Frequently asked questions (FAQ)
-**Does Azure Storage support metrics for Managed Disks or Unmanaged Disks?**
-
-No. Azure Compute supports the metrics on disks. For more information, see [Per disk metrics for Managed and Unmanaged Disks](https://azure.microsoft.com/blog/per-disk-metrics-managed-disks/).
-
-**What does a dashed line in an Azure Metric chart indicate?**
-
-Some Azure metrics charts, such as the ones that display availability and latency data, use a dashed line to indicate that there's a missing value (also known as null value) between two known time grain data points. For example, if in the time selector you picked `1 minute` time granularity, but the metric was reported at 07:26, 07:27, 07:29, and 07:30, then a dashed line connects 07:27 and 07:29 because there's a minute gap between those two data points. A solid line connects all other data points. The dashed line drops down to zero when the metric uses count and sum aggregation. For the avg, min or max aggregations, a dashed line connects the two nearest known data points. Also, when the data is missing on the rightmost or leftmost side of the chart, the dashed line expands to the direction of the missing data point.
-
-**How do I track availability of my storage account?**
-
-You can configure a resource health alert based on the [Azure Resource Health](../../service-health/resource-health-overview.md) service to track the availability of your storage account. If there are no transactions on the account, then the alert reports based on the health of the Storage cluster where your storage account is located.
+See [Metrics and Logs FAQ](storage-blob-faq.yml#metrics-and-logs).
## Next steps
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
Object replication isn't supported for blobs in the source account that are encr
Customer-managed failover isn't supported for either the source or the destination account in an object replication policy.
+Object replication is not supported for blobs that are uploaded to the Data Lake Storage endpoint (`dfs.core.windows.net`) by using [Data Lake Storage Gen2](/rest/api/storageservices/data-lake-storage-gen2) APIs.
+ ## How object replication works Object replication asynchronously copies block blobs in a container according to rules that you configure. The contents of the blob, any versions associated with the blob, and the blob's metadata and properties are all copied from the source container to the destination container.
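As a hedged sketch of configuring such rules (the account and container names are placeholders, and the exact parameter set is an assumption to verify against the current CLI reference), an object replication policy with a single container rule might be created like this:

```azurecli
# Create an object replication policy on the destination account with one
# rule that replicates blobs from a source container to a destination
# container. Account and container names are placeholders.
az storage account or-policy create \
    --account-name <destination-account-name> \
    --resource-group <destination-resource-group> \
    --source-account <source-account-name> \
    --source-container <source-container-name> \
    --destination-container <destination-container-name>
```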
storage Quickstart Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/quickstart-storage-explorer.md
description: Learn how to use Azure Storage Explorer to create a container and a blob, download the blob to your local computer, and view all of the blobs in the container. -+ Last updated 10/28/2021
storage Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/security-recommendations.md
description: Learn about security recommendations for Blob storage. Implementing this guidance will help you fulfill your security obligations as described in our shared responsibility model. -+ Last updated 04/06/2023
storage Storage Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-attributes.md
description: Supported actions and attributes for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC) for Azure Blob Storage. -+ Previously updated : 05/09/2023 Last updated : 08/10/2023
The following table summarizes the available attributes by source:
> | **Attribute** | `isPrivateLink` | > | **Attribute source** | [Environment](../../role-based-access-control/conditions-format.md#environment-attributes) | > | **Attribute type** | [Boolean](../../role-based-access-control/conditions-format.md#boolean-comparison-operators) |
+> | **Applies to** | For copy operations using the following REST operations, this attribute only applies to the destination storage account, and not the source:<br><br>[Copy Blob](/rest/api/storageservices/copy-blob)<br>[Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url)<br>[Put Blob From URL](/rest/api/storageservices/put-blob-from-url)<br>[Put Block From URL](/rest/api/storageservices/put-block-from-url)<br>[Append Block From URL](/rest/api/storageservices/append-block-from-url)<br>[Put Page From URL](/rest/api/storageservices/put-page-from-url)<br><br>For all other read, write, create, delete, and rename operations, it applies to the storage account that is the target of the operation |
> | **Examples** | `@Environment[isPrivateLink] BoolEquals true`<br/>[Example: Require private link access to read blobs with high sensitivity](storage-auth-abac-examples.md#example-require-private-link-access-to-read-blobs-with-high-sensitivity) | > | **Learn more** | [Use private endpoints for Azure Storage](../common/storage-private-endpoints.md) |
The following table summarizes the available attributes by source:
> | **Attribute** | `Microsoft.Network/privateEndpoints` | > | **Attribute source** | [Environment](../../role-based-access-control/conditions-format.md#environment-attributes) | > | **Attribute type** | [String](../../role-based-access-control/conditions-format.md#string-comparison-operators) |
+> | **Applies to** | For copy operations using the following REST operations, this attribute only applies to the destination storage account, and not the source:<br><br>[Copy Blob](/rest/api/storageservices/copy-blob)<br>[Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url)<br>[Put Blob From URL](/rest/api/storageservices/put-blob-from-url)<br>[Put Block From URL](/rest/api/storageservices/put-block-from-url)<br>[Append Block From URL](/rest/api/storageservices/append-block-from-url)<br>[Put Page From URL](/rest/api/storageservices/put-page-from-url)<br><br>For all other read, write, create, delete, and rename operations, it applies to the storage account that is the target of the operation |
> | **Examples** | `@Environment[Microsoft.Network/privateEndpoints] StringEqualsIgnoreCase '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-group/providers/Microsoft.Network/privateEndpoints/privateendpoint1'`<br/>[Example: Allow read access to a container only from a specific private endpoint](storage-auth-abac-examples.md#example-allow-access-to-a-container-only-from-a-specific-private-endpoint) | > | **Learn more** | [Use private endpoints for Azure Storage](../common/storage-private-endpoints.md) |
The following table summarizes the available attributes by source:
> | **Attribute** | `Microsoft.Network/virtualNetworks/subnets` | > | **Attribute source** | [Environment](../../role-based-access-control/conditions-format.md#environment-attributes) | > | **Attribute type** | [String](../../role-based-access-control/conditions-format.md#string-comparison-operators) |
+> | **Applies to** | For copy operations using the following REST operations, this attribute only applies to the destination storage account, and not the source:<br><br>[Copy Blob](/rest/api/storageservices/copy-blob)<br>[Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url)<br>[Put Blob From URL](/rest/api/storageservices/put-blob-from-url)<br>[Put Block From URL](/rest/api/storageservices/put-block-from-url)<br>[Append Block From URL](/rest/api/storageservices/append-block-from-url)<br>[Put Page From URL](/rest/api/storageservices/put-page-from-url)<br><br>For all other read, write, create, delete, and rename operations, it applies to the storage account that is the target of the operation |
> | **Examples** | `@Environment[Microsoft.Network/virtualNetworks/subnets] StringEqualsIgnoreCase '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-group/providers/Microsoft.Network/virtualNetworks/virtualnetwork1/subnets/default'`<br/>[Example: Allow access to blobs in specific containers from a specific subnet](storage-auth-abac-examples.md#example-allow-access-to-blobs-in-specific-containers-from-a-specific-subnet) | > | **Learn more** | [Subnets](../../virtual-network/concepts-and-best-practices.md) |
storage Storage Auth Abac Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-cli.md
description: Add a role assignment condition to restrict access to blobs using Azure CLI and Azure attribute-based access control (Azure ABAC). -+
storage Storage Auth Abac Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-examples.md
description: Example Azure role assignment conditions for Blob Storage. -+
storage Storage Auth Abac Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-portal.md
description: Add a role assignment condition to restrict access to blobs using the Azure portal and Azure attribute-based access control (Azure ABAC). -+
storage Storage Auth Abac Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-powershell.md
description: Add a role assignment condition to restrict access to blobs using Azure PowerShell and Azure attribute-based access control (Azure ABAC). -+
storage Storage Auth Abac Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-security.md
description: Security considerations for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC). -+ Last updated 05/09/2023
storage Storage Auth Abac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac.md
description: Authorize access to Azure Blob Storage and Azure Data Lake Storage Gen2 using Azure role assignment conditions and Azure attribute-based access control (Azure ABAC). Define conditions on role assignments using Blob Storage attributes. -+ Last updated 04/21/2023
storage Storage Blob Block Blob Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-block-blob-premium.md
Last updated 10/14/2021-+
storage Storage Blob Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-change-feed.md
This section describes known issues and conditions in the current release of the
- Storage account failover of geo-redundant storage accounts with the change feed enabled may result in inconsistencies between the change feed logs and the blob data and/or metadata. For more information about such inconsistencies, see [Change feed and blob data inconsistencies](../common/storage-disaster-recovery-guidance.md#change-feed-and-blob-data-inconsistencies). - You might see 404 (Not Found) and 412 (Precondition Failed) errors reported on the **$blobchangefeed** and **$blobchangefeedsys** containers. You can safely ignore these errors.
-## Feature support
--
-## FAQ
+## Frequently asked questions (FAQ)
-### What is the difference between the change feed and Storage Analytics logging?
+See [Change feed support FAQ](storage-blob-faq.yml#change-feed-support).
-Analytics logs have records of all read, write, list, and delete operations with successful and failed requests across all operations. Analytics logs are best-effort and no ordering is guaranteed.
-
-The change feed is a solution that provides transactional log of successful mutations or changes to your account such as blob creation, modification, and deletions. The change feed guarantees all events to be recorded and displayed in the order of successful changes per blob, thus you do not have to filter out noise from a huge volume of read operations or failed requests. The change feed is fundamentally designed and optimized for application development that require certain guarantees.
-
-### Should I use the change feed or Storage events?
+## Feature support
-You can leverage both features as the change feed and [Blob storage events](storage-blob-event-overview.md) provide the same information with the same delivery reliability guarantee, with the main difference being the latency, ordering, and storage of event records. The change feed publishes records to the log within few minutes of the change and also guarantees the order of change operations per blob. Storage events are pushed in real time and might not be ordered. Change feed events are durably stored inside your storage account as read-only stable logs with your own defined retention, while storage events are transient to be consumed by the event handler unless you explicitly store them. With change feed, any number of your applications can consume the logs at their own convenience using blob APIs or SDKs.
-## Next steps
-- See an example of how to read the change feed by using a .NET client application. See [Process change feed logs in Azure Blob Storage](storage-blob-change-feed-how-to.md).-- Learn about how to react to events in real time. See [Reacting to Blob Storage events](storage-blob-event-overview.md)-- Learn more about detailed logging information for both successful and failed operations for all requests. See [Azure Storage analytics logging](../common/storage-analytics-logging.md)
storage Storage Blob Pageblob Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-pageblob-overview.md
description: An overview of Azure page blobs and their advantages, including use
-+ Last updated 05/11/2023
storage Storage Blob Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-reserved-capacity.md
description: Learn about purchasing Azure Storage reserved capacity to save cost
-+ Last updated 05/17/2021
storage Storage Blob Static Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-static-website.md
To enable metrics on your static website pages, see [Enable metrics on static we
[!INCLUDE [Blob Storage feature support in Azure Storage accounts](../../../includes/azure-storage-feature-support.md)]
-## FAQ
+## Frequently asked questions (FAQ)
-##### Does the Azure Storage firewall work with a static website?
-
-Yes. Storage account [network security rules](../common/storage-network-security.md), including IP-based and VNET firewalls, are supported for the static website endpoint, and may be used to protect your website.
-
-##### Do static websites support Azure Active Directory (Azure AD)?
-
-No. A static website only supports anonymous public read access for files in the **$web** container.
-
-##### How do I use a custom domain with a static website?
-
-You can configure a [custom domain](./static-website-content-delivery-network.md) with a static website by using [Azure Content Delivery Network (Azure CDN)](./storage-custom-domain-name.md#map-a-custom-domain-with-https-enabled). Azure CDN provides consistent low latencies to your website from anywhere in the world.
-
-##### How do I use a custom Secure Sockets Layer (SSL) certificate with a static website?
-
-You can configure a [custom SSL](./static-website-content-delivery-network.md) certificate with a static website by using [Azure CDN](./storage-custom-domain-name.md#map-a-custom-domain-with-https-enabled). Azure CDN provides consistent low latencies to your website from anywhere in the world.
-
-##### How do I add custom headers and rules with a static website?
-
-You can configure the host header for a static website by using [Azure CDN - Verizon Premium](../../cdn/cdn-verizon-premium-rules-engine.md). We'd be interested to hear your feedback [here](https://feedback.azure.com/d365community/idea/694b08ef-3525-ec11-b6e6-000d3a4f0f84).
-
-##### Why am I getting an HTTP 404 error from a static website?
-
-A 404 error can happen if you refer to a file name by using an incorrect case. For example: `Index.html` instead of `index.html`. File names and extensions in the url of a static website are case-sensitive even though they're served over HTTP. This can also happen if your Azure CDN endpoint isn't yet provisioned. Wait up to 90 minutes after you provision a new Azure CDN for the propagation to complete.
-
-##### Why isn't the root directory of the website not redirecting to the default index page?
-
-In the Azure portal, open the static website configuration page of your account and locate the name and extension that is set in the **Index document name** field. Ensure that this name is exactly the same as the name of the file located in the **$web** container of the storage account. File names and extensions in the url of a static website are case-sensitive even though they're served over HTTP.
+See [Static website hosting FAQ](storage-blob-faq.yml#static-website-hosting).
## Next steps
storage Storage Blob User Delegation Sas Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-cli.md
description: Learn how to create a user delegation SAS with Azure Active Directo
-+ Last updated 12/18/2019
storage Storage Blob User Delegation Sas Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-powershell.md
description: Learn how to create a user delegation SAS with Azure Active Directo
-+ Last updated 12/18/2019
storage Storage Blobs Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-introduction.md
description: Use Azure Blob Storage to store massive amounts of unstructured obj
-+ Last updated 03/28/2023
storage Storage Blobs Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-latency.md
description: Understand and measure latency for Blob storage operations, and lea
-+ Last updated 09/05/2019
storage Storage Blobs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-overview.md
description: Azure Blob storage stores massive amounts of unstructured object da
-+ Last updated 11/04/2019
storage Storage Custom Domain Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-custom-domain-name.md
description: Map a custom domain to a Blob Storage or web endpoint in an Azure storage account. -+ Last updated 02/12/2021
storage Storage How To Mount Container Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-how-to-mount-container-linux.md
Title: How to mount Azure Blob Storage as a file system on Linux with BlobFuse v
description: Learn how to mount an Azure Blob Storage container with BlobFuse v1, a virtual file system driver on Linux. -+ Last updated 12/02/2022
storage Storage Manage Find Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-manage-find-blobs.md
This section describes known issues and conditions.
- `Copy Blob` doesn't copy blob index tags from the source blob to the new destination blob. You can specify the tags you want applied to the destination blob during the copy operation.
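As a hedged sketch of that behavior, the following snippet uses the `azure-storage-blob` Python SDK to pass the destination tags explicitly when starting the copy; the container, blob names, source URL, and tag values are placeholders, and the connection string is assumed to be in an environment variable.

```python
import os
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])
dest_blob = service.get_blob_client(container="dest-container", blob="report-copy.csv")

# Tags are not carried over from the source blob, so supply the tags
# you want on the destination as part of the copy operation.
dest_blob.start_copy_from_url(
    "https://<source-account>.blob.core.windows.net/src-container/report.csv?<sas-token>",
    tags={"project": "contoso", "status": "archived"},
)
```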
-## FAQ
+## Frequently asked questions (FAQ)
-**Can blob index help me filter and query content inside my blobs?**
-
-No, if you need to search within your blob data, use query acceleration or Azure Cognitive Search.
-
-**Are there any requirements on index tag values?**
-
-Blob index tags only support string data types, and queries return results in lexicographical order. For numbers, zero-pad the values. For dates and times, store them in an ISO 8601-compliant format.
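To make the string-ordering point concrete, here is a small sketch with the `azure-storage-blob` Python SDK that zero-pads a number and stores an ISO 8601 timestamp before applying the tags; the container, blob, and tag names are hypothetical.

```python
import os
from datetime import datetime, timezone
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"],
    container_name="invoices",
    blob_name="invoice-42.pdf",
)

# Tag values are strings and sort lexicographically, so zero-pad numbers
# and use an ISO 8601 format for dates and times.
tags = {
    "amount": f"{42:08d}",  # "00000042"
    "processedOn": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
}
blob.set_blob_tags(tags)
print(blob.get_blob_tags())
```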
-
-**Are blob index tags and Azure Resource Manager tags related?**
-
-No, Resource Manager tags help organize control plane resources such as subscriptions, resource groups, and storage accounts. Index tags provide blob management and discovery on the data plane.
+See [Blob index tags FAQ](storage-blob-faq.yml#blob-index-tags).
## Next steps
storage Storage Performance Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-performance-checklist.md
description: A checklist of proven practices for use with Blob storage in develo
-+ Last updated 06/01/2023
storage Storage Quickstart Blobs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-cli.md
description: In this quickstart, you learn how to use the Azure CLI upload a blob to Azure Storage, download a blob, and list the blobs in a container. -+ Last updated 01/25/2023
storage Storage Quickstart Blobs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-portal.md
description: In this quickstart, you use the Azure portal in object (Blob) storage. Then you use the Azure portal to upload a blob to Azure Storage, download a blob, and list the blobs in a container. -+ Last updated 01/13/2023
storage Storage Quickstart Blobs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-powershell.md
description: In this quickstart, you use Azure PowerShell in object (Blob) storage. Then you use PowerShell to upload a blob to Azure Storage, download a blob, and list the blobs in a container. -+ Last updated 03/31/2022
storage Storage Samples Blobs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-samples-blobs-cli.md
Last updated 06/13/2017-+
storage Storage Samples Blobs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-samples-blobs-powershell.md
Last updated 11/07/2017-+
storage Versions Manage Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versions-manage-dotnet.md
description: Learn how to use the .NET client library to create a previous versi
-+ Last updated 02/14/2023 ms.devlang: csharp
storage Nfs Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/nfs-comparison.md
Title: Compare NFS access to Azure Files, Blob Storage, and Azure NetApp Files description: Compare NFS access for Azure Files, Azure Blob Storage, and Azure NetApp Files. -+ Last updated 03/20/2023
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md
Last updated 08/08/2023
-+
storage Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/resource-graph-samples.md
Last updated 07/07/2022 -+
storage Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/security-controls-policy.md
Last updated 08/03/2023
-+
storage Security Restrict Copy Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/security-restrict-copy-operations.md
description: Learn how to use the "Permitted scope for copy operations (preview)" Azure storage account setting to limit the source accounts of copy operations to the same tenant or with private links to the same virtual network. -+ Last updated 01/10/2023
storage Shared Key Authorization Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/shared-key-authorization-prevent.md
description: To require clients to use Azure AD to authorize requests, you can d
-+ Last updated 06/06/2023
storage Storage Account Keys Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-keys-manage.md
description: Learn how to view, manage, and rotate your storage account access k
-+ Last updated 03/22/2023
storage Storage Account Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-overview.md
The following table describes the types of storage accounts recommended by Micro
| Standard general-purpose v2 | Blob Storage (including Data Lake Storage<sup>1</sup>), Queue Storage, Table Storage, and Azure Files | Locally redundant storage (LRS) / geo-redundant storage (GRS) / read-access geo-redundant storage (RA-GRS)<br /><br />Zone-redundant storage (ZRS) / geo-zone-redundant storage (GZRS) / read-access geo-zone-redundant storage (RA-GZRS)<sup>2</sup> | Standard storage account type for blobs, file shares, queues, and tables. Recommended for most scenarios using Azure Storage. If you want support for network file system (NFS) in Azure Files, use the premium file shares account type. |
| Premium block blobs<sup>3</sup> | Blob Storage (including Data Lake Storage<sup>1</sup>) | LRS<br /><br />ZRS<sup>2</sup> | Premium storage account type for block blobs and append blobs. Recommended for scenarios with high transaction rates or that use smaller objects or require consistently low storage latency. [Learn more about example workloads.](../blobs/storage-blob-block-blob-premium.md) |
| Premium file shares<sup>3</sup> | Azure Files | LRS<br /><br />ZRS<sup>2</sup> | Premium storage account type for file shares only. Recommended for enterprise or high-performance scale applications. Use this account type if you want a storage account that supports both Server Message Block (SMB) and NFS file shares. |
-| Premium page blobs<sup>3</sup> | Page blobs only | LRS | Premium storage account type for page blobs only. [Learn more about page blobs and sample use cases.](../blobs/storage-blob-pageblob-overview.md) |
+| Premium page blobs<sup>3</sup> | Page blobs only | LRS<br /><br />ZRS<sup>2</sup> | Premium storage account type for page blobs only. [Learn more about page blobs and sample use cases.](../blobs/storage-blob-pageblob-overview.md) |
<sup>1</sup> Data Lake Storage is a set of capabilities dedicated to big data analytics, built on Azure Blob Storage. For more information, see [Introduction to Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md) and [Create a storage account to use with Data Lake Storage Gen2](../blobs/create-data-lake-storage-account.md).
-<sup>2</sup> ZRS, GZRS, and RA-GZRS are available only for standard general-purpose v2, premium block blobs, and premium file shares accounts in certain regions. For more information, see [Azure Storage redundancy](storage-redundancy.md).
+<sup>2</sup> ZRS, GZRS, and RA-GZRS are available only for standard general-purpose v2, premium block blobs, premium file shares, and premium page blobs accounts in certain regions. For more information, see [Azure Storage redundancy](storage-redundancy.md).
<sup>3</sup> Premium performance storage accounts use solid-state drives (SSDs) for low latency and high throughput.
storage Storage Account Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-upgrade.md
description: Upgrade to general-purpose v2 storage accounts using the Azure port
-+ Last updated 04/29/2021
storage Storage Choose Data Transfer Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-choose-data-transfer-solution.md
description: Learn how to choose an Azure solution for data transfer based on da
-+ Last updated 09/25/2020
storage Storage Explorer Command Line Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-command-line-options.md
Option | Description
`--auto-open-dev-tools` | Let the application open the developer tools window as soon as the browser window shows. This option is useful when you want to hit a break point at a line in the start-up code of the browser window.
`--verbosity` | Set the verbosity level of Storage Explorer logging. Supported verbosity levels include `debug`, `verbose`, `info`, `warn`, `error`, and `silent`. For example, `--verbosity=verbose`. When running in production mode, the default verbosity level is `info`. When running in debug mode, the log verbosity level will always be `debug`.
`--log-dir` | Set the directory to save log files. For example, `--log-dir=path_to_a_directory`.
-`--ignore-certificate-errors` | Tell Storage Explorer to ignore certificate errors. This flag can be useful when you need to work in a trusted proxy environment with non-public Certificate Authority. We recommend you to [use system proxy (preview)](./storage-explorer-network.md#use-system-proxy-preview) in such proxy environments and only set this flag if the system proxy doesn't work.
+`--ignore-certificate-errors` | Tell Storage Explorer to ignore certificate errors. This flag can be useful when you need to work in a trusted proxy environment with a non-public Certificate Authority. We recommend that you [use system proxy](./storage-explorer-network.md#use-system-proxy) in such proxy environments and only set this flag if the system proxy doesn't work.
An example of starting Storage Explorer with custom command-line options
storage Storage Explorer Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-network.md
# Network connections in Storage Explorer
-When not connecting to a local emulator, Storage Explorer uses your network to make requests to your storage resources and other Azure and Microsoft services.
+Storage Explorer uses your network to make requests to your storage resources and other Azure and Microsoft services.
## Hostnames accessed by Storage Explorer
Storage Explorer makes requests to various endpoints while in use. The following
- ARM endpoints: - `management.azure.com` (global Azure) - `management.chinacloudapi.cn` (Microsoft Azure operated by 21Vianet)
- - `management.microsoftazure.de` (Azure Germany)
- `management.usgovcloudapi.net` (Azure US Government) - Login endpoints: - `login.microsoftonline.com` (global Azure)
- - `login.chinacloudapi.cn` (Azure operated by 21Vianet)
- - `login.microsoftonline.de` (Azure Germany)
+ - `login.chinacloudapi.cn` (Microsoft Azure operated by 21Vianet)
- `login.microsoftonline.us` (Azure US Government) - Graph endpoints: - `graph.windows.net` (global Azure) - `graph.chinacloudapi.cn` (Microsoft Azure operated by 21Vianet)
- - `graph.cloudapi.de` (Azure Germany)
- `graph.windows.net` (Azure US Government) - Azure Storage endpoints: - `(blob|file|queue|table|dfs).core.windows.net` (global Azure) - `(blob|file|queue|table|dfs).core.chinacloudapi.cn` (Microsoft Azure operated by 21Vianet)
- - `(blob|file|queue|table|dfs).core.cloudapi.de` (Azure Germany)
- `(blob|file|queue|table|dfs).core.usgovcloudapi.net` (Azure US Government) - Storage Explorer updating: `storageexplorerpublish.blob.core.windows.net` - Microsoft link forwarding:
Storage Explorer makes requests to various endpoints while in use. The following
## Proxy sources Storage Explorer has several options for how/where it can source the information needed to connect to your proxy. To change which option is being used, go to **Settings** (gear icon on the left vertical toolbar) > **Application** > **Proxy**. Once you are at the proxy section of settings, you can select how/where you want Storage Explorer to source your proxy settings:-- Do not use proxy-- Use environment variables-- Use app proxy settings-- Use system proxy (preview)
+- [Do not use proxy](#do-not-use-proxy)
+- [Use environment variables](#use-environment-variables)
+- [Use app proxy settings](#use-app-proxy-settings)
+- [Use system proxy](#use-system-proxy)
+
+In some situations, Storage Explorer may automatically change the proxy source and other proxy related settings. To disable this behavior, go to **Settings** (gear icon on the left vertical toolbar) > **Application** > **Proxy** > **Auto Manage Proxy Settings**. Disabling this setting will prevent Storage Explorer from changing any manually configured proxy settings.
### Do not use proxy
All settings other than credentials can be managed from either:
To set credentials, you must go to the Proxy Settings dialog (**Edit** > **Configure Proxy**).
-### Use system proxy (preview)
+### Use system proxy
When this option is selected, Storage Explorer will use your OS proxy settings. More specifically, it will result in network calls being made using the Chromium networking stack. The Chromium networking stack is much more robust than the NodeJS networking stack normally used by Storage Explorer. Here's a snippet from [Chromium's documentation](https://www.chromium.org/developers/design-documents/network-settings) on what it can do:
If your proxy server requires credentials, and those credentials aren't configur
To set credentials, you must go to the Proxy Settings dialog (**Edit** > **Configure Proxy**).
-This option is in preview because not all features currently support system proxy. See [features that do not support system proxy](#features-that-do-not-support-system-proxy) for a complete list of features which do not support it. When system proxy is enabled, features that don't support system proxy won't make any attempt to connect to a proxy.
-
-If you come across an issue while using system proxy with a supported feature, [open an issue on GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues/new).
- ## Proxy server authentication If you have configured Storage Explorer to source proxy settings from **environment variables** or **app proxy settings**, then only proxy servers that use basic authentication are supported.
If you have configured Storage Explorer to use **system proxy**, then proxy serv
## Which proxy source should I choose?
-If you're using features not listed [here](#features-that-do-not-support-system-proxy), then you should first try using [**system proxy**](#use-system-proxy-preview). If you come across an issue while using system proxy with a supported feature, [open an issue on GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues/new).
-
-If you're using features that don't support system proxy, then [**app settings**](#use-app-proxy-settings) is probably the next best option. The GUI-based experience for configuring the proxy configuration helps reduce the chance of entering your proxy information correctly. However, if you already have proxy environment variables configured, then it might be better to use [**environment variables**](#use-environment-variables).
+You should first try using [**system proxy**](#use-system-proxy). After that, [**app settings**](#use-app-proxy-settings) is the next best option. The GUI-based experience for configuring your proxy settings helps reduce the chance of entering your proxy information incorrectly. However, if you already have proxy environment variables configured, then it might be better to use [**environment variables**](#use-environment-variables).
## AzCopy proxy usage
Currently, AzCopy only supports proxy servers that use **basic authentication**.
By default, Storage Explorer uses the NodeJS networking stack. NodeJS ships with a predefined list of trusted SSL certificates. Some networking technologies, such as proxy servers or anti-virus software, inject their own SSL certificates into network traffic. These certificates are often not present in NodeJS' certificate list. NodeJS won't trust responses that contain such a certificate. When NodeJS doesn't trust a response, then Storage Explorer will receive an error. You have multiple options for resolving such errors:-- Use [**system proxy**](#use-system-proxy-preview) as your proxy source.
+- Use [**system proxy**](#use-system-proxy) as your proxy source.
- Import a copy of the SSL certificate/s causing the error/s. - Disable SSL certificate validation. (**not recommended**)
-## Features that do not support system proxy
-
-The following is a list of features that do not support **system proxy**:
--- Storage Account Features
- - Setting default access tier
-- Table Features
- - Manage access policies
- - Configure CORS
- - Generate SAS
- - Copy & Paste Table
- - Clone Table
-- All ADLS Gen1 features- ## Next steps - [Troubleshoot proxy issues](./storage-explorer-troubleshooting.md#proxy-issues)
storage Storage Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-plan-manage-costs.md
Some actions, such as changing the default access tier of your account, can lead
| Monitoring | Enabling Storage Analytics logs (classic logs) | Storage analytics logs can accumulate in your account over time if the retention policy is not set. Make sure to set the retention policy to avoid log buildup which can lead to unexpected capacity charges.<br><br>For more information, see [Modify log data retention period](manage-storage-analytics-logs.md#modify-log-data-retention-period) |
| Protocols | Enabling SSH File Transfer Protocol (SFTP) support | Enabling the SFTP endpoint incurs an hourly cost. To avoid passive charges, consider enabling SFTP only when you are actively using it to transfer data.<br><br> For guidance about how to enable and then disable SFTP support, see [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](../blobs/secure-file-transfer-protocol-support-how-to.md). |
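As a sketch of the retention-policy advice for classic logs, the `azure-storage-blob` Python SDK can set it along these lines; the seven-day retention period and the connection-string environment variable are assumptions.

```python
import os
from azure.storage.blob import BlobServiceClient, BlobAnalyticsLogging, RetentionPolicy

service = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])

# Keep classic Storage Analytics logs for 7 days so they don't accumulate
# indefinitely and drive up capacity charges.
logging_settings = BlobAnalyticsLogging(
    version="1.0",
    read=True,
    write=True,
    delete=True,
    retention_policy=RetentionPolicy(enabled=True, days=7),
)
service.set_service_properties(analytics_logging=logging_settings)
```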
-## FAQ
+## Frequently asked questions (FAQ)
-**If I use Azure Storage for only a few days a month, is the cost prorated?**
-
-Storage capacity is billed in units of the average daily amount of data stored, in gigabytes (GB), over a monthly period. For example, if you consistently used 10 GB of storage for the first half of the month, and none for the second half of the month, you would be billed for your average usage of 5 GB of storage.
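The same arithmetic, spelled out in a few lines of plain Python with the hypothetical numbers from the answer above:

```python
# 10 GB stored for the first 15 days of a 30-day month, nothing afterwards.
daily_usage_gb = [10] * 15 + [0] * 15

# Capacity is billed on the average daily amount stored over the month.
average_gb = sum(daily_usage_gb) / len(daily_usage_gb)
print(average_gb)  # 5.0 -> billed as if 5 GB were stored for the whole month
```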
+See [Managing costs FAQ](../blobs/storage-blob-faq.yml#managing-costs).
## Next steps
storage Storage Solution Large Dataset Low Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-solution-large-dataset-low-network.md
description: Learn how to choose an Azure solution for data transfer when you ha
-+ Last updated 04/01/2019
storage Storage Solution Large Dataset Moderate High Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-solution-large-dataset-moderate-high-network.md
description: Learn how to choose an Azure solution for data transfer when you ha
-+ Last updated 06/28/2022
storage Storage Solution Periodic Data Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-solution-periodic-data-transfer.md
description: Learn how to choose an Azure solution for data transfer when you ar
-+ Last updated 07/21/2021
storage Storage Solution Small Dataset Low Moderate Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-solution-small-dataset-low-moderate-network.md
description: Learn how to choose an Azure solution for data transfer when you ha
-+ Last updated 12/05/2018
storage Videos Azure Files And File Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/videos-azure-files-and-file-sync.md
If you're new to Azure Files and File Sync or looking to deepen your understandi
:::row::: :::column:::
- <iframe width="560" height="315" src="https://www.youtube.com/embed/TOHaNJpAOfc" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+ > [!VIDEO https://www.youtube.com/embed/TOHaNJpAOfc]
:::column-end::: :::column::: **How Azure Files can help protect against ransomware and accidental data loss**
If you're new to Azure Files and File Sync or looking to deepen your understandi
:::row::: :::column:::
- <iframe width="560" height="315" src="https://www.youtube.com/embed/jd49W33DxkQ" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+ > [!VIDEO https://www.youtube.com/embed/jd49W33DxkQ]
:::column-end::: :::column::: **Domain join Azure file share with on-premises Active Directory and replace your file server with Azure file share**
If you're new to Azure Files and File Sync or looking to deepen your understandi
:::row::: :::column:::
- <iframe width="560" height="315" src="https://www.youtube.com/embed/bmRZi9iGsK0" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+ > [!VIDEO https://www.youtube.com/embed/bmRZi9iGsK0]
:::column-end::: :::column::: **Mount an Azure file share in Windows**
If you're new to Azure Files and File Sync or looking to deepen your understandi
:::row::: :::column:::
- <iframe width="560" height="315" src="https://www.youtube.com/embed/44qVRZg-bMA?list=PLEq-KSMM-P-0jRrVF5peNCA0GbBZrOhE1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+ > [!VIDEO https://www.youtube.com/embed/44qVRZg-bMA?list=PLEq-KSMM-P-0jRrVF5peNCA0GbBZrOhE1]
:::column-end::: :::column::: **NFS 4.1 for Azure file shares**
If you're new to Azure Files and File Sync or looking to deepen your understandi
:::row::: :::column:::
- <iframe width="560" height="315" src="https://www.youtube.com/embed/V43p6qIhFkc?list=PLEq-KSMM-P-0jRrVF5peNCA0GbBZrOhE1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+ > [!VIDEO https://www.youtube.com/embed/V43p6qIhFkc?list=PLEq-KSMM-P-0jRrVF5peNCA0GbBZrOhE1]
:::column-end::: :::column::: **How to set up Azure File Sync**
If you're new to Azure Files and File Sync or looking to deepen your understandi
:::row::: :::column:::
- <iframe width="560" height="315" src="https://www.youtube.com/embed/uStaB09y6TE" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+ > [!VIDEO https://www.youtube.com/embed/uStaB09y6TE]
:::column-end::: :::column::: **Integrating HPC Pack with Azure Files** :::column-end:::
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
The following release notes are for Azure File Sync version 16.0.0.0 (released J
### Improvements and issues that are fixed - Improved Azure File Sync service availability - Azure File Sync is now a zone-redundant service which means an outage in a zone has limited impact while improving the service resiliency to minimize customer impact. To fully leverage this improvement, configure your storage accounts to use zone-redundant storage (ZRS) or Geo-zone redundant storage (GZRS) replication. To learn more about different redundancy options for your storage accounts, see [Azure Files redundancy](../files/files-redundancy.md).
- > [!Note]
- > Azure File Sync is zone-redundant in all regions that [support zones](../../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support) except US Gov Virginia.
-
- Immediately run server change enumeration to detect file changes that were missed on the server - Azure File Sync uses the [Windows USN journal](/windows/win32/fileio/change-journals) feature on Windows Server to immediately detect files that were changed and upload them to the Azure file share. If file changes are missed due to journal wrap or other issues, the files will not sync to the Azure file share until the changes are detected. Azure File Sync has a server change enumeration job that runs every 24 hours on the server endpoint path to detect changes that were missed by the USN journal. If you don't want to wait until the next server change enumeration job runs, you can now use the Invoke-StorageSyncServerChangeDetection PowerShell cmdlet to immediately run server change enumeration on a server endpoint path.
storage Geo Redundant Storage For Large File Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/geo-redundant-storage-for-large-file-shares.md
description: Azure Files geo-redundancy for large file shares (preview) signific
Previously updated : 07/21/2023 Last updated : 08/13/2023
Azure Files geo-redundancy for large file shares preview is currently available
- Japan West - Korea Central - Korea South
+- North Central US
- Norway East - Norway West - South Africa North - South Africa West
+- South Central US
- South India - Southeast Asia - Sweden Central
Azure Files geo-redundancy for large file shares preview is currently available
- UAE North - UK South - UK West
+- US Gov Arizona
+- US Gov Texas
+- US Gov Virginia
- West Central US - West India - West US 2
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
description: Learn how to enable Active Directory Domain Services authentication
Previously updated : 08/02/2023 Last updated : 08/11/2023 recommendations: false
Get-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAcco
The cmdlets should return the key value. Once you have the kerb1 key, create either a [computer account](/powershell/module/activedirectory/new-adcomputer) or [service account](/powershell/module/activedirectory/new-adserviceaccount) in AD under your OU, and use the key as the password for the AD identity.
-1. Set the SPN to **cifs/your-storage-account-name-here.file.core.windows.net** either in the AD GUI or by running the `Setspn` command from the Windows command line as administrator (remember to replace the example text with your storage account name and `<ADAccountName>` with your AD account name):
+1. Set the SPN to **cifs/your-storage-account-name-here.file.core.windows.net** either in the AD GUI or by running the `Setspn` command from the Windows command line as administrator (remember to replace the example text with your storage account name and `<ADAccountName>` with your AD account name).
```shell
Setspn -S cifs/your-storage-account-name-here.file.core.windows.net <ADAccountName>
```
-2. Set the AD account password to the value of the kerb1 key (you must have AD PowerShell cmdlets installed and execute the cmdlet in PowerShell 5.1 with elevated privileges):
+2. Modify the UPN to match the SPN for the AD object (you must have AD PowerShell cmdlets installed and execute the cmdlets in PowerShell 5.1 with elevated privileges).
+
+ ```powershell
+ Set-ADUser -Identity $UserSamAccountName -UserPrincipalName cifs/<StorageAccountName>.file.core.windows.net@<UPN suffixes>
+ ```
+
+3. Set the AD account password to the value of the kerb1 key.
```powershell
Set-ADAccountPassword -Identity servername$ -Reset -NewPassword (ConvertTo-SecureString -AsPlainText "kerb1_key_value_here" -Force)
```
storage Storage Blobs Container Calculate Billing Size Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-billing-size-powershell.md
description: Calculate the total size of a container in Azure Blob storage for b
-+ ms.devlang: powershell
storage Storage Blobs Container Calculate Size Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-size-cli.md
Title: Azure CLI Script Sample - Calculate blob container size
description: Calculate the size of a container in Azure Blob storage by totaling the size of the blobs in the container. -+ ms.devlang: azurecli Last updated 03/01/2022
storage Storage Blobs Container Calculate Size Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-size-powershell.md
description: Calculate the size of a container in Azure Blob Storage by totaling
-+ ms.devlang: powershell Last updated 12/04/2019
storage Storage Blobs Container Delete By Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-delete-by-prefix-cli.md
Title: Azure CLI Script Sample - Delete containers by prefix
description: Delete Azure Storage blob containers based on a container name prefix, then clean up the deployment. See help links for commands used in the script sample. -+ ms.devlang: azurecli Last updated 03/01/2022
storage Storage Blobs Container Delete By Prefix Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-delete-by-prefix-powershell.md
description: Read an example that shows how to delete Azure Blob storage based o
-+ ms.devlang: powershell Last updated 06/13/2017
storage Storage Common Rotate Account Keys Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-common-rotate-account-keys-cli.md
Title: Azure CLI Script Sample - Rotate storage account access keys
description: Create an Azure Storage account, then retrieve and rotate its account access keys. -+ ms.devlang: azurecli Last updated 03/02/2022
storage Storage Common Rotate Account Keys Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-common-rotate-account-keys-powershell.md
description: Create an Azure Storage account, then retrieve and rotate one of it
-+ ms.devlang: powershell Last updated 12/04/2019
storage Datadobi Solution Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/backup-archive-disaster-recovery/datadobi/datadobi-solution-guide.md
See the following Datadobi documentation for further detail:
Datadobi has made it easy to deploy their solution in Azure to protect Azure Virtual Machines and many other Azure services. For more information, see the following reference: -- [Protect File Data in Azure with DobiProtect](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi-dobiprotect?tab=overview)
+- [Protect File Data in Azure with DobiProtect](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview)
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/backup-archive-disaster-recovery/partner-overview.md
This article highlights Microsoft partners that are integrated with Azure Storag
| Partner | Description | Website/product link | | - | -- | -- | |![Commvault company logo](./medi)|
-|![Datadobi company logo](./media/datadob-logo.png) |**Datadobi**<br> Datadobi can optimize your unstructured storage environments. DobiProtect helps you keep a "golden copy" of your most business-critical network attached storage (NAS) data on Azure. This helps protect against cyberthreats, ransomware, accidental deletions, and software vulnerabilities. To keep storage costs to a minimum, select just the data that you'll need when disaster strikes. When disaster does occur, recover your data entirely, restore just a subset of data, or fail over to your golden copy. |[Partner page](https://datadobi.com/partners/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobiprotect?tab=Overview)|
+|![Datadobi company logo](./media/datadob-logo.png) |**Datadobi**<br> Datadobi can optimize your unstructured storage environments. DobiProtect helps you keep a "golden copy" of your most business-critical network attached storage (NAS) data on Azure. This helps protect against cyberthreats, ransomware, accidental deletions, and software vulnerabilities. To keep storage costs to a minimum, select just the data that you'll need when disaster strikes. When disaster does occur, recover your data entirely, restore just a subset of data, or fail over to your golden copy. |[Partner page](https://datadobi.com/partners/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview)|
|![Rubrik company logo](./media/rubrik-logo.png) |**Rubrik**<br>Rubrik and Microsoft deliver Zero Trust Data Security solutions. These solutions keep your data safe and enable business recovery in the face of cyber attacks and operational failures. Rubrik tightly integrates with Microsoft Azure Storage to ensure your data and applications are available for rapid recovery, immutable and trusted to keep your business running without interruptions. Choose from the multiple solutions offered by Rubrik to protect your data and applications across on-premises and Microsoft Azure.|[Partner page](https://www.rubrik.com/partners/technology-partners/microsoft)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/rubrik_inc.rubrik_cloud_data_management?tab=Overview)| ![Tiger Technology company logo](./media/tiger-logo.png) |**Tiger Technology**<br>Tiger Technology offers high-performance, secure, data management software solutions. Tiger Technology enables organizations of any size to manage their digital assets on-premises, in any public cloud, or through a hybrid model. <br><br> Tiger Bridge is a non-proprietary, software-only data, and storage management system. It blends on-premises and multi-tier cloud storage into a single space, and enables hybrid workflows. This transparent file server extension lets you benefit from Azure scale and services, while preserving legacy applications and workflows. Tiger Bridge addresses several data management challenges, including: file server extension, disaster recovery, cloud migration, backup and archive, remote collaboration, and multi-site sync. It also offers continuous data protection. |[Partner page](https://www.tiger-technology.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tiger-technology.tigerbridge_vm)| | ![Veeam company logo](./medi)|
To learn more about some of our other partners, see:
- [Analytics and big data partners](..\analytics\partner-overview.md) - [Container solution partners](..\container-solutions\partner-overview.md) - [Data management and migration partners](..\data-management\partner-overview.md)-- [Primary and secondary storage partners](..\primary-secondary-storage\partner-overview.md).
+- [Primary and secondary storage partners](..\primary-secondary-storage\partner-overview.md).
storage Dobimigrate Quick Start Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/dobimigrate-quick-start-guide.md
In the [Azure portal](https://portal.azure.com/) search for **support** in the
Datadobi has made it easy to deploy their solution in Azure to protect Azure Virtual Machines and many other Azure services. For more information, see the following references: -- [Migrate File Data to Azure with DobiMigrate](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=overview)
+- [Migrate File Data to Azure with DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview)
## Next steps
Learn more by visiting our guides:
- [Storage migration overview](../../../common/storage-migration-overview.md) - [DobiMigrate User Manual](https://downloads.datadobi.com/NAS/olh/latest/dobimigrate.html) - [DobiMigrate Prerequisites Guide](https://downloads.datadobi.com/NAS/guides/latest/prerequisites.html)-- [DobiMigrate Install Guide](https://downloads.datadobi.com/NAS/guides/latest/installguide.html)
+- [DobiMigrate Install Guide](https://downloads.datadobi.com/NAS/guides/latest/installguide.html)
storage Migration Tools Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/migration-tools-comparison.md
The following comparison matrix shows basic functionality of different tools tha
| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | | |--|--||||
-| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) |
+| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) |
| **Support provided by** | Microsoft | [Datadobi](https://support.datadobi.com/s/)<sub>1</sub> | [Data Dynamics](https://www.datdynsupport.com/)<sub>1</sub> | [Komprise](https://komprise.freshdesk.com/support/home)<sub>1</sub> | [Atempo](https://www.atempo.com/support-en/contacting-support/)<sub>1</sub>| | **Azure Files support (all tiers)** | Yes | Yes | Yes | Yes | Yes | | **Azure NetApp Files support** | No | Yes | Yes | Yes | Yes |
The following comparison matrix shows basic functionality of different tools tha
| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | | |--|--||||
-| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Atempo](https://www.atempo.com/support-en/contacting-support/)|
+| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Atempo](https://www.atempo.com/support-en/contacting-support/)|
| **SMB 2.1** | Yes | Yes | Yes | Yes | Yes | | **SMB 3.0** | Yes | Yes | Yes | Yes | Yes | | **SMB 3.1** | Yes | Yes | Yes | Yes | Yes |
The following comparison matrix shows basic functionality of different tools tha
| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | | |--|--||||
-| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Atempo](https://www.atempo.com/support-en/contacting-support/)|
+| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Atempo](https://www.atempo.com/support-en/contacting-support/)|
| **UID / SID remapping** | No | Yes | Yes | No | No | | **Protocol ACL remapping** | No | No | No | No | No | | **DFS Support** | Yes | Yes | Yes | Yes | No |
The following comparison matrix shows basic functionality of different tools tha
| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | | |--|--||||
-| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Atempo](https://www.atempo.com/support-en/contacting-support/)|
+| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Atempo](https://www.atempo.com/support-en/contacting-support/)|
| **Capacity** | No | Yes | Yes | Yes | Yes | | **# of files / folders** | No | Yes | Yes | Yes | Yes | | **Age distribution over time** | No | Yes | Yes | Yes | Yes |
The following comparison matrix shows basic functionality of different tools tha
| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | | |--|--||| |
-| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Atempo](https://www.atempo.com/support-en/contacting-support/)|
+| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Atempo](https://www.atempo.com/support-en/contacting-support/)|
| **BYOL** | N / A | Yes | Yes | Yes | Yes | | **Azure Commitment** | Yes | Yes | Yes | Yes | No |
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/partner-overview.md
This article highlights Microsoft partner companies integrated with Azure Storag
|![Cirrus company logo](./media/cirrus-logo.jpg) |**Cirrus Data**<br>Cirrus Data Solutions is a block storage data migration solution for both on-premises and cloud environments. An end-to-end approach allows you to migrate your data from on-premises to the cloud, between storage tiers within the cloud, and seamlessly migrate between public clouds. |[Partner Page](https://www.cirrusdata.com/cloud-migration/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/cirrusdatasolutionsinc1618222951068.cirrus-migrate-cloud)| |![Commvault company logo](./media/commvault-logo.jpg) |**Commvault**<br>Optimize, protect, migrate, and index your data using Microsoft infrastructure with Commvault. Take control of your data with Commvault Complete Data Protection, the Microsoft-centric and, Azure-centric data management solution. Commvault provides the tools you need to manage, migrate, access, and recover your data no matter where it resides, while reducing cost and risk.|[Partner Page](https://www.commvault.com/complete-data-protection)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/commvault.commvault)| |![Data Dynamics company logo](./media/datadyn-logo.png) |**Data Dynamics**<br>Data Dynamics provides enterprise solutions to manage unstructured data for hybrid and multi-cloud environments. Their Unified Unstructured Data Management Platform uses analytics and automation to help you intelligently and efficiently move data from heterogenous storage environments (SMB, NFS, or S3 Object) into Azure. The platform provides seamless integration, enterprise scale, and performance that enables the efficient management of data for hybrid and multi-cloud environments. Use cases include: intelligent cloud migration, disaster recovery, archive, backup, and infrastructure optimization and data management. |[Partner page](https://www.datadynamicsinc.com/partners-2/)|
-![Datadobi company logo](./media/datadob-logo.png) |**Datadobi**<br> Datadobi can optimize your unstructured storage environments. DobiMigrate is enterprise-class software that gets your file and object data to Azure safely, quickly, easily, and cost effectively. Focus on value-added activities instead of time-consuming migration tasks. Grow your storage footprint without CAPEX investments.|[Partner page](https://datadobi.com/partners/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview)|
+![Datadobi company logo](./media/datadob-logo.png) |**Datadobi**<br> Datadobi can optimize your unstructured storage environments. DobiMigrate is enterprise-class software that gets your file and object data to Azure safely, quickly, easily, and cost effectively. Focus on value-added activities instead of time-consuming migration tasks. Grow your storage footprint without CAPEX investments.|[Partner page](https://datadobi.com/partners/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview)|
![Informatica company logo](./media/informatica-logo.png) |**Informatica**<br>Informatica’s enterprise-scale, cloud-native data management platform automates and accelerates the discovery, delivery, quality, and governance of enterprise data on Azure. AI-powered, metadata-driven data integration, and data quality and governance capabilities enable you to modernize analytics and accelerate your move to a data warehouse or to a data lake on Azure.|[Partner page](https://www.informatica.com/azure)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.annualiics?tab=Overview)| |![Komprise company logo](./media/komprise-logo.png) |**Komprise**<br>Komprise enables visibility across silos to manage file and object data and save costs. Komprise Intelligent Data Management software lets you consistently analyze, move, and manage data across clouds.<br><br>Komprise helps you to analyze data growth across any network attached storage (NAS) and object storage to identify significant cost savings. You can also archive cold data to Azure, and runs data migrations, transparent data archiving, and data replications to Azure Files and Blob storage. Patented Komprise Transparent Move Technology enables you to archive files without changing user access. Global search and tagging enables virtual data lakes for AI, big data, and machine learning applications. |[Partner page](https://www.komprise.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) |![Peer company logo](./media/peer-logo.png) |**Peer Software**<br>Peer Software provides real-time file management solutions for hybrid and multi-cloud environments. Key use cases include high availability for user and application data across branch offices, Azure regions and availability zones, file sharing with version integrity, and migration to file or object storage with minimal cutover downtime. |[Partner page](https://go.peersoftware.com/azure_file_management_solutions)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/peer-software-inc.peergfs?tab=overview)
stream-analytics Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/create-cluster.md
Previously updated : 05/10/2022 Last updated : 08/11/2023 # Quickstart: Create a dedicated Azure Stream Analytics cluster using Azure portal
-Use the Azure portal to create an Azure Stream Analytics cluster. A [Stream Analytics cluster](cluster-overview.md) is a single-tenant deployment that can be used for complex and demanding streaming use cases. You can run multiple Stream Analytics jobs on a Stream Analytics cluster.
+A [Stream Analytics cluster](cluster-overview.md) is a single-tenant deployment that can be used for complex and demanding streaming use cases. You can run multiple Stream Analytics jobs on a Stream Analytics cluster. This article shows you how to use the Azure portal to create an Azure Stream Analytics cluster.
## Prerequisites
In this section, you create a Stream Analytics cluster resource.
|Resource Group|Resource group name|Select a resource group, or select **Create new**, then enter a unique name for the new resource group. | |Cluster Name|A unique name|Enter a name to identify your Stream Analytics cluster.| |Location|The region closest to your data sources and sinks|Select a geographic location to host your Stream Analytics cluster. Use the location that is closest to your data sources and sinks for low latency analytics.|
- |Streaming Unit Capacity|36 through 396 |Determine the size of the cluster by estimating how many Stream Analytics job you plan to run and the total SUs the job will require. You can start with 36 SUs and later scale up or down as required.|
+ |Streaming Unit Capacity| 12 through 132 |Determine the size of the cluster by estimating how many Stream Analytics jobs you plan to run and the total SUs the jobs will require. You can start with 12 SUs and later scale up or down as required.|
- ![Create cluster](./media/create-cluster/create-cluster.png)
+ :::image type="content" source="./media/create-cluster/create-cluster.png" alt-text="Screenshot showing the Create Stream Analytics cluster page. ":::
1. Select **Review + create**. You can skip the **Tags** section.
synapse-analytics Overview Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/overview-cognitive-services.md
There are a few ways that you can use a subset of Azure AI services with your da
- The "Azure AI services" wizard in Synapse Analytics generates PySpark code in a Synapse notebook that connects to a with Azure AI services using data in a Spark table. Then, using pretrained machine learning models, the service does the work for you to add AI to your data. Check out [Sentiment analysis wizard](tutorial-cognitive-services-sentiment.md) and [Anomaly detection wizard](tutorial-cognitive-services-anomaly.md) for more details. -- Synapse Machine Learning ([SynapseML](https://github.com/microsoft/SynapseML)) allows you to build powerful and highly scalable predictive and analytical models from various Spark data sources. Synapse Spark provide built-in SynapseML libraries including [synapse.ml.cognitive](https://github.com/microsoft/SynapseML/tree/master/notebooks/features/cognitive_services).
+- Synapse Machine Learning ([SynapseML](https://github.com/microsoft/SynapseML)) allows you to build powerful and highly scalable predictive and analytical models from various Spark data sources. Synapse Spark provides built-in SynapseML libraries, including synapse.ml.cognitive (a minimal sketch follows this list).
- Starting from the PySpark code generated by the wizard, or the example SynapseML code provided in the tutorial, you can write your own code to use other Azure AI services with your data. See [What are Azure AI services?](../../ai-services/what-are-ai-services.md) for more information about available services.
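As a rough, hedged sketch of calling one of those prebuilt models through `synapse.ml.cognitive` on a Synapse Spark pool (the key placeholder, region, and sample data are illustrative assumptions, not values from this article):

```python
from pyspark.sql import SparkSession
from synapse.ml.cognitive import TextSentiment

spark = SparkSession.builder.getOrCreate()

# A tiny sample DataFrame; in practice this would be a Spark table or another data source.
df = spark.createDataFrame(
    [("The new dashboard is great",), ("The migration was painful",)],
    ["text"],
)

sentiment = (
    TextSentiment()
    .setSubscriptionKey("<your-azure-ai-services-key>")  # placeholder key
    .setLocation("eastus")                               # region of the resource (illustrative)
    .setTextCol("text")
    .setOutputCol("sentiment")
    .setErrorCol("error")
)

sentiment.transform(df).select("text", "sentiment").show(truncate=False)
```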
The tutorial, [Pre-requisites for using Azure AI services in Azure Synapse](tuto
### Vision
-[**Computer Vision**](https://azure.microsoft.com/services/cognitive-services/computer-vision/)
+[**Computer Vision**](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-computer-vision/)
- Describe: provides description of an image in human readable language ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/DescribeImage.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.DescribeImage)) - Analyze (color, image type, face, adult/racy content): analyzes visual features of an image ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/AnalyzeImage.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AnalyzeImage)) - OCR: reads text from an image ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/OCR.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.OCR))
The tutorial, [Pre-requisites for using Azure AI services in Azure Synapse](tuto
- Recognize domain-specific content: recognizes domain-specific content (celebrity, landmark) ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/RecognizeDomainSpecificContent.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.RecognizeDomainSpecificContent)) - Tag: identifies list of words that are relevant to the input image ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/TagImage.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.TagImage))
-[**Face**](https://azure.microsoft.com/services/cognitive-services/face/)
+[**Face**](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-face-recognition/)
- Detect: detects human faces in an image ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/DetectFace.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.DetectFace)) - Verify: verifies whether two faces belong to a same person, or a face belongs to a person ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/VerifyFaces.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.VerifyFaces)) - Identify: finds the closest matches of the specific query person face from a person group ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/IdentifyFaces.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.IdentifyFaces))
display(
## Computer Vision sample
-[Computer Vision](https://azure.microsoft.com/services/cognitive-services/computer-vision/) analyzes images to identify structure such as faces, objects, and natural-language descriptions. In this sample, we tag a list of images. Tags are one-word descriptions of things in the image like recognizable objects, people, scenery, and actions.
+[Computer Vision](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-computer-vision/) analyzes images to identify structure such as faces, objects, and natural-language descriptions. In this sample, we tag a list of images. Tags are one-word descriptions of things in the image like recognizable objects, people, scenery, and actions.
```python
synapse-analytics Synapse Machine Learning Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/synapse-machine-learning-library.md
A unified API standardizes many tools, frameworks, algorithms and streamlines th
### Use pre-built intelligent models
-Many tools in SynapseML don't require a large labeled training dataset. Instead, SynapseML provides simple APIs for pre-built intelligent services, such as Azure AI services, to quickly solve large-scale AI challenges related to both business and research. SynapseML enables developers to embed over 50 different state-of-the-art ML services directly into their systems and databases. These ready-to-use algorithms can parse a wide variety of documents, transcribe multi-speaker conversations in real time, and translate text to over 100 different languages. For more examples of how to use pre-built AI to solve tasks quickly, see [the SynapseML "cognitive" examples](https://microsoft.github.io/SynapseML/docs/features/cognitive_services/CognitiveServices%20-%20Overview/).
+Many tools in SynapseML don't require a large labeled training dataset. Instead, SynapseML provides simple APIs for pre-built intelligent services, such as Azure AI services, to quickly solve large-scale AI challenges related to both business and research. SynapseML enables developers to embed over 50 different state-of-the-art ML services directly into their systems and databases. These ready-to-use algorithms can parse a wide variety of documents, transcribe multi-speaker conversations in real time, and translate text to over 100 different languages. For more examples of how to use pre-built AI to solve tasks quickly, see [the SynapseML "cognitive" examples](https://microsoft.github.io/SynapseML/docs/Get%20Started/Set%20up%20Cognitive%20Services/).
-To make SynapseML's integration with Azure AI services fast and efficient SynapseML introduces many optimizations for service-oriented workflows. In particular, SynapseML automatically parses common throttling responses to ensure that jobs donΓÇÖt overwhelm backend services. Additionally, it uses exponential back-offs to handle unreliable network connections and failed responses. Finally, SparkΓÇÖs worker machines stay busy with new asynchronous parallelism primitives for Spark. Asynchronous parallelism allows worker machines to send requests while waiting on a response from the server and can yield a tenfold increase in throughput.
+To make SynapseML's integration with Azure AI services fast and efficient, SynapseML introduces many optimizations for service-oriented workflows. In particular, SynapseML automatically parses common throttling responses to ensure that jobs don't overwhelm backend services. Additionally, it uses exponential back-offs to handle unreliable network connections and failed responses. Finally, Spark's worker machines stay busy with new asynchronous parallelism primitives for Spark. Asynchronous parallelism allows worker machines to send requests while waiting on a response from the server and can yield a tenfold increase in throughput.
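As a rough illustration of how that asynchronous parallelism is surfaced, SynapseML's cognitive transformers typically expose a concurrency setting; the key, region, and concurrency value below are placeholders rather than tuned recommendations:

```python
from synapse.ml.cognitive import TextSentiment

# Allow each Spark worker to keep several requests in flight at once;
# SynapseML parses throttling responses and retries with exponential backoff.
sentiment = (
    TextSentiment()
    .setSubscriptionKey("mykey")   # placeholder key
    .setLocation("eastus")         # placeholder region
    .setTextCol("text")
    .setOutputCol("sentiment")
    .setConcurrency(4)             # illustrative number of concurrent requests per worker
)
```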
### Broad ecosystem compatibility with ONNX
Bringing ONNX to Spark not only helps developers scale deep learning models, it
### Build responsible AI systems
-After building a model, itΓÇÖs imperative that researchers and engineers understand its limitations and behavior before deployment. SynapseML helps developers and researchers build responsible AI systems by introducing new tools that reveal why models make certain predictions and how to improve the training dataset to eliminate biases. SynapseML dramatically speeds the process of understanding a userΓÇÖs trained model by enabling developers to distribute computation across hundreds of machines. More specifically, SynapseML includes distributed implementations of Shapley Additive Explanations (SHAP) and Locally Interpretable Model-Agnostic Explanations (LIME) to explain the predictions of vision, text, and tabular models. It also includes tools such as Individual Conditional Expectation (ICE) and partial dependence analysis to recognized biased datasets.
+After building a model, it's imperative that researchers and engineers understand its limitations and behavior before deployment. SynapseML helps developers and researchers build responsible AI systems by introducing new tools that reveal why models make certain predictions and how to improve the training dataset to eliminate biases. SynapseML dramatically speeds the process of understanding a user's trained model by enabling developers to distribute computation across hundreds of machines. More specifically, SynapseML includes distributed implementations of Shapley Additive Explanations (SHAP) and Locally Interpretable Model-Agnostic Explanations (LIME) to explain the predictions of vision, text, and tabular models. It also includes tools such as Individual Conditional Expectation (ICE) and partial dependence analysis to recognize biased datasets.
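A condensed, self-contained sketch of the distributed SHAP explainer on a toy Spark ML model follows; the feature names, sample counts, and data are illustrative placeholders, and the exact `TabularSHAP` options should be checked against the SynapseML explainers reference:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast, rand
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler
from synapse.ml.explainers import TabularSHAP

spark = SparkSession.builder.getOrCreate()

# Toy training data with two numeric features and a binary label.
df = spark.createDataFrame(
    [(25.0, 40000.0, 0), (47.0, 82000.0, 1), (35.0, 60000.0, 0), (52.0, 95000.0, 1)],
    ["age", "income", "label"],
)

model = Pipeline(stages=[
    VectorAssembler(inputCols=["age", "income"], outputCol="features"),
    LogisticRegression(featuresCol="features", labelCol="label"),
]).fit(df)

shap = (
    TabularSHAP()
    .setInputCols(["age", "income"])
    .setOutputCol("shapValues")
    .setNumSamples(100)
    .setModel(model)
    .setTargetCol("probability")
    .setTargetClasses([1])
    .setBackgroundData(broadcast(df.orderBy(rand()).limit(4).cache()))
)

# SHAP values for a couple of rows to inspect; the work is distributed across the Spark cluster.
shap.transform(df.limit(2)).select("shapValues").show(truncate=False)
```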
## Enterprise support on Azure Synapse Analytics
SynapseML is generally available on Azure Synapse Analytics with enterprise supp
* To learn more about SynapseML, see the [blog post.](https://www.microsoft.com/en-us/research/blog/synapseml-a-simple-multilingual-and-massively-parallel-machine-learning-library/)
-* [Install SynapseML and get started with examples.](https://microsoft.github.io/SynapseML/docs/getting_started/installation/)
+* [Install SynapseML and get started with examples.](https://microsoft.github.io/SynapseML/docs/Get%20Started/Install%20SynapseML/)
* [SynapseML GitHub repository.](https://github.com/microsoft/SynapseML)
synapse-analytics Tutorial Cognitive Services Anomaly https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-cognitive-services-anomaly.md
You can now run all cells to perform anomaly detection. Select **Run All**. [Lea
- [Tutorial: Sentiment analysis with Azure AI services](tutorial-cognitive-services-sentiment.md) - [Tutorial: Machine learning model scoring in Azure Synapse dedicated SQL pools](tutorial-sql-pool-model-scoring-wizard.md) - [Tutorial: Use Multivariate Anomaly Detector in Azure Synapse Analytics](../../ai-services/Anomaly-Detector/tutorials/multivariate-anomaly-detection-synapse.md)-- [SynapseML anomaly detection](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#anomaly-detection) - [Machine Learning capabilities in Azure Synapse Analytics](what-is-machine-learning.md)
synapse-analytics Tutorial Cognitive Services Sentiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-cognitive-services-sentiment.md
The sentiments are returned as **positive**, **negative**, **neutral**, or **mix
## Next steps - [Tutorial: Anomaly detection with Azure AI services](tutorial-cognitive-services-anomaly.md) - [Tutorial: Machine learning model scoring in Azure Synapse dedicated SQL pools](tutorial-sql-pool-model-scoring-wizard.md)-- [SynapseML text sentiment analysis](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#textsentiment) - [Machine Learning capabilities in Azure Synapse Analytics](what-is-machine-learning.md)
synapse-analytics Tutorial Computer Vision Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-computer-vision-use-mmlspark.md
To ensure the Spark instance is shut down, end any connected sessions (notebooks)
* [Check out Synapse sample notebooks](https://github.com/Azure-Samples/Synapse/tree/main/MachineLearning) * [SynapseML GitHub Repo](https://github.com/Azure/mmlspark)
-* [SynapseML documentation](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/)
synapse-analytics Business Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/business-intelligence.md
To create your data warehouse solution, you can choose from different kinds of i
| :::image type="content" source="./media/business-intelligence/dundas_software_logo.png" alt-text="The logo of Dundas."::: |**Dundas BI**<br>Dundas Data Visualization is a leading, global provider of Business Intelligence and Data Visualization software. Dundas dashboards, reporting, and visual data analytics provide seamless integration into business applications, enabling better decisions and faster insights.|[Dundas](https://www.dundas.com/dundas-bi)<br> | | :::image type="content" source="./media/business-intelligence/cognos_analytics_logo.png" alt-text="The logo of IBM Cognos."::: |**IBM Cognos Analytics**<br>Cognos Analytics includes self-service capabilities that make it simple, clear, and easy to use, whether you're an experienced business analyst examining a vast supply chain, or a marketer optimizing a campaign. Cognos Analytics uses AI and other capabilities to guide data exploration. It makes it easier for users to get the answers they need|[IBM](https://www.ibm.com/products/cognos-analytics)<br>| | :::image type="content" source="./media/business-intelligence/informationbuilders_logo.png" alt-text="The logo of Information Builders."::: |**Information Builders (WebFOCUS)**<br>WebFOCUS business intelligence helps companies use data more strategically across and beyond the enterprise. It allows users and administrators to rapidly create dashboards that combine content from multiple data sources and formats. It also provides robust security and comprehensive governance that enables seamless and secure sharing of any BI and analytics content|[Information Builders](https://www.ibi.com/)<br> |
-| :::image type="content" source="./media/business-intelligence/logianalytics_logo.png" alt-text="The logo of LogiAnalytics."::: |**Logi Analytics**<br>Together, Logi Analytics enables your organization to collect, analyze, and immediately act on the largest and most diverse data sets in the world. |[Logi Analytics](https://www.logianalytics.com/)<br>|
-| :::image type="content" source="./media/business-intelligence/logianalytics_logo.png" alt-text="The logo of LogiAnalytics."::: |**Logi Report**<br>Logi Report is an embeddable BI solution for the enterprise. The solution offers capabilities such as report creation, dashboards, and data analysis on cloud, big data, and transactional data sources. By visualizing data, you can conduct your own reporting and data discovery for agile, on-the-fly decision making. |[Logi Report](https://www.logianalytics.com/jreport/)<br> |
+| :::image type="content" source="./media/business-intelligence/logianalytics_logo.png" alt-text="The logo of LogiAnalytics."::: |**Logi Analytics**<br>Together, Logi Analytics enables your organization to collect, analyze, and immediately act on the largest and most diverse data sets in the world. |[Logi Analytics](https://insightsoftware.com/logi-analytics/)<br>|
+| :::image type="content" source="./media/business-intelligence/logianalytics_logo.png" alt-text="The logo of LogiAnalytics."::: |**Logi Report**<br>Logi Report is an embeddable BI solution for the enterprise. The solution offers capabilities such as report creation, dashboards, and data analysis on cloud, big data, and transactional data sources. By visualizing data, you can conduct your own reporting and data discovery for agile, on-the-fly decision making. |[Logi Report](https://insightsoftware.com/logi-analytics/logi-report/)<br> |
| :::image type="content" source="./media/business-intelligence/looker_logo.png" alt-text="The logo of Looker."::: |**Looker for Business Intelligence**<br>Looker gives everyone in your company the ability to explore and understand the data that drives your business. Looker also gives the data analyst a flexible and reusable modeling layer to control and curate that data. Companies have fundamentally transformed their culture using Looker as the catalyst.|[Looker for BI](https://looker.com/)<br> [Looker Analytics Platform Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/aad.lookeranalyticsplatform)<br> | | :::image type="content" source="./media/business-intelligence/microstrategy_logo.png" alt-text="The logo of Microstrategy."::: |**MicroStrategy**<br>The MicroStrategy platform offers a complete set of business intelligence and analytics capabilities that enable organizations to get value from their business data. MicroStrategy's powerful analytical engine, comprehensive toolsets, variety of data connectors, and open architecture ensures you have everything you need to extend access to analytics across every team.|[MicroStrategy](https://www.microstrategy.com/en/business-intelligence)<br> [MicroStrategy Cloud in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microstrategy.microstrategy_cloud)<br> | | :::image type="content" source="./media/business-intelligence/mode-logo.png" alt-text="The logo of Mode Analytics."::: |**Mode**<br>Mode is a modern analytics and BI solution that helps teams make decisions through unreasonably fast and unexpectedly delightful data analysis. Data teams move faster through a preferred workflow that combines SQL, Python, R, and visual analysis, while stakeholders work alongside them exploring and sharing data on their own. With data more accessible to everyone, we shorten the distance from questions to answers and help businesses make better decisions, faster.|[Mode](https://mode.com/)<br> |
synapse-analytics Apache Spark 24 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-24-runtime.md
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 2.4. > [!IMPORTANT]
-> * End of life announced (EOLA) for Azure Synapse Runtime for Apache Spark 2.4 has been announced July 29, 2022.
-> * In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 2.4 will be retired as of September 29, 2023. End of life announced (EOLA) runtime will not have bug and feature fixes. Security fixes will be backported based on risk assessment.
-> * We recommend that you upgrade your Apache Spark 2.4 workloads to version 3.2 or 3.3 at your earliest convenience.
+> * End of life announced (EOLA) for Azure Synapse Runtime for Apache Spark 2.4 has been announced July 29, 2022.
+> * End of life announced (EOLA) runtime will not have bug and feature fixes. Security fixes will be backported based on risk assessment.
+> * In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 2.4 will be retired and disabled as of September 29, 2023.
+> * We recommend that you upgrade your Apache Spark 2.4 workloads to version 3.3 at your earliest convenience.
## Component versions | Component | Version |
synapse-analytics Apache Spark 3 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-3-runtime.md
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.1. > [!IMPORTANT]
-> * End of life announced (EOLA) for Azure Synapse Runtime for Apache Spark 3.1 has been announced January 26, 2023.
-> * In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 3.1 will be retired as of January 26, 2024. End of life announced (EOLA) runtime will not have bug and feature fixes. Security fixes will be backported based on risk assessment.
-> * We recommend that you upgrade your Apache Spark 3.1 workloads to version 3.2 or 3.3 at your earliest convenience.
+> * End of life announced (EOLA) for Azure Synapse Runtime for Apache Spark 3.1 has been announced January 26, 2023.
+> * End of life announced (EOLA) runtime will not have bug and feature fixes. Security fixes will be backported based on risk assessment.
+> * In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 3.1 will be retired and disabled as of January 26, 2024.
+> * We recommend that you upgrade your Apache Spark 3.1 workloads to version 3.3 at your earliest convenience.
## Component versions
synapse-analytics Apache Spark 32 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-32-runtime.md
-# Azure Synapse Runtime for Apache Spark 3.2
+# Azure Synapse Runtime for Apache Spark 3.2 (EOLA)
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.2.
+> [!IMPORTANT]
+> * End of life announced (EOLA) for Azure Synapse Runtime for Apache Spark 3.2 has been announced July 8, 2023.
+> * End of life announced (EOLA) runtime will not have bug and feature fixes. Security fixes will be backported based on risk assessment.
+> * In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 3.2 will be retired and disabled as of July 8, 2024.
+> * We recommend that you upgrade your Apache Spark 3.2 workloads to version 3.3 at your earliest convenience.
+ ## Component versions | Component | Version |
synapse-analytics Apache Spark Delta Lake Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-delta-lake-overview.md
For more information, see [Delta Lake Project](https://github.com/delta-io/delta
## Next steps
-* [.NET for Apache Spark documentation](/dotnet/spark)
+* [.NET for Apache Spark documentation](/previous-versions/dotnet/spark/what-is-apache-spark-dotnet)
* [Azure Synapse Analytics](../index.yml)
synapse-analytics Apache Spark Development Using Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-development-using-notebooks.md
Using the following keystroke shortcuts, you can more easily navigate and run co
- [Quickstart: Create an Apache Spark pool in Azure Synapse Analytics using web tools](../quickstart-apache-spark-notebook.md) - [What is Apache Spark in Azure Synapse Analytics](apache-spark-overview.md) - [Use .NET for Apache Spark with Azure Synapse Analytics](spark-dotnet.md)-- [.NET for Apache Spark documentation](/dotnet/spark)
+- [.NET for Apache Spark documentation](/previous-versions/dotnet/spark/what-is-apache-spark-dotnet)
- [Azure Synapse Analytics](../index.yml)
synapse-analytics Apache Spark History Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-history-server.md
Input/output data using Resilient Distributed Datasets (RDDs) doesn't show in th
## Next steps - [Azure Synapse Analytics](../overview-what-is.md)-- [.NET for Apache Spark documentation](/dotnet/spark)
+- [.NET for Apache Spark documentation](/previous-versions/dotnet/spark/what-is-apache-spark-dotnet)
synapse-analytics Apache Spark Machine Learning Mllib Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-machine-learning-mllib-notebook.md
After you finish running the application, shut down the notebook to release the
## Next steps -- [.NET for Apache Spark documentation](/dotnet/spark)
+- [.NET for Apache Spark documentation](/previous-versions/dotnet/spark/what-is-apache-spark-dotnet)
- [Azure Synapse Analytics](../index.yml) - [Apache Spark official documentation](https://spark.apache.org/docs/2.4.5/)
synapse-analytics Apache Spark Performance Hyperspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-performance-hyperspace.md
This document is also available in notebook form, for [Python](https://github.co
## Setup >[!Note]
-> Hyperspace is supported in Azure Synapse Runtime for Apache Spark 2.4 (EOLA), Azure Synapse Runtime for Apache Spark 3.1 (EOLA), and Azure Synapse Runtime for Apache Spark 3.2 (GA). However, it should be noted that Hyperspace is not supported in Azure Synapse Runtime for Apache Spark 3.3.
+> Hyperspace is supported in Azure Synapse Runtime for Apache Spark 2.4 (EOLA), Azure Synapse Runtime for Apache Spark 3.1 (EOLA), and Azure Synapse Runtime for Apache Spark 3.2 (EOLA). However, it should be noted that Hyperspace is not supported in Azure Synapse Runtime for Apache Spark 3.3 (GA).
To begin with, start a new Spark session. Since this document is a tutorial merely to illustrate what Hyperspace can offer, you'll make a configuration change that highlights what Hyperspace is doing on small datasets.
synapse-analytics Apache Spark Secure Credentials With Tokenlibrary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-secure-credentials-with-tokenlibrary.md
To connect to other linked services, you can make a direct call to the TokenLibr
%%spark // retrieve connectionstring from mssparkutils
-mssparkutils.getFullConnectionString("<LINKED SERVICE NAME>")
+mssparkutils.credentials.getFullConnectionString("<LINKED SERVICE NAME>")
``` ::: zone-end
mssparkutils.getFullConnectionString("<LINKED SERVICE NAME>")
%%pyspark # retrieve connectionstring from mssparkutils
-mssparkutils.getFullConnectionString("<LINKED SERVICE NAME>")
+mssparkutils.credentials.getFullConnectionString("<LINKED SERVICE NAME>")
``` ::: zone-end
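Beyond connection strings, the same `mssparkutils.credentials` module exposes token and secret helpers. A brief PySpark sketch follows; the audience key, vault, secret, and linked-service names are placeholders to check against the mssparkutils reference:

```python
from notebookutils import mssparkutils

# Get an access token for a resource audience (example audience key).
token = mssparkutils.credentials.getToken("Storage")

# Read a secret from Azure Key Vault through a Synapse linked service.
secret = mssparkutils.credentials.getSecret(
    "my-key-vault",               # placeholder Key Vault name
    "my-secret",                  # placeholder secret name
    "MyKeyVaultLinkedService",    # placeholder linked service name
)
```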
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
The following table lists the runtime name, Apache Spark version, and release da
| Runtime name | Release date | Release stage | End of life announcement date | End of life effective date | |-|-|-|-|-|
-| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | GA (as of Feb 23, 2023) | - | - |
-| [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | GA | July 8, 2023 | July 8, 2024 |
+| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | GA (as of Feb 23, 2023) | Nov 17, 2023 | Nov 17, 2024 |
+| [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | __End of Life Announced (EOLA)__ | July 8, 2023 | July 8, 2024 |
| [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | __End of Life Announced (EOLA)__ | January 26, 2023 | January 26, 2024 | | [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __End of Life Announced (EOLA)__ | __July 29, 2022__ | __September 29, 2023__ |
synapse-analytics Apache Spark What Is Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-what-is-delta-lake.md
For more information, see [Delta Lake Project](https://github.com/delta-io/delta
## Next steps -- [.NET for Apache Spark documentation](/dotnet/spark)
+- [.NET for Apache Spark documentation](/previous-versions/dotnet/spark/what-is-apache-spark-dotnet)
- [Azure Synapse Analytics](../index.yml)
synapse-analytics Spark Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/spark-dotnet.md
It provides .NET bindings for Spark, which allows you to access Spark APIs throu
You can analyze data with .NET for Apache Spark through Spark batch job definitions or with interactive Azure Synapse Analytics notebooks. In this article, you learn how to use .NET for Apache Spark with Azure Synapse using both techniques. >[!IMPORTANT]
-> The [.NET for Apache Spark](https://github.com/dotnet/spark) is an open-source project under the .NET Foundation that currently requires the .NET 3.1 library, which has reached the out-of-support status. We would like to inform users of Azure Synapse Spark of the removal of the .NET for Apache Spark library in the Azure Synapse Runtime for Apache Spark version 3.3. Users may refer to the [.NET Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) for more details on this matter.
+> The [.NET for Apache Spark](/previous-versions/dotnet/spark/what-is-apache-spark-dotnet) is an open-source project under the .NET Foundation that currently requires the .NET 3.1 library, which has reached the out-of-support status. We would like to inform users of Azure Synapse Spark of the removal of the .NET for Apache Spark library in the Azure Synapse Runtime for Apache Spark version 3.3. Users may refer to the [.NET Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) for more details on this matter.
> > As a result, it will no longer be possible for users to utilize Apache Spark APIs via C# and F#, or execute C# code in notebooks within Synapse or through Apache Spark Job definitions in Synapse. It is important to note that this change affects only Azure Synapse Runtime for Apache Spark 3.3 and above. >
Dotnet Spark 1.0.0 uses a different debug architecture than 1.1.1+. You will hav
## Next steps
-* [.NET for Apache Spark documentation](/dotnet/spark/)
+* [.NET for Apache Spark documentation](/previous-versions/dotnet/spark/what-is-apache-spark-dotnet)
* [.NET for Apache Spark Interactive guides](/dotnet/spark/how-to-guides/dotnet-interactive-udf-issue) * [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/) * [.NET Interactive](https://devblogs.microsoft.com/dotnet/creating-interactive-net-documentation/)
synapse-analytics Sql Data Warehouse Manage Compute Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-compute-rest-api.md
REST APIs for managing compute for dedicated SQL pool (formerly SQL DW) in Azure
## Scale compute
-To change the data warehouse units, use the [Create or Update Database](/rest/api/sql/databases/createorupdate) REST API. The following example sets the data warehouse units to DW1000 for the database MySQLDW, which is hosted on server MyServer. The server is in an Azure resource group named ResourceGroup1.
+To change the data warehouse units, use the [Create or Update Database](/rest/api/sql/2022-08-01-preview/databases/create-or-update) REST API. The following example sets the data warehouse units to DW1000 for the database `MySQLDW`, which is hosted on server MyServer. The server is in an Azure resource group named ResourceGroup1.
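Before the raw HTTP request shown next, here's a rough Python sketch of issuing the same call with `requests` and `azure-identity`; the subscription ID, location, and payload shape are assumptions to verify against the Create or Update Database reference:

```python
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"   # placeholder
resource_group = "ResourceGroup1"
server = "MyServer"
database = "MySQLDW"

# Acquire an ARM token with whatever credential is available locally.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Sql"
    f"/servers/{server}/databases/{database}?api-version=2020-08-01-preview"
)

# Assumed payload: the data warehouse units are set through the sku name.
body = {"location": "West US 2", "sku": {"name": "DW1000", "tier": "DataWarehouse"}}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.status_code)
```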
``` PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Sql/servers/{server-name}/databases/{database-name}?api-version=2020-08-01-preview HTTP/1.1
synapse-analytics What Is A Data Warehouse Unit Dwu Cdwu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/what-is-a-data-warehouse-unit-dwu-cdwu.md
MODIFY (SERVICE_OBJECTIVE = 'DW1000c')
### REST APIs
-To change the DWUs, use the [Create or Update Database](/rest/api/sql/databases/createorupdate) REST API. The following example sets the service level objective to DW1000c for the database MySQLDW, which is hosted on server MyServer. The server is in an Azure resource group named ResourceGroup1.
+To change the DWUs, use the [Create or Update Database](/rest/api/sql/2022-08-01-preview/databases/create-or-update) REST API. The following example sets the service level objective to DW1000c for the database `MySQLDW`, which is hosted on server MyServer. The server is in an Azure resource group named ResourceGroup1.
``` PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Sql/servers/{server-name}/databases/{database-name}?api-version=2014-04-01-preview HTTP/1.1
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
description: Learn about the new features and documentation improvements for Azu
Previously updated : 07/21/2023 Last updated : 08/01/2023
update-center Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/deploy-updates.md
Title: Deploy updates and track results in update management center (preview). description: The article details how to use update management center (preview) in the Azure portal to deploy updates and view results for supported machines. Previously updated : 05/31/2023 Last updated : 08/08/2023
After your scheduled deployment starts, you can see it's status on the **History
:::image type="content" source="./media/deploy-updates/updates-history-inline.png" alt-text="Screenshot showing updates history." lightbox="./media/deploy-updates/updates-history-expanded.png":::
-A list of the deployments created are show in the update deployment grid and include relevant information about the deployment. Every update deployment has a unique GUID, represented as **Operation ID**, which is listed along with **Status**, **Updates Installed** and **Time** details. You can filter the results listed in the grid.
+> [!NOTE]
+> The **Windows update history** currently doesn't show the updates summary that are installed from Azure Update Management. To view a summary of the updates applied on your machines, go to **Update management center (preview)** > **Manage** > **History**.
+
+A list of the deployments created is shown in the update deployment grid and includes relevant information about each deployment. Every update deployment has a unique GUID, represented as **Operation ID**, which is listed along with **Status**, **Updates Installed** and **Time** details. You can filter the results listed in the grid.
Select any one of the update deployments from the list to open the **Update deployment run** page. Here, it shows a detailed breakdown of the updates and the installation results for the Azure VM or Arc-enabled server.
virtual-desktop Tag Virtual Desktop Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/tag-virtual-desktop-resources.md
Like with the [general suggestions](#suggested-tags-for-azure-virtual-desktop),
### Use the cm-resource-parent tag to automatically group costs by host pool
-You can group costs by host pool by using the cm-resource-parent tag. This tag won't impact billing but will let you review tagged costs in Microsoft Cost Management without having to use filters. The key for this tag is **cm-resource-parent** and its value is the resource ID of the Azure resource you want to group costs by. For example, you can group costs by host pool by entering the host pool resource ID as the value. To learn more about how to use this tag, see [Group related resources in the cost analysis (preview)](../cost-management-billing/costs/enable-preview-features-cost-management-labs.md#group-related-resources-in-the-cost-analysis-preview).
+You can group costs by host pool by using the cm-resource-parent tag. This tag won't impact billing but will let you review tagged costs in Microsoft Cost Management without having to use filters. The key for this tag is **cm-resource-parent** and its value is the resource ID of the Azure resource you want to group costs by. For example, you can group costs by host pool by entering the host pool resource ID as the value. To learn more about how to use this tag, see [Group related resources in the cost analysis (preview)](../cost-management-billing/costs/group-filter.md#group-related-resources-in-the-resources-view).
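As a sketch of setting the tag programmatically, the Resource Manager tags API can merge a `cm-resource-parent` value onto a session host VM; the resource IDs below are placeholders, and the API version should be confirmed against the Tags REST reference:

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder resource IDs: the session host VM to tag and the host pool to group costs under.
vm_id = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm>"
host_pool_id = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.DesktopVirtualization/hostPools/<hostpool>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# PATCH with the Merge operation keeps any existing tags on the resource.
url = f"https://management.azure.com{vm_id}/providers/Microsoft.Resources/tags/default?api-version=2021-04-01"
body = {"operation": "Merge", "properties": {"tags": {"cm-resource-parent": host_pool_id}}}

response = requests.patch(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
```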
## Suggested tags for other Azure Virtual Desktop resources
virtual-desktop Tutorial Create Connect Personal Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/tutorial-create-connect-personal-desktop.md
description: This tutorial shows you how to deploy Azure Virtual Desktop with a
Previously updated : 08/03/2023 Last updated : 08/14/2023 # Tutorial: Create and connect to a Windows 11 desktop with Azure Virtual Desktop
You'll need:
- An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -- The account must be assigned the *Owner* or *Contributor* built-in [role-based access control (RBAC) roles](../role-based-access-control/role-assignments-portal.md) on the subscription.
+- The account must be assigned the *Owner* or *Contributor* built-in role-based access control (RBAC) role on the subscription, or on a resource group. For more information, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
- A [virtual network](../virtual-network/quick-create-portal.md) in the same Azure region you want to deploy your session hosts to.
To create a personal host pool, workspace, application group, and session host V
Once you've completed this tab, select **Next: Networking**.
-1. On the **Networking** tab, select **Enable public access from all networks**, where end users can access the feed and session hosts securely over the public internet or the private endpoints. Once you've completed this tab, select **Next: Virtual Machines**.
+1. On the **Networking** tab, select **Enable public access from all networks**, where end users can access the feed and session hosts securely over the public internet. Once you've completed this tab, select **Next: Virtual Machines**.
1. On the **Virtual machines** tab, complete the following information:
To create a personal host pool, workspace, application group, and session host V
| Name prefix | Enter a name for your session hosts, for example **aad-hp01-sh**.<br /><br />This will be used as the prefix for your session host VMs. Each session host has a suffix of a hyphen and then a sequential number added to the end, for example **aad-hp01-sh-0**.<br /><br />This name prefix can be a maximum of 11 characters and is used in the computer name in the operating system. The prefix and the suffix combined can be a maximum of 15 characters. Session host names must be unique. | | Virtual machine location | Select the Azure region where your session host VMs will be deployed. This must be the same region that your virtual network is in. | | Availability options | Select **No infrastructure dependency required**. This means that your session host VMs won't be deployed in an availability set or in availability zones. |
- | Security type | Select **Standard**. |
+ | Security type | Select **Trusted launch virtual machines**. Leave the subsequent defaults of **Enable secure boot** and **Enable vTPM** checked, and **Integrity monitoring** unchecked. For more information, see [Trusted launch](security-guide.md#trusted-launch). |
| Image | Select **Windows 11 Enterprise, version 22H2**. | | Virtual machine size | Accept the default SKU. If you want to use a different SKU, select **Change size**, then select from the list. | | Number of VMs | Enter **1** as a minimum. You can deploy up to 400 session host VMs at this point if you wish, or you can add more later.<br /><br />With a personal host pool, each session host can only be assigned to one user, so you'll need one session host for each user connecting to this host pool. Once you've completed this tutorial, you can create a pooled host pool, where multiple users can connect to the same session host. | | OS disk type | Select **Premium SSD** for best performance. | | Boot Diagnostics | Select **Enable with managed storage account (recommended)**. | | **Network and security** | |
- | Virtual network | Select your virtual network. |
+ | Virtual network | Select your virtual network and subnet to connect session hosts to. |
| Network security group | Select **Basic**. |
- | Public inbound ports | Select **No**. |
+ | Public inbound ports | Select **No** as you don't need to open inbound ports to connect to Azure Virtual Desktop. Learn more at [Understanding Azure Virtual Desktop network connectivity](network-connectivity.md). |
| **Domain to join** | | | Select which directory you would like to join | Select **Azure Active Directory**. | | Enroll VM with Intune | Select **No.** |
To create a personal host pool, workspace, application group, and session host V
| Password | Enter a password for the local administrator account. | | Confirm password | Re-enter the password. | | **Custom configuration** | |
- | ARM template file URL | Leave this blank. |
- | ARM template parameter file URL | Leave this blank. |
+ | Custom configuration script URL | Leave this blank. |
Once you've completed this tab, select **Next: Workspace**.
To create a personal host pool, workspace, application group, and session host V
| Register desktop app group | Select **Yes**. This registers the default desktop application group to the selected workspace. | | To this workspace | Select **Create new** and enter a name, for example **aad-ws01**. |
- Once you've completed this tab, select **Next: Review + create**.
+ Once you've completed this tab, select **Next: Review + create**. You don't need to complete the other tabs.
1. On the **Review + create** tab, ensure validation passes and review the information that will be used during deployment. If validation doesn't pass, review the error message and check what you entered in each tab.
virtual-desktop Tenant Setup Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/tenant-setup-azure-active-directory.md
>This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects. >[!IMPORTANT]
->Starting July 28, 2022, you'll no longer be able to create new tenants in Azure Virtual Desktop (classic). You can still manage your existing Azure Virtual Desktop (classic) environments including adding new session hosts, but all new environments must be done in Azure Virtual Desktop.
->
>You can find more information about how to migrate from Azure Virtual Desktop (classic) to Azure Virtual Desktop at [Migrate automatically from Azure Virtual Desktop (classic)](../automatic-migration.md). >
->Learn about how to create a host pool in Azure Virtual Desktop at [Tutorial: Create a host pool](../create-host-pools-azure-marketplace.md).
+>Try Azure Virtual Desktop by following our [Tutorial: Create and connect to a Windows 11 desktop with Azure Virtual Desktop](../tutorial-create-connect-personal-desktop.md).
Creating a tenant in Azure Virtual Desktop is the first step toward building your desktop virtualization solution. A tenant is a group of one or more host pools. Each host pool consists of multiple session hosts, running as virtual machines in Azure and registered to the Azure Virtual Desktop service. Each host pool also consists of one or more application groups that are used to publish desktop and application resources to users. With a tenant, you can build host pools, create application groups, assign users, and make connections through the service.
virtual-machines Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/configure.md
For SR-IOV enabled [RDMA capable VMs](sizes-hpc.md#rdma-capable-instances), [Ubu
- Additionally, details on what's included in the [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) and CentOS-HPC version 7.6 and later VM images, and how to deploy them are in a [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/azure-hpc-vm-images/ba-p/977094). > [!NOTE]
-> Among the CentOS-HPC VM images, currently only the version 7.9 VM image additionally comes preconfigured with the NVIDIA GPU drivers and GPU compute software stack (CUDA, NCCL). CentOS 7 is currently the only supported CentOS version, which will continue to receive community security patches and bug fix updates until June 2024. Therefore, we are not releasing any new CentOS HPC images to Azure marketplace. You can still use our CentOS HPC version 7.9 images, but it is suggested to consider moving to our AlmaLinux HPC images alternatives in Azure marketplace, as it has the same set of drivers installed as Ubuntu/CentOS.
+> Among the CentOS-HPC VM images, currently only the version 7.9 VM image additionally comes preconfigured with the NVIDIA GPU drivers and GPU compute software stack (CUDA, NCCL). CentOS 7 is currently the only supported CentOS version, which will continue to receive community security patches and bug fix updates until June 2024. Therefore, we are not releasing any new CentOS HPC images to Azure marketplace. You can still use our CentOS HPC version 7.9 images, but it is suggested to consider moving to our AlmaLinux HPC images alternatives in Azure marketplace, which have the same set of drivers installed as Ubuntu/CentOS.
> [!NOTE] > SR-IOV enabled N-series VM sizes with FDR InfiniBand (e.g. NCv3 and older) will be able to use the following CentOS-HPC VM image or older versions from the Marketplace:
virtual-machines Disks Convert Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-convert-types.md
Start-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name
++ ```azurecli #resource group that contains the managed disk
Follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select the VM from the list of **Virtual machines**. 1. If the VM isn't stopped, select **Stop** at the top of the VM **Overview** pane, and wait for the VM to stop.
-1. In the pane for the VM, select **Disks** under **Settings**.
+1. In the pane for the VM, select **Disks** from the menu.
1. Select the disk that you want to convert.
-1. Select **Size + performance** under **Settings**.
-1. Change the **Storage type** from the original disk type to the desired disk type.
+1. Select **Size + performance** from the menu.
+1. Change the **Account type** from the original disk type to the desired disk type.
1. Select **Save**, and close the disk pane. The disk type conversion is instantaneous. You can start your VM after the conversion.
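The same conversion can be scripted. Here's a minimal sketch with the `azure-mgmt-compute` SDK, assuming the VM is already stopped and deallocated; the names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"   # placeholder
resource_group = "<resource-group>"     # placeholder
disk_name = "<disk-name>"               # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# Change the disk's storage type; the VM must be deallocated (or the disk unattached).
poller = client.disks.begin_update(
    resource_group,
    disk_name,
    {"sku": {"name": "Premium_LRS"}},   # for example: Standard_LRS, StandardSSD_LRS, Premium_LRS
)
print(poller.result().sku.name)
```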
The disk type conversion is instantaneous. You can start your VM after the conve
## Migrate to Premium SSD v2 or Ultra Disk
-Currently, you can only migrate an existing disk to either an Ultra Disk or a Premium SSD v2 through snapshots. Both Premium SSD v2 disks and Ultra Disks have their own set of restrictions. For example, neither can be used as an OS disk, and also aren't available in all regions. See the [Premium SSD v2 limitations](disks-deploy-premium-v2.md#limitations) and [Ultra Disk GA scope and limitations](disks-enable-ultra-ssd.md#ga-scope-and-limitations) sections of their articles for more information.
+Currently, you can only migrate an existing disk to either an Ultra Disk or a Premium SSD v2 through snapshots stored on Standard Storage. Migration with snapshots stored on Premium storage is not supported.
+
+Both Premium SSD v2 disks and Ultra Disks have their own set of restrictions. For example, neither can be used as an OS disk, and also aren't available in all regions. See the [Premium SSD v2 limitations](disks-deploy-premium-v2.md#limitations) and [Ultra Disk GA scope and limitations](disks-enable-ultra-ssd.md#ga-scope-and-limitations) sections of their articles for more information.
> [!IMPORTANT] > When migrating a Standard HDD, Standard SSD, or Premium SSD to either an Ultra Disk or Premium SSD v2, the logical sector size must be 512.
The following steps assume you already have a snapshot. To learn how to create o
Make a read-only copy of a VM by using a [snapshot](snapshot-copy-managed-disk.md). +
virtual-machines Disks Enable Customer Managed Keys Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-customer-managed-keys-portal.md
Title: Azure portal - Enable customer-managed keys with SSE - managed disks
description: Enable customer-managed keys on your managed disks through the Azure portal. Previously updated : 08/02/2023 Last updated : 02/22/2023
The VM deployment process is similar to the standard deployment process, the onl
:::image type="content" source="media/virtual-machines-disk-encryption-portal/server-side-encryption-stop-vm-to-encrypt-disk-fix.png" alt-text="Screenshot of the main overlay for your example VM, with the Stop button highlighted." lightbox="media/virtual-machines-disk-encryption-portal/server-side-encryption-stop-vm-to-encrypt-disk-fix.png":::
-1. After the VM has finished stopping, select **Disks** under **Settings**, and then select the disk you want to encrypt.
+1. After the VM has finished stopping, select **Disks**, and then select the disk you want to encrypt.
:::image type="content" source="media/virtual-machines-disk-encryption-portal/server-side-encryption-existing-disk-select.png" alt-text="Screenshot of your example VM, with the Disks pane open, the OS disk is highlighted, as an example disk for you to select." lightbox="media/virtual-machines-disk-encryption-portal/server-side-encryption-existing-disk-select.png":::
-1. Select **Encryption** under **Settings**.
-1. Under **Key management** select your key vault and key in the drop-down list, under **Customer-managed key**.
+1. Select **Encryption** and under **Key management** select your key vault and key in the drop-down list, under **Customer-managed key**.
1. Select **Save**. :::image type="content" source="media/virtual-machines-disk-encryption-portal/server-side-encryption-encrypt-existing-disk-customer-managed-key.png" alt-text="Screenshot of your example OS disk, the encryption pane is open, encryption at rest with a customer-managed key is selected, as well as your example Azure Key Vault." lightbox="media/virtual-machines-disk-encryption-portal/server-side-encryption-encrypt-existing-disk-customer-managed-key.png":::
virtual-machines Disks Enable Double Encryption At Rest Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-double-encryption-at-rest-portal.md
Title: Enable double encryption at rest - Azure portal - managed disks
description: Enable double encryption at rest for your managed disk data using the Azure portal. Previously updated : 08/02/2023 Last updated : 02/06/2023
Double encryption at rest isn't currently supported with either Ultra Disks or P
:::image type="content" source="media/virtual-machines-disks-double-encryption-at-rest-portal/disk-encryption-notification-success.png" alt-text="Screenshot of successful permission and role assignment for your key vault." lightbox="media/virtual-machines-disks-double-encryption-at-rest-portal/disk-encryption-notification-success.png"::: 1. Navigate to your disk.
-1. Select **Encryption** under **Settings**.
+1. Select **Encryption**.
1. For **Key management**, select one of the keys under **Platform-managed and customer-managed keys**. 1. select **Save**.
virtual-machines Disks Enable Host Based Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-host-based-encryption-portal.md
description: Use encryption at host to enable end-to-end encryption on your Azur
Previously updated : 08/02/2023 Last updated : 03/28/2023
You've now deployed a VM with encryption at host enabled using customer-managed
Deallocate your VM first, encryption at host can't be disabled unless your VM is deallocated.
-1. On your VM, select **Disks** under **Settings**, and then select **Additional settings**.
+1. On your VM, select **Disks** and then select **Additional settings**.
:::image type="content" source="media/virtual-machines-disks-encryption-at-host-portal/disks-encryption-host-based-encryption-additional-settings.png" alt-text="Screenshot of the Disks pane on a VM, Additional Settings is highlighted.":::
virtual-machines Disks Enable Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-performance.md
description: Increase the performance of Azure Premium SSDs and Standard SSD/HDD
Previously updated : 08/01/2023 Last updated : 03/14/2023
virtual-machines Disks Enable Private Links For Import Export Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-private-links-for-import-export-portal.md
description: Enable Private Link for your managed disks with Azure portal. This
Previously updated : 08/02/2023 Last updated : 03/31/2023
virtual-machines Disks Incremental Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-incremental-snapshots.md
description: Learn about incremental snapshots for managed disks, including how
Previously updated : 06/07/2023 Last updated : 08/11/2023 ms.devlang: azurecli
virtual-machines Disks Performance Tiers Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-performance-tiers-portal.md
description: Learn how to change performance tiers for new and existing managed
Previously updated : 08/01/2023 Last updated : 08/30/2022
A disk's performance tier can be changed without downtime, so you don't have to
1. Navigate to the VM containing the disk you'd like to change. 1. Select your disk
-1. Select **Size + Performance** under **Settings**.
+1. Select **Size + Performance**.
1. In the **Performance tier** dropdown, select a tier other than the disk's current performance tier. 1. Select **Resize**.
virtual-machines Disks Performance Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-performance-tiers.md
description: Learn how to change performance tiers for existing managed disks us
Previously updated : 08/01/2023 Last updated : 05/23/2023
virtual-machines Image Builder Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-overview.md
Title: Azure VM Image Builder overview
description: In this article, you learn about VM Image Builder for virtual machines in Azure. Previously updated : 05/30/2023 Last updated : 07/31/2023
VM Image Builder supports the following Azure Marketplace base operating system
> [!NOTE] > You can now use the Azure Image Builder service inside the portal as of March 2023. [Get started](https://ms.portal.azure.com/#create/Microsoft.ImageTemplate) with building and validating custom images inside the portal.
+## Confidential VM and Trusted Launch Support
+
+VM Image Builder has extended support for TrustedLaunchSupported and ConfidentialVMSupported images, with certain constraints. The following table lists the constraints:
+
+| SecurityType | Support status |
+|--|-|
+| TrustedLaunchSupported | Support as a source image for image builds |
+| ConfidentialVMSupported | Support as a source image for image builds |
+| TrustedLaunch | Not supported as a source image |
+| ConfidentialVM | Not supported as a source image |
+
+> [!NOTE]
+> When using TrustedLaunchSupported images, it's important that the source and distribute must both be TrustedLaunchSupported for it to be supported. If the source is normal and the distribute is TrustedLaunchSupported, or if the source is TrustedLaunchSupported and the distribute is normal Gen2, it's not supported.
+ ## How it works VM Image Builder is a fully managed Azure service that's accessible to Azure resource providers. Resource providers configure it by specifying a source image, a customization to perform, and where the new image is to be distributed. A high-level workflow is illustrated in the following diagram:
virtual-machines Instance Metadata Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/instance-metadata-service.md
Data | Description | Version introduced |
| `diskSizeGB` | Size of the disk in GB | 2019-06-01 | `encryptionSettings` | Encryption settings for the disk | 2019-06-01 | `image` | Source user image virtual hard disk | 2019-06-01
-| `isSharedDisk`ΓÇáΓÇá | Identifies if the disk is shared between resources | 2021-05-01
+| `isSharedDisk`* | Identifies if the disk is shared between resources | 2021-05-01
| `isUltraDisk` | Identifies if the data disk is an Ultra Disk | 2021-05-01 | `lun` | Logical unit number of the disk | 2019-06-01 | `managedDisk` | Managed disk parameters | 2019-06-01
Data | Description | Version introduced |
| `vhd` | Virtual hard disk | 2019-06-01 | `writeAcceleratorEnabled` | Whether or not writeAccelerator is enabled on the disk | 2019-06-01
-ΓÇáΓÇá These fields are only populated for Ultra Disks; they are empty strings from non-Ultra Disks.
+*These fields are only populated for Ultra Disks; they are empty strings for non-Ultra Disks.
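A small sketch of reading these disk fields from inside a VM through the Instance Metadata Service; error handling is omitted, and the API version matches the one that introduced `isSharedDisk`:

```python
import json
import urllib.request

# IMDS is only reachable from inside the VM and requires the Metadata header.
url = "http://169.254.169.254/metadata/instance/compute/storageProfile?api-version=2021-05-01"
request = urllib.request.Request(url, headers={"Metadata": "true"})

with urllib.request.urlopen(request) as response:
    storage_profile = json.load(response)

for disk in storage_profile.get("dataDisks", []):
    print(disk.get("name"), disk.get("isSharedDisk"), disk.get("isUltraDisk"))
```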
The encryption settings blob contains data about how the disk is encrypted (if it's encrypted):
virtual-machines Add Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/add-disk.md
The following example uses `parted` on `/dev/sdc`, which is where the first data
```bash sudo parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%
-sudo partprobe /dev/sdc1
+sudo partprobe /dev/sdc
sudo mkfs.xfs /dev/sdc1 ```
virtual-machines Attach Disk Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/attach-disk-portal.md
Previously updated : 07/28/2023 Last updated : 08/09/2023
virtual-machines Detach Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/detach-disk.md
Previously updated : 01/09/2023 Last updated : 08/09/2023
In select regions, the disk detach latency has been reduced, so you'll see an im
## Detach a data disk using the portal 1. In the left menu, select **Virtual Machines**.
-1. In the virtual machine blade, select **Disks** under **Settings**.
-1. In the **Disks** blade, to the far right of the data disk that you would like to detach, select the detach option, and detach the disk.
-1. After the disk has been removed, select **Save**.
+1. In the virtual machine blade, select **Disks**.
+1. In the **Disks** blade, to the far right of the data disk that you would like to detach, select the detach button to detach the disk.
+1. After the disk has been removed, select **Save** at the top of the blade.
-The disk stays in storage but is no longer attached to a virtual machine. The disk isn't deleted.
+The disk stays in storage but is no longer attached to a virtual machine. The disk is not deleted.
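If you prefer not to use the portal, a roughly equivalent detach with the Azure CLI might look like the following sketch; the resource group, VM, and disk names are placeholders.

```bash
# Detach a data disk from the VM. The underlying managed disk isn't deleted.
az vm disk detach \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --name myDataDisk
```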
## Next steps If you want to reuse the data disk, you can just [attach it to another VM](add-disk.md).
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
The `optimize` property can be enabled while creating a VM image and allows VM o
```json
"optimize": {
    "vmboot": {
        "state": "Enabled"
    }
}
```
+- **vmboot**: A configuration related to the booting process of the virtual machine (VM), used to control optimizations that can improve boot time or other performance aspects.
+- **state**: The state of the boot optimization feature within `vmboot`. The value `Enabled` indicates that the feature is turned on to improve image creation time.
## Properties: source
virtual-machines Image Builder Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-networking.md
Title: Azure VM Image Builder networking options
description: Understand the networking options available to you when you deploy the Azure VM Image Builder service. - Previously updated : 08/10/2020+ Last updated : 07/25/2023
If you use an existing virtual network, VM Image Builder deploys an additional V
> The virtual network must be in the same region as the VM Image Builder service region. >
+> [!IMPORTANT]
+> The Azure VM Image Builder service modifies the WinRM connection configuration on all Windows builds to use HTTPS on port 5986 instead of the default HTTP port 5985. This configuration change can impact workflows that rely on WinRM communication.
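If a customization step or downstream workflow connects to the build VM over WinRM, make sure HTTPS on port 5986 is allowed. As an illustration only (the resource names and priority are placeholders), an inbound rule on the build subnet's network security group could be added like this:

```bash
# Allow WinRM over HTTPS (port 5986) to the build VM's subnet.
az network nsg rule create \
  --resource-group myImageBuilderRG \
  --nsg-name myBuildSubnetNsg \
  --name AllowWinRmHttps \
  --priority 400 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 5986
```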
+ ### Why deploy a proxy VM? When a VM without a public IP is behind an internal load balancer, it doesn't have internet access. The load balancer used for the virtual network is internal. The proxy VM allows internet access for the build VM during builds. You can use the associated network security groups to restrict the build VM access.
virtual-machines Managed Disks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/managed-disks-overview.md
The following diagram depicts real-time allocation of bandwidth and IOPS for dis
The first IO path is the uncached managed disk path. This path is taken if you are using a managed disk and set the host caching to none. An IO using this path will execute based on disk-level provisioning and then VM network-level provisioning for IOPs and throughput.
-The second IO Path is the cached managed disk path. Cached managed disk IO uses an SSD close to the VM, which has its own own IOPs and throughput provisioned, and is labeled SSD-level provisioning in the diagram. When a cached managed disk initiates a read, the request first checks to see if the data is in the server SSD. If the data isn't present, this created a cached miss and the IO then executes based on SSD-level provisioning, disk-level provisioning and then VM network-level provisioning for IOPs and throughput. When the server SSD initiates reads on cached IO that are present on the server SSD, it creates a cache hit and the IO will then execute based on the SSD-level provisioning. Writes initiated by a cached managed disk always follow the path of a cached-miss, and need to go through SSD-level, disk-level, and VM network-level provisioning.
+The second IO path is the cached managed disk path. Cached managed disk IO uses an SSD close to the VM, which has its own IOPs and throughput provisioned, and is labeled SSD-level provisioning in the diagram. When a cached managed disk initiates a read, the request first checks to see if the data is in the server SSD. If the data isn't present, this creates a cache miss and the IO then executes based on SSD-level provisioning, disk-level provisioning and then VM network-level provisioning for IOPs and throughput. When the server SSD initiates reads on cached IO that are present on the server SSD, it creates a cache hit and the IO will then execute based on the SSD-level provisioning. Writes initiated by a cached managed disk always follow the path of a cache miss and need to go through SSD-level, disk-level, and VM network-level provisioning.
Finally, the third path is for the local/temp disk. This is available only on VMs that support local/temp disks. An IO using this path will execute based on SSD-Level Provisioning for IOPs and throughput.
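Which of the first two paths an IO takes depends on the host caching setting of each disk. As a quick way to see those settings, a sketch like the following (VM and resource group names are placeholders) lists the cache mode per disk:

```bash
# Show the host caching mode (None, ReadOnly, or ReadWrite) for the OS disk
# and each data disk; None means IO takes the uncached path.
az vm show \
  --resource-group myResourceGroup \
  --name myVM \
  --query "{osDisk: storageProfile.osDisk.caching, dataDisks: storageProfile.dataDisks[].{lun: lun, caching: caching}}"
```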
virtual-machines Troubleshooting Shared Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/troubleshooting-shared-images.md
If you have problems performing any operations on Azure Compute Gallery (formerl
> | Duplicate regions are not allowed in target publishing regions | A region is listed among the publishing regions more than once | Remove the duplicate region | > | Adding new Data Disks or changing the LUN of a Data Disk in an existing Image is not allowed | An update call to the image version either contains a new data disk or has a new LUN for a disk | Use the LUNs and data disks of the existing image version | > | The disk encryption set \<diskEncryptionSetID\> must be in the same subscription \<subscriptionID\> as the gallery resource | The Azure Compute Gallery does not currently support using a disk encryption set in a different subscription | Create the image version and disk encryption set in the same subscription |
-> | Replication failed in this region due to 'The GalleryImageVersion source resource size 2048 exceeds the max size 1024 supported | A data disk in the source is greater than 1TB | Resize the data disk to under 1 TB |
+> | Replication failed in this region due to 'The GalleryImageVersion source resource size 2520 exceeds the max size 2048 supported | A data disk in the source is greater than 2TB | Resize the data disk to under 2 TB |
> | Operation 'Update Gallery Image Version' is not on allowed on \<versionNumber>; since it is marked for deletion. You can only retry the Delete operation (or wait for an ongoing one to complete) | You attempted to update a gallery image version that is in the process of being deleted | Wait for the deletion event to complete and recreate the image version again | > | Encryption is not supported for source resource '\<sourceID>'. Please use a different source resource type which supports encryption or remove encryption properties | Currently the Azure Compute Gallery only supports encryption for VMs, disks, snapshots and managed images. One of the sources provided for the image version is not in the previous list of sources that support encryption | Remove the disk encryption set from the image version and contact the support team.
virtual-machines Attach Managed Disk Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/attach-managed-disk-portal.md
Previously updated : 07/28/2023 Last updated : 02/06/2020
This article shows you how to attach a new managed data disk to a Windows virtua
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for and select **Virtual machines**. 1. Select a virtual machine from the list.
-1. On the **Virtual machine** pane, select **Disks** under **Settings**.
+1. On the **Virtual machine** pane, select **Disks**.
1. On the **Disks** pane, select **Create and attach a new disk**. 1. In the drop-downs for the new disk, make the selections you want, and name the disk. 1. Select **Save** to create and attach the new data disk to the VM.
virtual-machines Detach Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/detach-disk.md
Previously updated : 07/28/2023 Last updated : 08/09/2023 # How to detach a data disk from a Windows virtual machine
You can *hot* remove a data disk, but make sure nothing is actively using the di
1. In the left menu, select **Virtual Machines**. 1. Select the virtual machine that has the data disk you want to detach. 1. Under **Settings**, select **Disks**.
-1. In the **Disks** pane, to the far right of the data disk that you would like to detach, select the detach option, and detach the disk.
-1. Select **Save** to save your changes.
+1. In the **Disks** pane, to the far right of the data disk that you would like to detach, select the detach button to detach the disk.
+1. Select **Save** at the top of the page to save your changes.
The disk stays in storage but is no longer attached to a virtual machine. The disk isn't deleted.
virtual-machines Quick Cluster Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-cluster-create-terraform.md
This article shows you how to create a Windows VM cluster (containing three Wind
## Implement the Terraform code > [!NOTE]
-> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/UserStory89540/quickstart/101-vm-cluster-windows). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/UserStory89540/quickstart/101-vm-cluster-windows/TestRecord.md).
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-vm-cluster-windows). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-vm-cluster-windows/TestRecord.md).
> > See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
virtual-network Tutorial Restrict Network Access To Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources.md
Title: 'Tutorial: Restrict access to PaaS resources with service endpoints - Azure portal' description: In this tutorial, you learn how to limit and restrict network access to Azure resources, such as an Azure Storage, with virtual network service endpoints using the Azure portal. -
-tags: azure-resource-manager
- - Previously updated : 06/29/2022 Last updated : 08/08/2023 # Customer intent: I want only resources in a virtual network subnet to access an Azure PaaS resource, such as an Azure Storage account.
In this tutorial, you learn how to:
This tutorial uses the Azure portal. You can also complete it using the [Azure CLI](tutorial-restrict-network-access-to-resources-cli.md) or [PowerShell](tutorial-restrict-network-access-to-resources-powershell.md).
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- ## Prerequisites -- An Azure subscription
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
## Sign in to Azure Sign in to the [Azure portal](https://portal.azure.com).
-## Create a virtual network
-
-1. From the Azure portal menu, select **+ Create a resource**.
-
-1. Search for *Virtual Network*, and then select **Create**.
-
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/create-resources.png" alt-text="Screenshot of search for virtual network in create a resource page.":::
-
-1. On the **Basics** tab, enter the following information and then select **Next: IP Addresses >**.
-
- | Setting | Value |
- |-|-|
- | Subscription | Select your subscription. |
- | Resource group | Select **Create new** and enter *myResourceGroup*.|
- | Name | Enter *myVirtualNetwork*. |
- | Region | Select **East US** |
-
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/create-virtual-network.png" alt-text="Screenshot of basics tab for create a virtual network.":::
-
-1. On the **IP Addresses** tab, select the following IP address settings and then select **Review + create**.
-
- | Setting | Value |
- | | |
- | IPv4 address space| Leave as default. |
- | Subnet name | Select **default** and change the subnet name to "Public". |
- | Subnet Address Range | Leave as default. |
-
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/create-virtual-network-ip-addresses.png" alt-text="Screenshot of IP addresses tab for create a virtual network.":::
-
-1. If the validation checks pass, select **Create**.
-
-1. Wait for the deployment to finish, then select **Go to resource** or move on to the next section.
## Enable a service endpoint
-Service endpoints are enabled per service, per subnet. To create a subnet and enable a service endpoint for the subnet:
+Service endpoints are enabled per service, per subnet.
+
+1. In the search box at the top of the portal page, search for **Virtual network**. Select **Virtual networks** in the search results.
-1. If you're not already on the virtual network resource page, you can search for the newly created virtual network in the box at the top of the portal. Enter *myVirtualNetwork*, and select it from the list.
+1. In **Virtual networks**, select **vnet-1**.
-1. Select **Subnets** under **Settings**, and then select **+ Subnet**, as shown:
+1. In the **Settings** section of **vnet-1**, select **Subnets**.
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/add-subnet.png" alt-text="Screenshot of adding subnet to an existing virtual network.":::
+1. Select **+ Subnet**.
-1. On the **Add subnet** page, enter or select the following information, and then select **Save**:
+1. On the **Add subnet** page, enter or select the following information:
- | Setting |Value |
+ | Setting | Value |
| | |
- | Name | Private |
- | Subnet address range | Leave as default|
- | Service endpoints | Select **Microsoft.Storage**|
- | Service endpoint policies | Leave default. *0 selected*. |
+ | Name | **subnet-private** |
+ | Subnet address range | Leave the default of **10.0.2.0/24**. |
+ | **SERVICE ENDPOINTS** | |
+ | Services| Select **Microsoft.Storage**|
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/add-subnet-settings.png" alt-text="Screenshot of add a subnet page with service endpoints configured.":::
+1. Select **Save**.
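If you script the same configuration, an Azure CLI sketch equivalent to the portal steps above might look like this (using the tutorial's **test-rg**, **vnet-1**, and **subnet-private** names):

```bash
# Create the subnet with a Microsoft.Storage service endpoint enabled.
az network vnet subnet create \
  --resource-group test-rg \
  --vnet-name vnet-1 \
  --name subnet-private \
  --address-prefixes 10.0.2.0/24 \
  --service-endpoints Microsoft.Storage
```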
> [!CAUTION] > Before enabling a service endpoint for an existing subnet that has resources in it, see [Change subnet settings](virtual-network-manage-subnet.md#change-subnet-settings). ## Restrict network access for a subnet
-By default, all virtual machine instances in a subnet can communicate with any resources. You can limit communication to and from all resources in a subnet by creating a network security group, and associating it to the subnet:
+By default, all virtual machine instances in a subnet can communicate with any resources. You can limit communication to and from all resources in a subnet by creating a network security group and associating it with the subnet.
-1. In the search box at the top of the Azure portal, search for **Network security groups**.
+1. In the search box at the top of the portal page, search for **Network security group**. Select **Network security groups** in the search results.
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/search-network-security-groups.png" alt-text="Screenshot of searching for network security groups.":::
+1. In **Network security groups**, select **+ Create**.
-1. On the *Network security groups* page, select **+ Create**.
+1. In the **Basics** tab of **Create network security group**, enter or select the following information:
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/network-security-groups-page.png" alt-text="Screenshot of network security groups landing page.":::
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **test-rg**. |
+ | **Instance details** | |
+ | Name | Enter **nsg-storage**. |
+ | Region | Select **East US 2**. |
-1. Enter or select the following information:
+1. Select **Review + create**, then select **Create**.
- |Setting|Value|
- |-|-|
- |Subscription| Select your subscription|
- |Resource group | Select *myResourceGroup* from the list|
- |Name| Enter **myNsgPrivate** |
- |Location| Select **East US** |
+### Create outbound NSG rules
-1. Select **Review + create**, and when the validation check is passed, select **Create**.
+1. In the search box at the top of the portal page, search for **Network security group**. Select **Network security groups** in the search results.
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/create-nsg-page.png" alt-text="Screenshot of create a network security group page.":::
+1. Select **nsg-storage**.
-1. After the network security group is created, select **Go to resource** or search for *myNsgPrivate* at the top of the Azure portal.
+1. Select **Outbound security rules** in **Settings**.
-1. Select **Outbound security rules** under *Settings* and then select **+ Add**.
+1. Select **+ Add**.
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/create-outbound-rule.png" alt-text="Screenshot of adding outbound security rule." lightbox="./media/tutorial-restrict-network-access-to-resources/create-outbound-rule-expanded.png":::
+1. Create a rule that allows outbound communication to the Azure Storage service. Enter or select the following information in **Add outbound security rule**:
-1. Create a rule that allows outbound communication to the Azure Storage service. Enter, or select, the following information, and then select **Add**:
-
- |Setting|Value|
- |-|-|
- |Source| Select **Service Tag** |
- |Source service tag | Select **VirtualNetwork** |
- |Source port ranges| * |
- |Destination | Select **Service Tag**|
- |Destination service tag | Select **Storage**|
- |Service | Leave default as *Custom*. |
- |Destination port ranges| Change to *445*. SMB protocol is used to connect to a file share created in a later step. |
- |Protocol|Any|
- |Action|Allow|
- |Priority|100|
- |Name|Rename to **Allow-Storage-All**|
+ | Setting | Value |
+ | - | -- |
+ | Source | Select **Service Tag**. |
+ | Source service tag | Select **VirtualNetwork**. |
+ | Source port ranges | Leave the default of **\***. |
+ | Destination | Select **Service Tag**. |
+ | Destination service tag | Select **Storage**. |
+ | Service | Leave default of **Custom**. |
+ | Destination port ranges | Enter **445**. </br> SMB protocol is used to connect to a file share created in a later step. |
+ | Protocol | Select **Any**. |
+ | Action | Select **Allow**. |
+ | Priority | Leave the default of **100**. |
+ | Name | Enter **allow-storage-all**. |
:::image type="content" source="./media/tutorial-restrict-network-access-to-resources/create-outbound-storage-rule.png" alt-text="Screenshot of creating an outbound security to access storage.":::
-1. Create another outbound security rule that denies communication to the internet. This rule overrides a default rule in all network security groups that allows outbound internet communication. Complete steps 6-9 from above using the following values and then select **Add**:
-
- |Setting|Value|
- |-|-|
- |Source| Select **Service Tag** |
- |Source service tag | Select **VirtualNetwork** |
- |Source port ranges| * |
- |Destination | Select **Service Tag**|
- |Destination service tag| Select **Internet**|
- |Service| Leave default as *Custom*. |
- |Destination port ranges| * |
- |Protocol|Any|
- |Action| Change default to **Deny**. |
- |Priority|110|
- |Name|Change to **Deny-Internet-All**|
-
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/create-outbound-internet-rule.png" alt-text="Screenshot of creating an outbound security to block internet access.":::
-
-1. Create an *inbound security rule* that allows Remote Desktop Protocol (RDP) traffic to the subnet from anywhere. The rule overrides a default security rule that denies all inbound traffic from the internet. Remote desktop connections are allowed to the subnet so that connectivity can be tested in a later step. Select **Inbound security rules** under *Settings* and then select **+ Add**.
-
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/create-inbound-rule.png" alt-text="Screenshot of adding inbound security rule." lightbox="./media/tutorial-restrict-network-access-to-resources/create-inbound-rule-expanded.png":::
-
-1. Enter or select the follow values and then select **Add**.
+1. Select **+ Add**.
- |Setting|Value|
- |-|-|
- |Source| Any |
- |Source port ranges| * |
- |Destination | Select **Service Tag**|
- |Destination service tag | Select **VirtualNetwork** |
- |Service| Leave default as *Custom*. |
- |Destination port ranges| Change to *3389* |
- |Protocol|Any|
- |Action|Allow|
- |Priority|120|
- |Name|Change to *Allow-RDP-All*|
+1. Create another outbound security rule that denies communication to the internet. This rule overrides a default rule in all network security groups that allows outbound internet communication. Complete the previous steps with the following values in **Add outbound security rule**:
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/create-inbound-rdp-rule.png" alt-text="Screenshot of creating an allow inbound remote desktop rule.":::
+ | Setting | Value |
+ | - | -- |
+ | Source | Select **Service Tag**. |
+ | Source service tag | Select **VirtualNetwork**. |
+ | Source port ranges | Leave the default of **\***. |
+ | Destination | Select **Service Tag**. |
+ | Destination service tag | Select **Internet**. |
+ | Service | Leave default of **Custom**. |
+ | Destination port ranges | Enter **\***. |
+ | Protocol | Select **Any**. |
+ | Action | Select **Deny**. |
+ | Priority | Leave the default **110**. |
+ | Name | Enter **deny-internet-all**. |
- >[!WARNING]
- > RDP port 3389 is exposed to the Internet. This is only recommended for testing. For *Production environments*, we recommend using a VPN or private connection.
-
-1. Select **Subnets** under *Settings* and then select **+ Associate**.
-
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/associate-subnets-page.png" alt-text="Screenshot of network security groups subnet association page.":::
-
-1. Select **myVirtualNetwork** under *Virtual Network* and then select **Private** under *Subnets*. Select **OK** to associate the network security group to the select subnet.
+ :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/create-outbound-internet-rule.png" alt-text="Screenshot of creating an outbound security to block internet access.":::
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/associate-private-subnet.png" alt-text="Screenshot of associating a network security group to a private subnet.":::
+1. Select **Add**.
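For reference, a scripted equivalent of the network security group and the two outbound rules above could look like the following Azure CLI sketch (the region and rule priorities mirror the portal values; treat it as illustrative only):

```bash
# Create the network security group.
az network nsg create \
  --resource-group test-rg \
  --name nsg-storage \
  --location eastus2

# Allow outbound SMB (port 445) from the virtual network to Azure Storage.
az network nsg rule create \
  --resource-group test-rg \
  --nsg-name nsg-storage \
  --name allow-storage-all \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol '*' \
  --source-address-prefixes VirtualNetwork \
  --destination-address-prefixes Storage \
  --destination-port-ranges 445

# Deny all other outbound traffic to the internet.
az network nsg rule create \
  --resource-group test-rg \
  --nsg-name nsg-storage \
  --name deny-internet-all \
  --priority 110 \
  --direction Outbound \
  --access Deny \
  --protocol '*' \
  --source-address-prefixes VirtualNetwork \
  --destination-address-prefixes Internet \
  --destination-port-ranges '*'
```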
-## Restrict network access to a resource
+### Associate the network security group to a subnet
-The steps required to restrict network access to resources created through Azure services, which are enabled for service endpoints will vary across services. See the documentation for individual services for specific steps for each service. The rest of this tutorial includes steps to restrict network access for an Azure Storage account, as an example.
+1. In the search box at the top of the portal page, search for **Network security group**. Select **Network security groups** in the search results.
-### Create a storage account
+1. Select **nsg-storage**.
-1. Select **+ Create a resource** on the upper, left corner of the Azure portal.
+1. Select **Subnets** in **Settings**.
-1. Enter "Storage account" in the search bar, and select it from the drop-down menu. Then select **Create**.
+1. Select **+ Associate**.
-1. Enter the following information:
+1. In **Associate subnet**, select **vnet-1** in **Virtual network**. Select **subnet-private** in **Subnet**.
- |Setting|Value|
- |-|-|
- |Subscription| Select your subscription|
- |Resource group| Select *myResourceGroup*|
- |Storage account name| Enter a name that is unique across all Azure locations. The name has to between 3-24 characters in length, using only numbers and lower-case letters.|
- |Region| Select **(US) East US** |
- |Performance|Standard|
- |Redundancy| Locally redundant storage (LRS)|
+ :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/associate-nsg-private-subnet.png" alt-text="Screenshot of private subnet associated with network security group.":::
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/create-storage-account.png" alt-text="Screenshot of create a new storage account.":::
+1. Select **OK**.
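The equivalent association with the Azure CLI is a single subnet update; this sketch reuses the tutorial's resource names:

```bash
# Associate the network security group with the subnet-private subnet.
az network vnet subnet update \
  --resource-group test-rg \
  --vnet-name vnet-1 \
  --name subnet-private \
  --network-security-group nsg-storage
```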
-1. Select **Create + review**, and when validation checks have passed, select **Create**.
+## Restrict network access to a resource
- >[!NOTE]
- > The deployment may take a couple of minutes to complete.
+The steps required to restrict network access to resources created through Azure services that are enabled for service endpoints vary across services. See the documentation for individual services for specific steps for each service. The rest of this tutorial includes steps to restrict network access for an Azure Storage account, as an example.
-1. After the storage account is created, select **Go to resource**.
### Create a file share in the storage account
-1. Select **File shares** under *Data storage*, and then select **+ File share**.
+1. In the search box at the top of the portal, enter **Storage account**. Select **Storage accounts** in the search results.
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/file-share-page.png" alt-text="Screenshot of file share page in a storage account.":::
+1. In **Storage accounts**, select the storage account you created in the previous step.
-1. Enter or set the following values for the file share, and then select **Create**:
+1. In **Data storage**, select **File shares**.
- |Setting|Value|
- |-|-|
- |Name| my-file-share|
- |Quota| Select **Set to maximum**. |
- |Tier| Leave as default, *Transaction optimized*. |
+1. Select **+ File share**.
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/create-new-file-share.png" alt-text="Screenshot of create new file share settings page.":::
+1. Enter or select the following information in **New file share**:
-1. The new file share should appear on the file share page, if not select the **Refresh** button at the top of the page.
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **file-share**. |
+ | Tier | Leave the default of **Transaction optimized**. |
-### Restrict network access to a subnet
+1. Select **Next: Backup**.
-By default, storage accounts accept network connections from clients in any network, including the internet. You can restrict network access from the internet, and all other subnets in all virtual networks (except the *Private* subnet in the *myVirtualNetwork* virtual network.) To restrict network access to a subnet:
+1. Deselect **Enable backup**.
-1. Select **Networking** under *Settings* for your (uniquely named) storage account.
+1. Select **Review + create**, then select **Create**.
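A scripted alternative for creating the share is shown below as a sketch; replace `<storage-account-name>` with your account's name (backup isn't configured here, matching the portal steps):

```bash
# Create the file share on the storage account.
az storage share-rm create \
  --resource-group test-rg \
  --storage-account <storage-account-name> \
  --name file-share
```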
-1. Select *Allow access from **Selected networks*** and then select **+ Add existing virtual network**.
+### Restrict network access to a subnet
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/storage-network-settings.png" alt-text="Screenshot of storage account networking settings page.":::
+By default, storage accounts accept network connections from clients in any network, including the internet. You can restrict network access from the internet and from all other subnets in all virtual networks, except the **subnet-private** subnet in the **vnet-1** virtual network.
-1. Under **Add networks**, select the following values, and then select **Add**:
+To restrict network access to a subnet:
- |Setting|Value|
- |-|-|
- |Subscription| Select your subscription|
- |Virtual networks| **myVirtualNetwork**|
- |Subnets| **Private**|
+1. In the search box at the top of the portal, enter **Storage account**. Select **Storage accounts** in the search results.
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/add-virtual-network.png" alt-text="Screenshot of add virtual network to storage account page.":::
+1. Select your storage account.
-1. Select the **Save** button to save the virtual network configurations.
+1. In **Security + networking**, select **Networking**.
-1. Select **Access keys** under *Security + networking* for the storage account and select **Show keys**. Note the value for key1 to use in a later step when mapping the file share in a VM.
+1. In the **Firewalls and virtual networks** tab, select **Enabled from selected virtual networks and IP addresses** in **Public network access**.
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/storage-access-key.png" alt-text="Screenshot of storage account key and connection strings." lightbox="./media/tutorial-restrict-network-access-to-resources/storage-access-key-expanded.png":::
+1. In **Virtual networks**, select **+ Add existing virtual network**.
-## Create virtual machines
+1. In **Add networks**, enter or select the following information:
-To test network access to a storage account, deploy a VM to each subnet.
+ | Setting | Value |
+ | - | -- |
+ | Subscription | Select your subscription. |
+ | Virtual networks | Select **vnet-1**. |
+ | Subnets | Select **subnet-private**. |
-### Create the first virtual machine
+ :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/restrict-network-access.png" alt-text="Screenshot of restriction of storage account to the subnet and virtual network created previously.":::
-1. On the Azure portal, select **+ Create a resource**.
+1. Select **Add**.
-1. Select **Compute**, and then **Create** under *Virtual machine*.
+1. Select **Save** to save the virtual network configurations.
-1. On the *Basics* tab, enter or select the following information:
+ :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/restrict-network-access-save.png" alt-text="Screenshot of storage account screen and confirmation of subnet restriction.":::
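An Azure CLI sketch of the same restriction (add the subnet as an allowed network, then deny all other networks) could look like this; `<storage-account-name>` is a placeholder:

```bash
# Allow access from the subnet-private subnet of vnet-1.
az storage account network-rule add \
  --resource-group test-rg \
  --account-name <storage-account-name> \
  --vnet-name vnet-1 \
  --subnet subnet-private

# Deny traffic from all other networks, including the internet.
az storage account update \
  --resource-group test-rg \
  --name <storage-account-name> \
  --default-action Deny
```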
- |Setting|Value|
- |-|-|
- |Subscription| Select your subscription|
- |Resource group| Select **myResourceGroup**, which was created earlier.|
- |Virtual machine name| Enter *myVmPublic*|
- |Region | (US) East US
- |Availability options| Availability zone|
- |Availability zone | 1 |
- |Image | Select an OS image. For this VM *Windows Server 2019 Datacenter - Gen1* is selected. |
- |Size | Select the VM Instance size you want to use |
- |Username|Enter a user name of your choosing.|
- |Password| Enter a password of your choosing. The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/windows/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).|
- |Public inbound ports | Allow selected ports |
- |Select inbound ports | Leave default set to *RDP (3389)* |
+## Create virtual machines
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/create-public-vm-settings.png" alt-text="Screenshot of create public virtual machine settings." lightbox="./media/tutorial-restrict-network-access-to-resources/create-public-vm-settings-expanded.png":::
-
-1. On the **Networking** tab, enter or select the following information:
+To test network access to a storage account, deploy a virtual machine to each subnet.
- |Setting|Value|
- |-|-|
- | Virtual Network | Select **myVirtualNetwork**. |
- | Subnet | Select **Public**. |
- | NIC network security group | Select **Advanced**. The portal automatically creates a network security group for you that allows port 3389. You'll need this port open to connect to the virtual machine in a later step. |
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/virtual-machine-networking.png" alt-text="Screenshot of create public virtual machine network settings." lightbox="./media/tutorial-restrict-network-access-to-resources/virtual-machine-networking-expanded.png":::
+### Create the second virtual machine
-1. Select **Review and create**, then **Create** and wait for the deployment to finish.
+1. Repeat the steps in the previous section to create a second virtual machine. Replace the following values in **Create a virtual machine**:
-1. Select **Go to resource**, or open the **Home > Virtual machines** page, and select the VM you just created *myVmPublic*, which should be started.
+ | Setting | Value |
+ | - | -- |
+ | Virtual machine name | Enter **vm-private**. |
+ | Subnet | Select **subnet-private**. |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **None**. |
-### Create the second virtual machine
+ > [!WARNING]
+ > Do not continue to the next step until the deployment is completed.
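If you'd rather create the second virtual machine from the command line, a sketch with the same values (no public IP, no NIC-level NSG) might look like this; the image alias, user name, and password are placeholders:

```bash
# Create vm-private in the subnet-private subnet with no public IP and no NIC NSG.
az vm create \
  --resource-group test-rg \
  --name vm-private \
  --image Win2022Datacenter \
  --vnet-name vnet-1 \
  --subnet subnet-private \
  --public-ip-address "" \
  --nsg "" \
  --admin-username azureuser \
  --admin-password "<your-password>"
```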
-1. Repeat steps 1-5 to create a second virtual machine. In step 3, name the virtual machine *myVmPrivate*. In step 4, select the **Private** subnet and set *NIC network security group* to **None**.
+## Confirm access to storage account
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/virtual-machine-2-networking.png" alt-text="Screenshot of create private virtual machine network settings." lightbox="./media/tutorial-restrict-network-access-to-resources/virtual-machine-2-networking-expanded.png":::
+The virtual machine assigned to the **subnet-private** subnet is used to confirm access to the storage account. The virtual machine assigned to the **subnet-1** subnet is used to confirm that access to the storage account is blocked.
-1. Select **Review and create**, then **Create** and wait for the deployment to finish.
+### Get storage account access key
- > [!WARNING]
- > Do not continue to the next step until the deployment is completed.
+1. In the search box at the top of the portal, enter **Storage account**. Select **Storage accounts** in the search results.
-1. Select **Go to resource**, or open the **Home > Virtual machines** page, and select the VM you just created *myVmPrivate*, which should be started.
+1. In **Storage accounts**, select your storage account.
-## Confirm access to storage account
+1. In **Security + networking**, select **Access keys**.
-1. Once the *myVmPrivate* VM has been created, go to the overview page of the virtual machine. Connect to the VM by selecting the **Connect** button and then select **RDP** from the drop-down.
+1. Copy the value of **key1**. You may need to select the **Show** button to display the key.
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/connect-private-vm.png" alt-text="Screenshot of connect button for private virtual machine.":::
+ :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/storage-account-access-key.png" alt-text="Screenshot of storage account access key.":::
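You can also retrieve the key from the command line. This sketch prints the first key only; `<storage-account-name>` is a placeholder:

```bash
# Retrieve the value of key1 for the storage account.
az storage account keys list \
  --resource-group test-rg \
  --account-name <storage-account-name> \
  --query "[0].value" \
  --output tsv
```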
-1. Select the **Download RDP File** to download the remote desktop file to your computer.
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/download-rdp-file.png" alt-text="Screenshot of download RDP file for private virtual machine.":::
-
-1. Open the downloaded rdp file. When prompted, select **Connect**.
+1. Select **vm-private**.
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/rdp-connect.png" alt-text="Screenshot of connection screen for private virtual machine.":::
+1. Select **Bastion** in **Operations**.
-1. Enter the user name and password you specified when creating the VM. You may need to select **More choices**, then **Use a different account** to specify the credentials you entered when you created the VM. For the email field, enter the "Administrator account: username" credentials you specified earlier. Select **OK** to sign into the VM.
+1. Enter the username and password you specified when creating the virtual machine. Select **Connect**.
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/credential-screen.png" alt-text="Screenshot of credential screen for private virtual machine.":::
+1. Open Windows PowerShell. Use the following script to map the Azure file share to drive Z.
- > [!NOTE]
- > You may receive a certificate warning during the sign-in process. If you receive the warning, select **Yes** or **Continue**, to proceed with the connection.
+ * Replace `<storage-account-key>` with the key you copied in the previous step.
-1. Once signed in, open Windows PowerShell. Using the script below, map the Azure file share to drive Z using PowerShell. Replace `<storage-account-key>` and both `<storage-account-name>` variable with values you supplied and made note of earlier in the [Create a storage account](#create-a-storage-account) steps.
+ * Replace `<storage-account-name>` with the name of your storage account. In this example, it's **storage8675**.
```powershell
- $acctKey = ConvertTo-SecureString -String "<storage-account-key>" -AsPlainText -Force
- $credential = New-Object System.Management.Automation.PSCredential -ArgumentList "Azure\<storage-account-name>", $acctKey
- New-PSDrive -Name Z -PSProvider FileSystem -Root "\\<storage-account-name>.file.core.windows.net\my-file-share" -Credential $credential
+ $key = @{
+ String = "<storage-account-key>"
+ }
+ $acctKey = ConvertTo-SecureString @key -AsPlainText -Force
+
+ $cred = @{
+ ArgumentList = "Azure\<storage-account-name>", $acctKey
+ }
+ $credential = New-Object System.Management.Automation.PSCredential @cred
+
+ $map = @{
+ Name = "Z"
+ PSProvider = "FileSystem"
+ Root = "\\<storage-account-name>.file.core.windows.net\file-share"
+ Credential = $credential
+ }
+ New-PSDrive @map
``` PowerShell returns output similar to the following example output:
- ```powershell
+ ```output
Name Used (GB) Free (GB) Provider Root - -- -
- Z FileSystem \\mystorage007.file.core.windows.net\my-f...
+ Z FileSystem \\storage8675.file.core.windows.net\f...
``` The Azure file share successfully mapped to the Z drive.
-1. Close the remote desktop session to the *myVmPrivate* VM.
+1. Close the Bastion connection to **vm-private**.
## Confirm access is denied to storage account
-### From myVmPublic:
+### From vm-1
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+1. Select **vm-1**.
-1. Enter *myVmPublic* In the **Search resources, services, and docs** box at the top of the portal. When **myVmPublic** appears in the search results, select it.
+1. Select **Bastion** in **Operations**.
-1. Repeat steps 1-5 above in [Confirm access to storage account](#confirm-access-to-storage-account) for the *myVmPublic* VM.
+1. Enter the username and password you specified when creating the virtual machine. Select **Connect**.
- After a short wait, you receive a `New-PSDrive : Access is denied` error. Access is denied because the *myVmPublic* VM is deployed in the *Public* subnet. The *Public* subnet doesn't have a service endpoint enabled for Azure Storage. The storage account only allows network access from the *Private* subnet, not the *Public* subnet.
+1. Repeat the previous command to attempt to map the drive to the file share in the storage account. You may need to copy the storage account access key again for this procedure:
```powershell
- New-PSDrive : Access is denied
- At line:1 char:1
- + New-PSDrive -Name Z -PSProvider FileSystem -Root "\\mystorage007.file ...
- + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- + CategoryInfo : InvalidOperation: (Z:PSDriveInfo) [New-PSDrive], Win32Exception
- + Fu llyQualifiedErrorId : CouldNotMapNetworkDrive,Microsoft.PowerShell.Commands.NewPSDriveCommand
+ $key = @{
+ String = "<storage-account-key>"
+ }
+ $acctKey = ConvertTo-SecureString @key -AsPlainText -Force
+
+ $cred = @{
+ ArgumentList = "Azure\<storage-account-name>", $acctKey
+ }
+ $credential = New-Object System.Management.Automation.PSCredential @cred
+
+ $map = @{
+ Name = "Z"
+ PSProvider = "FileSystem"
+ Root = "\\<storage-account-name>.file.core.windows.net\file-share"
+ Credential = $credential
+ }
+ New-PSDrive @map
+ ```
+
+1. You should receive the following error message:
+ ```output
+ New-PSDrive : Access is denied
+ At line:1 char:5
+ + New-PSDrive @map
+ + ~~~~~~~~~~~~~~~~
+ + CategoryInfo : InvalidOperation: (Z:PSDriveInfo) [New-PSDrive], Win32Exception
+ + FullyQualifiedErrorId : CouldNotMapNetworkDrive,Microsoft.PowerShell.Commands.NewPSDriveCommand
```
-4. Close the remote desktop session to the *myVmPublic* VM.
+4. Close the Bastion connection to **vm-1**.
### From a local machine:
-1. In the Azure portal, go to the uniquely named storage account you created earlier. For example, *mystorage007*.
+1. In the search box at the top of the portal, enter **Storage account**. Select **Storage accounts** in the search results.
+
+1. In **Storage accounts**, select your storage account.
+
+1. In **Data storage**, select **File shares**.
-1. Select **File shares** under *Data storage*, and then select the *my-file-share* you created earlier.
+1. Select **file-share**.
+
+1. Select **Browse** in the left-hand menu.
1. You should receive the following error message: :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/access-denied-error.png" alt-text="Screenshot of access denied error message."::: >[!NOTE]
-> The access is denied because your computer is not in the *Private* subnet of the *MyVirtualNetwork* virtual network.
+> The access is denied because your computer isn't in the **subnet-private** subnet of the **vnet-1** virtual network.
-## Clean up resources
-When no longer needed, delete the resource group and all resources it contains:
+## Next steps
-1. Enter *myResourceGroup* in the **Search** box at the top of the portal. When you see **myResourceGroup** in the search results, select it.
+In this tutorial:
-1. Select **Delete resource group**.
+* You enabled a service endpoint for a virtual network subnet.
-1. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
+* You learned that you can enable service endpoints for resources deployed from multiple Azure services.
-## Next steps
+* You created an Azure Storage account and restricted the network access to the storage account to only resources within a virtual network subnet.
-In this tutorial, you enabled a service endpoint for a virtual network subnet. You learned that you can enable service endpoints for resources deployed from multiple Azure services. You created an Azure Storage account and restricted the network access to the storage account to only resources within a virtual network subnet. To learn more about service endpoints, see [Service endpoints overview](virtual-network-service-endpoints-overview.md) and [Manage subnets](virtual-network-manage-subnet.md).
+To learn more about service endpoints, see [Service endpoints overview](virtual-network-service-endpoints-overview.md) and [Manage subnets](virtual-network-manage-subnet.md).
If you have multiple virtual networks in your account, you may want to establish connectivity between them so that resources can communicate with each other. To learn how to connect virtual networks, advance to the next tutorial.
virtual-wan Hub Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/hub-settings.md
By default, the virtual hub router is automatically configured to deploy with a
When you deploy a new virtual hub, you can specify additional routing infrastructure units to increase the default virtual hub capacity in increments of 1 Gbps and 1000 VMs. This feature gives you the ability to secure upfront capacity without having to wait for the virtual hub to scale out when more throughput is needed. The scale unit on which the virtual hub is created becomes the minimum capacity. Creating a virtual hub without a gateway takes about 5 - 7 minutes while creating a virtual hub and a gateway can take about 30 minutes to complete. You can view routing infrastructure units, router Gbps, and number of VMs supported, in the Azure portal **Virtual hub** pages for **Create virtual hub** and **Edit virtual hub**.
-When increasing the virtual hub capacity, the virtual hub router will continue to support traffic at its current capacity until the scale out is complete. Scaling out the virtual hub router may take up to 25 minutes.
+When increasing the virtual hub capacity, the virtual hub router continues to support traffic at its current capacity until the scale-out is complete. Scaling out to additional routing infrastructure units can take up to 25 minutes. Also note that currently, regardless of the number of routing infrastructure units deployed, traffic may experience performance degradation if more than 1.5 Gbps is sent in a single TCP flow.
### Configure virtual hub capacity
virtual-wan Monitor Point To Site Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-point-to-site-connections.md
The Azure workbook is now ready to be created. We'll use a mix of built-in funct
## Example queries
-The following section shows example queries.
+The following section shows example log queries to run in your Log Analytics workspace.
-### P2S User successful connections with IP
+### P2S successful connections with IP
+```kusto
+AzureDiagnostics
+| where Category == "P2SDiagnosticLog" and Message has "Connection successful" and Message has "Username={UserName}"
+| project splitted=split(Message, "Username=")
+| mv-expand col1=splitted[0], col2=splitted[1], col3=splitted[2]
+| project user=split(col2, " ")
+| mv-expand username=user[0]
+| project ['user']
+```
### EAP (Extensible Authentication Protocol) authentication succeeded
+```kusto
+AzureDiagnostics
+| where Category == "P2SDiagnosticLog" and Message has "EAP authentication succeeded" and Message has "Username={UserName}"
+| project Message, MessageFields = split(Message, " "), Userinfo = split (Message, "Username=")
+| mv-expand MessageId=MessageFields[2], user=split(Userinfo[1]," ")
+| project MessageId, Message, Userinfo[1]
+```
### P2S VPN user info
+```kusto
+AzureDiagnostics
+| where Category == "P2SDiagnosticLog" and Message has "Username={UserName}"
+| project Message, MessageFields = split(Message, " "), Userinfo = split (Message, "Username=")
+| mv-expand MessageId=MessageFields[2], Username=Userinfo[1]
+| project MessageId, Message, Username;
+```
### P2S VPN successful connections per user
+```kusto
+AzureDiagnostics
+| where Category == "P2SDiagnosticLog" and Message has "Connection successful"
+| project splitted=split(Message, "Username=")
+| mv-expand col1=splitted[0], col2=splitted[1], col3=splitted[2]
+| project user=split(col2, " ")
+| mv-expand username=user[0]
+| project-away ['user']
+| summarize count() by tostring(username)
+| sort by count_ desc
+```
### P2S VPN connections
+```kusto
+AzureDiagnostics
+| where Category == "P2SDiagnosticLog"
+| project TimeGenerated, OperationName, Message, Resource, ResourceGroup
+| sort by TimeGenerated asc
+```
### Successful P2S VPN connections
+```kusto
+AzureDiagnostics
+| where Category == "P2SDiagnosticLog" and Message has "Connection successful"
+| project TimeGenerated, Resource, Message
+```
### Failed P2S VPN connections
+```kusto
+AzureDiagnostics
+| where Category == "P2SDiagnosticLog" and Message has "Connection failed"
+| project TimeGenerated, Resource, Message
+```
### VPN connection count by P2SDiagnosticLog
+```kusto
+AzureDiagnostics
+| where Category == "P2SDiagnosticLog" and Message has "Connection successful" and Message has "Username={UserName}"| count
+```
### IKEDiagnosticLog
+```kusto
+AzureDiagnostics
+| where Category == "IKEDiagnosticLog"
+| project TimeGenerated, OperationName, Message, Resource, ResourceGroup
+| sort by TimeGenerated asc
+```
### Additional IKE diagnostics details
+```kusto
+AzureDiagnostics
+| where Category == "IKEDiagnosticLog"
+| extend Message1=Message
+| parse Message with * "Remote " RemoteIP ":" * "500: Local " LocalIP ":" * "500: " Message2
+| extend Event = iif(Message has "SESSION_ID", Message2, Message1)
+| project TimeGenerated, RemoteIP, LocalIP, Event, Level
+| sort by TimeGenerated asc
+```
### P2S VPN statistics
+```kusto
+AzureDiagnostics
+| where Category == "P2SDiagnosticLog" and Message has "Statistics"
+| project Message, MessageFields = split (Message, " ")
+| mv-expand MessageId=MessageFields[2]
+| project MessageId, Message;
+```
## Next steps
vpn-gateway Point To Site About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-about.md
description: Learn about Point-to-Site VPN.
Previously updated : 02/13/2023 Last updated : 08/11/2023
vpn-gateway Tutorial Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-site-to-site-portal.md
Previously updated : 06/23/2023 Last updated : 08/10/2023
vpn-gateway Vpn Gateway About Vpn Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-devices.md
description: Learn about VPN devices and IPsec parameters for Site-to-Site cross
Previously updated : 06/14/2023 Last updated : 08/11/2023
vpn-gateway Vpn Gateway About Vpn Gateway Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md
description: Learn about VPN Gateway resources and configuration settings.
Previously updated : 06/27/2023 Last updated : 08/10/2023 ms.devlang: azurecli
New-AzVirtualNetworkGatewayConnection -Name localtovon -ResourceGroupName testrg
-ConnectionType IPsec -SharedKey 'abc123' ```
+## <a name="connectionmode"></a>Connection modes
++ ## <a name="vpntype"></a>VPN types When you create the virtual network gateway for a VPN gateway configuration, you must specify a *VPN type*. The VPN type that you choose depends on the connection topology that you want to create. For example, a P2S connection requires a RouteBased VPN type. A VPN type can also depend on the hardware that you're using. S2S configurations require a VPN device. Some VPN devices only support a certain VPN type.
Before you create a VPN gateway, you must create a gateway subnet. The gateway s
When you create the gateway subnet, you specify the number of IP addresses that the subnet contains. The IP addresses in the gateway subnet are allocated to the gateway VMs and gateway services. Some configurations require more IP addresses than others.
-When you're planning your gateway subnet size, refer to the documentation for the configuration that you're planning to create. For example, the ExpressRoute/VPN Gateway coexist configuration requires a larger gateway subnet than most other configurations. While it's possible to create a gateway subnet as small as /29 (applicable to the Basic SKU only), all other SKUs require a gateway subnet of size /27 or larger ( /27, /26, /25 etc.). You may want to create a gateway subnet larger than /27 so that the subnet has enough IP addresses to accommodate possible future configurations.
+When you're planning your gateway subnet size, refer to the documentation for the configuration that you're planning to create. For example, the ExpressRoute/VPN Gateway coexist configuration requires a larger gateway subnet than most other configurations. While it's possible to create a gateway subnet as small as /29 (applicable to the Basic SKU only), all other SKUs require a gateway subnet of size /27 or larger (/27, /26, /25 etc.). You may want to create a gateway subnet larger than /27 so that the subnet has enough IP addresses to accommodate possible future configurations.
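As an Azure CLI illustration of the sizing guidance (the resource group, virtual network, and address range are placeholders; the document's own PowerShell example follows):

```bash
# Create a /27 gateway subnet. The subnet must be named GatewaySubnet.
az network vnet subnet create \
  --resource-group TestRG1 \
  --vnet-name VNet1 \
  --name GatewaySubnet \
  --address-prefixes 10.1.255.0/27
```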
The following Resource Manager PowerShell example shows a gateway subnet named GatewaySubnet. You can see the CIDR notation specifies a /27, which allows for enough IP addresses for most configurations that currently exist.
vpn-gateway Vpn Gateway About Vpngateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpngateways.md
# Customer intent: As someone with a basic network background, but is new to Azure, I want to understand the capabilities of Azure VPN Gateway so that I can securely connect to my Azure virtual networks. Previously updated : 12/20/2022 Last updated : 08/11/2023
When you create a virtual network gateway, you specify the gateway SKU that you
* For more information about gateway SKUs, including supported features, production and dev-test, and configuration steps, see the [VPN Gateway Settings - Gateway SKUs](vpn-gateway-about-vpn-gateway-settings.md#gwsku) article. * For Legacy SKU information, see [Working with Legacy SKUs](vpn-gateway-about-skus-legacy.md).
-* The Basic SKU does not support IPv6.
+* The Basic SKU doesn't support IPv6.
### <a name="benchmark"></a>Gateway SKUs by tunnel, connection, and throughput
vpn-gateway Vpn Gateway Howto Always On User Tunnel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-always-on-user-tunnel.md
description: Learn how to configure an Always On VPN user tunnel for your VPN ga
Previously updated : 04/29/2021 Last updated : 08/11/2023
vpn-gateway Vpn Gateway Howto Aws Bgp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-aws-bgp.md
Previously updated : 08/01/2023 Last updated : 08/10/2023
Repeat these steps to create each of the required connections.
* **IPsec / IKE policy**: Default * **Use policy based traffic selector**: Disable * **DPD timeout in seconds**: leave the default
- * **Connection Mode**: You can select any of the available options (Default, Initiator Only, Responder Only) for **Connection Mode**, then select **Save**.
+ * **Connection Mode**: You can select any of the available options (Default, Initiator Only, Responder Only). For more information, see [VPN Gateway settings - connection modes](vpn-gateway-about-vpn-gateway-settings.md#connectionmode).
+1. Select **Save**.
1. **Review + create** to create the connection. 1. Repeat these steps to create additional connections. 1. Before continuing to the next section, verify that you have a **local network gateway** and **connection** for **each of your four AWS tunnels**.
vpn-gateway Vpn Gateway Howto Point To Site Resource Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md
description: Learn how to configure VPN Gateway server settings for P2S configur
Previously updated : 04/10/2023 Last updated : 08/11/2023
web-application-firewall Afds Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/afds-overview.md
Title: What is Azure web application firewall on Azure Front Door?
-description: Learn how Azure web application firewall on Azure Front Door service protects your web applications from malicious attacks.
+ Title: What is Azure Web Application Firewall on Azure Front Door?
+description: Learn how Azure Web Application Firewall on Azure Front Door protects your web applications from malicious attacks.
# Azure Web Application Firewall on Azure Front Door
-Azure Web Application Firewall (WAF) on Azure Front Door provides centralized protection for your web applications. WAF defends your web services against common exploits and vulnerabilities. It keeps your service highly available for your users and helps you meet compliance requirements.
+Azure Web Application Firewall on Azure Front Door provides centralized protection for your web applications. A web application firewall (WAF) defends your web services against common exploits and vulnerabilities. It keeps your service highly available for your users and helps you meet compliance requirements.
-WAF on Front Door is a global and centralized solution. It's deployed on Azure network edge locations around the globe. WAF enabled web applications inspect every incoming request delivered by Front Door at the network edge.
+Azure Web Application Firewall on Azure Front Door is a global and centralized solution. It's deployed on Azure network edge locations around the globe. WAF-enabled web applications inspect every incoming request delivered by Azure Front Door at the network edge.
-WAF prevents malicious attacks close to the attack sources, before they enter your virtual network. You get global protection at scale without sacrificing performance. A WAF policy easily links to any Front Door profile in your subscription. New rules can be deployed within minutes, so you can respond quickly to changing threat patterns.
+A WAF prevents malicious attacks close to the attack sources before they enter your virtual network. You get global protection at scale without sacrificing performance. A WAF policy easily links to any Azure Front Door profile in your subscription. New rules can be deployed within minutes, so you can respond quickly to changing threat patterns.
-![Azure web application firewall](../media/overview/wafoverview.png)
+![Screenshot that shows Azure Web Application Firewall.](../media/overview/wafoverview.png)
[!INCLUDE [ddos-waf-recommendation](../../../includes/ddos-waf-recommendation.md)]
-Azure Front Door has [two tiers](../../frontdoor/standard-premium/overview.md): Front Door Standard and Front Door Premium. WAF is natively integrated with Front Door Premium with full capabilities. For Front Door Standard, only [custom rules](#custom-authored-rules) are supported.
+Azure Front Door has [two tiers](../../frontdoor/standard-premium/overview.md):
-## Protection
+- Standard
+- Premium
-* Protect your web applications from web vulnerabilities and attacks without modification to back-end code.
+Azure Web Application Firewall is natively integrated with Azure Front Door Premium with full capabilities. For Azure Front Door Standard, only [custom rules](#custom-authored-rules) are supported.
-* Protect your web applications from malicious bots with the IP Reputation ruleset.
+## Protection
-* Protect your application against DDoS attacks. For more information, see [Application DDoS Protection](../shared/application-ddos-protection.md).
+Azure Web Application Firewall protects your:
+* Web applications from web vulnerabilities and attacks without modifications to back-end code.
+* Web applications from malicious bots with the IP Reputation Rule Set.
+* Applications against DDoS attacks. For more information, see [Application DDoS protection](../shared/application-ddos-protection.md).
## WAF policy and rules
-You can configure a [WAF policy](waf-front-door-create-portal.md) and associate that policy to one or more Front Door front-ends for protection. A WAF policy consists of two types of security rules:
--- custom rules that are authored by the customer.
+You can configure a [WAF policy](waf-front-door-create-portal.md) and associate that policy to one or more Azure Front Door domains for protection. A WAF policy consists of two types of security rules:
-- managed rule sets that are a collection of Azure-managed pre-configured set of rules.
+- Custom rules that the customer created.
+- Managed rule sets that are a collection of Azure-managed preconfigured sets of rules.
-When both are present, custom rules are processed before processing the rules in a managed rule set. A rule is made of a match condition, a priority, and an action. Action types supported are: ALLOW, BLOCK, LOG, and REDIRECT. You can create a fully customized policy that meets your specific application protection requirements by combining managed and custom rules.
+When both are present, custom rules are processed before processing the rules in a managed rule set. A rule is made of a match condition, a priority, and an action. Action types supported are ALLOW, BLOCK, LOG, and REDIRECT. You can create a fully customized policy that meets your specific application protection requirements by combining managed and custom rules.
-Rules within a policy are processed in a priority order. Priority is a unique integer that defines the order of rules to process. Smaller integer value denotes a higher priority and those rules are evaluated before rules with a higher integer value. Once a rule is matched, the corresponding action that was defined in the rule is applied to the request. Once such a match is processed, rules with lower priorities aren't processed further.
+Rules within a policy are processed in a priority order. Priority is a unique integer that defines the order of rules to process. A smaller integer value denotes a higher priority, and those rules are evaluated before rules with a higher integer value. After a rule is matched, the corresponding action that was defined in the rule is applied to the request. After such a match is processed, rules with lower priorities aren't processed further.
-A web application delivered by Front Door can have only one WAF policy associated with it at a time. However, you can have a Front Door configuration without any WAF policies associated with it. If a WAF policy is present, it's replicated to all of our edge locations to ensure consistent security policies across the world.
+A web application delivered by Azure Front Door can have only one WAF policy associated with it at a time. However, you can have an Azure Front Door configuration without any WAF policies associated with it. If a WAF policy is present, it's replicated to all of our edge locations to ensure consistent security policies across the world.
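To make the rule anatomy concrete, here's a minimal PowerShell sketch (assuming the Az.FrontDoor module; the rule names, IP range, path, policy, and resource group names are placeholders) that combines an allow rule at priority 1 with a block rule at priority 2, so the allow rule is evaluated first:

```azurepowershell
# Placeholder IP range: requests from this range are allowed at priority 1
$allowCondition = New-AzFrontDoorWafMatchConditionObject `
    -MatchVariable RemoteAddr `
    -OperatorProperty IPMatch `
    -MatchValue "198.51.100.0/24"

$allowRule = New-AzFrontDoorWafCustomRuleObject `
    -Name "AllowPartnerRange" `
    -RuleType MatchRule `
    -MatchCondition $allowCondition `
    -Action Allow `
    -Priority 1

# Placeholder path: requests containing /admin are blocked, but only if no
# higher-priority rule (such as the allow rule above) already matched
$blockCondition = New-AzFrontDoorWafMatchConditionObject `
    -MatchVariable RequestUri `
    -OperatorProperty Contains `
    -MatchValue "/admin"

$blockRule = New-AzFrontDoorWafCustomRuleObject `
    -Name "BlockAdminPath" `
    -RuleType MatchRule `
    -MatchCondition $blockCondition `
    -Action Block `
    -Priority 2

# Both custom rules go into one policy; managed rules could be added with -ManagedRule
New-AzFrontDoorWafPolicy `
    -Name "examplePolicy" `
    -ResourceGroupName "exampleRG" `
    -Customrule $allowRule, $blockRule `
    -Mode Prevention `
    -EnabledState Enabled
```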
## WAF modes
-WAF policy can be configured to run in the following two modes:
+You can configure a WAF policy to run in two modes:
-- **Detection mode:** When run in detection mode, WAF doesn't take any other actions other than monitors and logs the request and its matched WAF rule to WAF logs. You can turn on logging diagnostics for Front Door. When you use the portal, go to the **Diagnostics** section.--- **Prevention mode:** In prevention mode, WAF takes the specified action if a request matches a rule. If a match is found, no further rules with lower priority are evaluated. Any matched requests are also logged in the WAF logs.
+- **Detection**: When a WAF runs in detection mode, it only monitors and logs the request and its matched WAF rule to WAF logs. It doesn't take any other actions. You can turn on logging diagnostics for Azure Front Door. When you use the portal, go to the **Diagnostics** section.
+- **Prevention**: In prevention mode, a WAF takes the specified action if a request matches a rule. If a match is found, no further rules with lower priority are evaluated. Any matched requests are also logged in the WAF logs.
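As a hedged sketch, an existing policy can be moved from detection to prevention mode with Azure PowerShell (the policy and resource group names are placeholders):

```azurepowershell
# Switch an existing WAF policy from Detection to Prevention mode
Update-AzFrontDoorWafPolicy `
    -Name "examplePolicy" `
    -ResourceGroupName "exampleRG" `
    -Mode Prevention
```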
## WAF actions
-WAF customers can choose to run from one of the actions when a request matches a rule's conditions:
+WAF customers can choose one of the following actions to run when a request matches a rule's conditions:
-- **Allow:** Request passes through the WAF and is forwarded to back-end. No further lower priority rules can block this request.-- **Block:** The request is blocked and WAF sends a response to the client without forwarding the request to the back-end.-- **Log:** Request is logged in the WAF logs and WAF continues evaluating lower priority rules.-- **Redirect:** WAF redirects the request to the specified URI. The URI specified is a policy level setting. Once configured, all requests that match the **Redirect** action will be sent to that URI.-- **Anomaly score:** This is the default action for Default Rule Set (DRS) 2.0 or later and is not applicable for the Bot Manager ruleset. The total anomaly score is increased incrementally when a rule with this action is matched.
+- **Allow**: The request passes through the WAF and is forwarded to the origin. No further lower priority rules can block this request.
+- **Block**: The request is blocked and the WAF sends a response to the client without forwarding the request to the origin.
+- **Log**: The request is logged in the WAF logs and the WAF continues evaluating lower priority rules.
+- **Redirect**: The WAF redirects the request to the specified URI. The URI specified is a policy-level setting. After configuration, all requests that match the **Redirect** action are sent to that URI.
+- **Anomaly score**: The total anomaly score is increased incrementally when a rule with this action is matched. This default action is for Default Rule Set 2.0 or later. It isn't applicable for the Bot Manager Rule Set.
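Because the redirect URI is a policy-level setting, a Redirect rule is configured in two parts, as in this sketch (assuming the Az.FrontDoor module; the path, URI, and names are placeholders):

```azurepowershell
# Requests that match this condition are redirected rather than blocked
$signupCondition = New-AzFrontDoorWafMatchConditionObject `
    -MatchVariable RequestUri `
    -OperatorProperty Contains `
    -MatchValue "/signup"

$redirectRule = New-AzFrontDoorWafCustomRuleObject `
    -Name "RedirectSignup" `
    -RuleType MatchRule `
    -MatchCondition $signupCondition `
    -Action Redirect `
    -Priority 10

# The redirect target is defined once on the policy, not on each rule
New-AzFrontDoorWafPolicy `
    -Name "redirectPolicy" `
    -ResourceGroupName "exampleRG" `
    -Customrule $redirectRule `
    -RedirectUrl "https://example.com/maintenance" `
    -Mode Prevention `
    -EnabledState Enabled
```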
## WAF rules
-A WAF policy can consist of two types of security rules - custom rules, authored by the customer and managed rulesets, Azure-managed pre-configured set of rules.
-
-### Custom authored rules
+A WAF policy can consist of two types of security rules:
-You can configure custom rules WAF as follows:
+- Custom rules that the customer authors.
+- Managed rule sets, which are Azure-managed preconfigured sets of rules.
-- **IP allow list and block list:** You can control access to your web applications based on a list of client IP addresses or IP address ranges. Both IPv4 and IPv6 address types are supported. This list can be configured to either block or allow those requests where the source IP matches an IP in the list.
+### Custom-authored rules
-- **Geographic based access control:** You can control access to your web applications based on the country code that's associated with a client's IP address.
+To configure custom rules for a WAF, use the following controls (a brief PowerShell sketch after this list illustrates two of them):
-- **HTTP parameters-based access control:** You can base rules on string matches in HTTP/HTTPS request parameters. For example, query strings, POST args, Request URI, Request Header, and Request Body.--- **Request method-based access control:** You base rules on the HTTP request method of the request. For example, GET, PUT, or HEAD.--- **Size constraint:** You can base rules on the lengths of specific parts of a request such as query string, Uri, or request body.--- **Rate limiting rules:** A rate control rule limits abnormally high traffic from any client IP address. You may configure a threshold on the number of web requests allowed from a client IP during a one-minute duration. This rule is distinct from an IP list-based allow/block custom rule that either allows all or blocks all request from a client IP. Rate limits can be combined with additional match conditions such as HTTP(S) parameter matches for granular rate control.
+- **IP allow list and block list**: You can control access to your web applications based on a list of client IP addresses or IP address ranges. Both IPv4 and IPv6 address types are supported. This list can be configured to either block or allow those requests where the source IP matches an IP in the list.
+- **Geographic-based access control**: You can control access to your web applications based on the country code that's associated with a client's IP address.
+- **HTTP parameters-based access control**: You can base rules on string matches in HTTP/HTTPS request parameters. Examples include query strings, POST args, Request URI, Request Header, and Request Body.
+- **Request method-based access control**: You base rules on the HTTP request method of the request. Examples include GET, PUT, or HEAD.
+- **Size constraint**: You can base rules on the lengths of specific parts of a request, such as the query string, request URI, or request body.
+- **Rate limiting rules**: A rate control rule limits abnormally high traffic from any client IP address. You might configure a threshold on the number of web requests allowed from a client IP during a one-minute duration. This rule is distinct from an IP list-based allow/block custom rule that either allows all or blocks all requests from a client IP. Rate limits can be combined with other match conditions, such as HTTP(S) parameter matches for granular rate control.
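The following sketch illustrates two of these controls together, a geographic match feeding a rate limit rule (assuming the Az.FrontDoor module; the country codes, threshold, and names are placeholders):

```azurepowershell
# Geo match: client IPs that geolocate to the listed country codes
$geoCondition = New-AzFrontDoorWafMatchConditionObject `
    -MatchVariable RemoteAddr `
    -OperatorProperty GeoMatch `
    -MatchValue "US", "CA"

# Rate limit rule: block a client IP that exceeds 1,000 matching requests in one minute
$rateLimitRule = New-AzFrontDoorWafCustomRuleObject `
    -Name "RateLimitNorthAmerica" `
    -RuleType RateLimitRule `
    -MatchCondition $geoCondition `
    -RateLimitThreshold 1000 `
    -Action Block `
    -Priority 5
```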
### Azure-managed rule sets
-Azure-managed rule sets provide an easy way to deploy protection against a common set of security threats. Since such rulesets are managed by Azure, the rules are updated as needed to protect against new attack signatures. The Azure-managed Default Rule Set includes rules against the following threat categories:
+Azure-managed rule sets provide an easy way to deploy protection against a common set of security threats. Because Azure manages these rule sets, the rules are updated as needed to protect against new attack signatures. The Azure-managed Default Rule Set includes rules against the following threat categories:
- Cross-site scripting - Java attacks
Azure-managed rule sets provide an easy way to deploy protection against a commo
- SQL injection protection - Protocol attackers
-Custom rules are always applied before rules in the Default Rule Set are evaluated. If a request matches a custom rule, the corresponding rule action is applied. The request is either blocked or passed through to the back-end. No other custom rules or the rules in the Default Rule Set are processed. You can also remove the Default Rule Set from your WAF policies.
+Custom rules are always applied before rules in the Default Rule Set are evaluated. If a request matches a custom rule, the corresponding rule action is applied. The request is either blocked or passed through to the back end. No other custom rules or the rules in the Default Rule Set are processed. You can also remove the Default Rule Set from your WAF policies.
-For more information, see [Web Application Firewall DRS rule groups and rules](waf-front-door-drs.md).
+For more information, see [Web Application Firewall Default Rule Set rule groups and rules](waf-front-door-drs.md).
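A minimal sketch of enabling the Default Rule Set on a policy, mirroring the cmdlet pattern used later in these articles (the policy and resource group names are placeholders):

```azurepowershell
# Azure-managed Default Rule Set; custom rules, if any, are evaluated first
$defaultRuleSet = New-AzFrontDoorWafManagedRuleObject `
    -Type DefaultRuleSet `
    -Version "1.0"

New-AzFrontDoorWafPolicy `
    -Name "examplePolicy" `
    -ResourceGroupName "exampleRG" `
    -ManagedRule $defaultRuleSet `
    -Mode Prevention `
    -EnabledState Enabled
```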
### Bot protection rule set
-You can enable a managed bot protection rule set to take custom actions on requests from known bot categories.
+You can enable a managed bot protection rule set to take custom actions on requests from known bot categories.
-There are three bot categories supported: Bad, Good, and Unknown. Bot signatures are managed and dynamically updated by the WAF platform.
+Three bot categories are supported:
-Bad bots include bots from malicious IP addresses and bots that have falsified their identities. Malicious IP addresses are sourced from the Microsoft Threat Intelligence feed and updated every hour. [Intelligent Security Graph](https://www.microsoft.com/security/operations/intelligence) powers Microsoft Threat Intelligence and is used by multiple services including Microsoft Defender for Cloud.
+- **Bad**: Bad bots include bots from malicious IP addresses and bots that have falsified their identities. Malicious IP addresses are sourced from the Microsoft Threat Intelligence feed and updated every hour. [Intelligent Security Graph](https://www.microsoft.com/security/operations/intelligence) powers Microsoft Threat Intelligence and is used by multiple services, including Microsoft Defender for Cloud.
+- **Good**: Good bots include validated search engines.
+- **Unknown**: Unknown bots include other bot groups that have identified themselves as bots. Examples include market analyzers, feed fetchers, and data collection agents. Unknown bots are classified via published user agents without any other validation.
-Good Bots include validated search engines. Unknown categories include additional bot groups that have identified themselves as bots. For example, market analyzer, feed fetchers and data collection agents.
+The WAF platform manages and dynamically updates bot signatures. You can set custom actions to block, allow, log, or redirect for different types of bots.
-Unknown bots are classified via published user agents without additional validation. You can set custom actions to block, allow, log, or redirect for different types of bots.
+![Screenshot that shows a bot protection rule set.](../media/afds-overview/botprotect2.png)
-![Bot Protection Rule Set](../media/afds-overview/botprotect2.png)
-
-If bot protection is enabled, incoming requests that match bot rules are logged. You may access WAF logs from a storage account, event hub, or log analytics.
+If bot protection is enabled, incoming requests that match bot rules are logged. You can access WAF logs from a storage account, an event hub, or Log Analytics. For more information about how the WAF logs requests, see [Azure Web Application Firewall monitoring and logging](waf-front-door-monitor.md).
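As an illustrative sketch only, enabling the managed bot protection rule set from PowerShell might look like the following; the rule set type and version strings are assumptions, so confirm the values that your Az.FrontDoor module version accepts:

```azurepowershell
# Assumed rule set name and version; verify against your module version
$botRuleSet = New-AzFrontDoorWafManagedRuleObject `
    -Type Microsoft_BotManagerRuleSet `
    -Version "1.0"

# Attach the bot rule set to an existing policy (placeholder names)
Update-AzFrontDoorWafPolicy `
    -Name "examplePolicy" `
    -ResourceGroupName "exampleRG" `
    -ManagedRule $botRuleSet
```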
## Configuration
-You can configure and deploy all WAF policies using the Azure portal, REST APIs, Azure Resource Manager templates, and Azure PowerShell. You can also configure and manage Azure WAF policies at scale using Firewall Manager integration (preview). For more information, see [Use Azure Firewall Manager to manage Web Application Firewall policies (preview)](../shared/manage-policies.md).
+You can configure and deploy all WAF policies by using the Azure portal, REST APIs, Azure Resource Manager templates, and Azure PowerShell. You can also configure and manage Azure WAF policies at scale by using Firewall Manager integration (preview). For more information, see [Use Azure Firewall Manager to manage Azure Web Application Firewall policies (preview)](../shared/manage-policies.md).
## Monitoring
-Monitoring for WAF at Front Door is integrated with Azure Monitor to track alerts and easily monitor traffic trends.
+Monitoring for a WAF on Azure Front Door is integrated with Azure Monitor to track alerts and easily monitor traffic trends.
## Next steps -- [Learn about Web Application Firewall on Azure Application Gateway](../ag/ag-overview.md)-- [Learn more about Azure network security](../../networking/security/index.yml)-
+- Learn about [Azure Web Application Firewall on Azure Application Gateway](../ag/ag-overview.md).
+- Learn more about [Azure network security](../../networking/security/index.yml).
web-application-firewall Waf Front Door Configure Custom Response Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-configure-custom-response-code.md
Title: Configure custom responses for Web Application Firewall (WAF) with Azure Front Door
-description: Learn how to configure a custom response code and message when WAF blocks a request.
+ Title: Configure custom responses for Web Application Firewall with Azure Front Door
+description: Learn how to configure a custom response code and message when Azure Web Application Firewall blocks a request.
zone_pivot_groups: front-door-tiers
-# Configure a custom response for Azure Web Application Firewall (WAF)
+# Configure a custom response for Azure Web Application Firewall
-By default, when WAF blocks a request because of a matched rule, it returns a 403 status code with **The request is blocked** message. The default message also includes the tracking reference string that can be used to link to [log entries](./waf-front-door-monitor.md) for the request. You can configure a custom response status code and a custom message with reference string for your use case. This article describes how to configure a custom response page when a request is blocked by WAF.
+This article describes how to configure a custom response page when Azure Web Application Firewall blocks a request.
-## Configure custom response status code and message use portal
+By default, when Azure Web Application Firewall blocks a request because of a matched rule, it returns a 403 status code with the message "The request is blocked." The default message also includes the tracking reference string that's used to link to [log entries](./waf-front-door-monitor.md) for the request. You can configure a custom response status code and a custom message with a reference string for your use case.
-You can configure a custom response status code and body under "Policy settings" from the WAF portal.
+## Configure a custom response status code and message by using the portal
+You can configure a custom response status code and body under **Policy settings** on the Azure Web Application Firewall portal.
-In the above example, we kept the response code as 403, and configured a short "Please contact us" message as shown in the below image:
+In the preceding example, we kept the response code as 403 and configured a short "Please contact us" message, as shown in the following image:
+ ::: zone pivot="front-door-standard-premium"
In the above example, we kept the response code as 403, and configured a short "
::: zone-end
-## Configure custom response status code and message use PowerShell
+## Configure a custom response status code and message by using PowerShell
+
+Follow these steps to configure a custom response status code and message by using PowerShell.
### Set up your PowerShell environment
-Azure PowerShell provides a set of cmdlets that use the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) model for managing your Azure resources.
+Azure PowerShell provides a set of cmdlets that use the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) model for managing your Azure resources.
-You can install [Azure PowerShell](/powershell/azure/) on your local machine and use it in any PowerShell session. Follow the instructions on the page, to sign in with your Azure credentials, and install the Az PowerShell module.
+You can install [Azure PowerShell](/powershell/azure/) on your local machine and use it in any PowerShell session. Follow the instructions on the page to sign in with your Azure credentials. Then install the Az PowerShell module.
### Connect to Azure with an interactive dialog for sign-in
Connect-AzAccount
Install-Module -Name Az ```
-Make sure you have the current version of PowerShellGet installed. Run below command and reopen PowerShell.
+Make sure you have the current version of PowerShellGet installed. Run the following command and reopen PowerShell.
+ ``` Install-Module PowerShellGet -Force -AllowClobber ```
-### Install Az.FrontDoor module
+
+### Install the Az.FrontDoor module
``` Install-Module -Name Az.FrontDoor
Install-Module -Name Az.FrontDoor
### Create a resource group
-In Azure, you allocate related resources to a resource group. Here we create a resource group by using [New-AzResourceGroup](/powershell/module/Az.resources/new-Azresourcegroup).
+In Azure, you allocate related resources to a resource group. Here, we create a resource group by using [New-AzResourceGroup](/powershell/module/Az.resources/new-Azresourcegroup).
```azurepowershell-interactive New-AzResourceGroup -Name myResourceGroupWAF ```
-### Create a new WAF policy with custom response
+### Create a new WAF policy with a custom response
-Below is an example of creating a new WAF policy with custom response status code set to 405, and message to **You are blocked.**, using
-[New-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/new-azfrontdoorwafpolicy)
+The following example shows how to create a new web application firewall (WAF) policy with a custom response status code set to 405 and a message of "You are blocked" by using
+[New-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/new-azfrontdoorwafpolicy).
```azurepowershell # WAF policy setting
New-AzFrontDoorWafPolicy `
-CustomBlockResponseBody "<html><head><title>You are blocked.</title></head><body></body></html>" ```
-Modify custom response code or response body settings of an existing WAF policy, using [Update-AzFrontDoorFireWallPolicy](/powershell/module/az.frontdoor/Update-AzFrontDoorWafPolicy).
+Modify the custom response code or response body settings of an existing WAF policy by using [Update-AzFrontDoorFireWallPolicy](/powershell/module/az.frontdoor/Update-AzFrontDoorWafPolicy).
```azurepowershell # modify WAF response code
Update-AzFrontDoorFireWallPolicy `
``` ## Next steps-- Learn more about [Web Application Firewall with Azure Front Door](../afds/afds-overview.md)+
+Learn more about [Azure Web Application Firewall on Azure Front Door](../afds/afds-overview.md).
web-application-firewall Waf Front Door Configure Ip Restriction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-configure-ip-restriction.md
Title: Configure IP restriction WAF rule for Azure Front Door
-description: Learn how to configure a Web Application Firewall rule to restrict IP addresses for an existing Azure Front Door endpoint.
+ Title: Configure an IP restriction WAF rule for Azure Front Door
+description: Learn how to configure an Azure Web Application Firewall rule to restrict IP addresses for an existing Azure Front Door endpoint.
-# Configure an IP restriction rule with a Web Application Firewall for Azure Front Door
+# Configure an IP restriction rule with a WAF for Azure Front Door
-This article shows you how to configure IP restriction rules in a Web Application Firewall (WAF) for Azure Front Door by using the Azure portal, Azure CLI, Azure PowerShell, or an Azure Resource Manager template.
+This article shows you how to configure IP restriction rules in a web application firewall (WAF) for Azure Front Door by using the Azure portal, the Azure CLI, Azure PowerShell, or an Azure Resource Manager template.
-An IP address–based access control rule is a custom WAF rule that lets you control access to your web applications. It does this by specifying a list of IP addresses or IP address ranges in Classless Inter-Domain Routing (CIDR) format. There are two type of match variables in IP address match, **RemoteAddr** and **SocketAddr**. RemoteAddr is the original client IP that is usually sent via X-Forwarded-For request header. SocketAddr is the source IP address WAF sees. If your user is behind a proxy, SocketAddr is often the proxy server address.
+An IP address–based access control rule is a custom WAF rule that lets you control access to your web applications. The rule specifies a list of IP addresses or IP address ranges in Classless Inter-Domain Routing (CIDR) format.
-By default, your web application is accessible from the Internet. If you want to limit access to clients from a list of known IP addresses or IP address ranges, you may create an IP matching rule that contains the list of IP addresses as matching values and sets operator to "Not" (negate is true) and the action to **Block**. After an IP restriction rule is applied, requests that originate from addresses outside this allowed list receive a 403 Forbidden response.
+There are two types of match variables in an IP address match: `RemoteAddr` and `SocketAddr`. The `RemoteAddr` variable is the original client IP that's usually sent via the `X-Forwarded-For` request header. The `SocketAddr` variable is the source IP address the WAF sees. If your user is behind a proxy, `SocketAddr` is often the proxy server address.
+
+By default, your web application is accessible from the internet. If you want to limit access to clients from a list of known IP addresses or IP address ranges, you can create an IP matching rule that contains the list of IP addresses as matching values and sets the operator to `Not` (negate is true) and the action to `Block`. After an IP restriction rule is applied, requests that originate from addresses outside this allowed list receive a 403 Forbidden response.
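A minimal sketch of that allow-list pattern, using the same cmdlets that appear later in this article (the address ranges and names are placeholders): the condition matches the allowed IPs, the negate flag inverts it, and the Block action then applies to everything else.

```azurepowershell
# Match the allowed source IPs, then negate the condition so all other IPs are blocked
$ipCondition = New-AzFrontDoorWafMatchConditionObject `
    -MatchVariable SocketAddr `
    -OperatorProperty IPMatch `
    -MatchValue "203.0.113.0/24", "198.51.100.10" `
    -NegateCondition $true

$ipAllowRule = New-AzFrontDoorWafCustomRuleObject `
    -Name "IPAllowListRule" `
    -RuleType MatchRule `
    -MatchCondition $ipCondition `
    -Action Block `
    -Priority 1
```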
## Configure a WAF policy with the Azure portal
+Follow these steps to configure a WAF policy by using the Azure portal.
+ ### Prerequisites
-Create an Azure Front Door profile by following the instructions described in [Quickstart: Create a Front Door for a highly available global web application](../../frontdoor/quickstart-create-front-door.md).
+Create an Azure Front Door profile by following the instructions described in [Quickstart: Create an Azure Front Door instance for a highly available global web application](../../frontdoor/quickstart-create-front-door.md).
### Create a WAF policy
-1. On the Azure portal, select **Create a resource**, type **Web application firewall** in the **Search services and marketplace** search box, press *Enter*, and then select **Web Application Firewall (WAF)**.
-2. Select **Create**.
-3. On the **Create a WAF policy** page, use the following values to complete the **Basics** tab:
+1. On the Azure portal, select **Create a resource**. Enter **Web application firewall** in the **Search services and marketplace** search box and select Enter. Then select **Web Application Firewall (WAF)**.
+1. Select **Create**.
+1. On the **Create a WAF policy** page, use the following values to complete the **Basics** tab.
|Setting |Value | |||
- |Policy for |Global WAF (Front Door)|
- |Front door tier| Select Premium or Standard to match you Front Door tier|
- |Subscription |Select your subscription|
- |Resource group |Select the resource group where your Front Door is located.|
- |Policy name |Type a name for your policy|
- |Policy state |selected|
- |Policy mode|Prevention|
+ |Policy for |Global WAF (Front Door).|
+ |Front door tier| Select Premium or Standard to match your Azure Front Door tier.|
+ |Subscription |Select your subscription.|
+ |Resource group |Select the resource group where your Azure Front Door instance is located.|
+ |Policy name |Enter a name for your policy.|
+ |Policy state |Selected.|
+ |Policy mode|Prevention.|
-1. Select **Next:Managed rules**.
+1. Select **Next: Managed rules**.
-1. Select **Next: Policy settings**
+1. Select **Next: Policy settings**.
-1. On the **Policy settings** tab, type *You've been blocked!* for the **Block response body**, so you can see that your custom rule is in effect.
-3. Select **Next: Custom rules**.
-4. Select **Add custom rule**.
-5. On the **Add custom rule** page, use the following test values to create a custom rule:
+1. On the **Policy settings** tab, enter **You've been blocked!** for the **Block response body** so that you can see that your custom rule is in effect.
+1. Select **Next: Custom rules**.
+1. Select **Add custom rule**.
+1. On the **Add custom rule** page, use the following test values to create a custom rule.
|Setting |Value | |||
Create an Azure Front Door profile by following the instructions described in [Q
:::image type="content" source="../media/waf-front-door-configure-ip-restriction/custom-rule.png" alt-text="Custom rule"::: Select **Add**.
-6. Select **Next: Association**.
-7. Select **Associate a Front door profile**.
-8. For **Frontend profile**, select your frontend profile.
+1. Select **Next: Association**.
+1. Select **Associate a Front door profile**.
+1. For **Frontend profile**, select your front-end profile.
1. For **Domain**, select the domain. 1. Select **Add**. 1. Select **Review + create**.
Create an Azure Front Door profile by following the instructions described in [Q
### Test your WAF policy
-1. After your WAF policy deployment completes, browse to your Front Door frontend host name.
-2. You should see your custom block message.
+1. After your WAF policy deployment completes, browse to your Azure Front Door front-end host name.
+1. You should see your custom block message.
:::image type="content" source="../media/waf-front-door-configure-ip-restriction/waf-rule-test.png" alt-text="WAF rule test"::: > [!NOTE]
- > A private IP address was intentionally used in the custom rule to guarantee the rule would trigger. In an actual deployment, create *allow* and *deny* rules using IP addresses for your particular situation.
+ > A private IP address was intentionally used in the custom rule to guarantee the rule would trigger. In an actual deployment, create *allow* and *deny* rules by using IP addresses for your particular situation.
## Configure a WAF policy with the Azure CLI
+Follow these steps to configure a WAF policy by using the Azure CLI.
+ ### Prerequisites Before you begin to configure an IP restriction policy, set up your CLI environment and create an Azure Front Door profile. #### Set up the Azure CLI environment
-1. Install the [Azure CLI](/cli/azure/install-azure-cli), or use Azure Cloud Shell. Azure Cloud Shell is a free Bash shell that you can run directly within the Azure portal. It has the Azure CLI preinstalled and configured to use with your account. Select the **Try it** button in the CLI commands that follow, and then sign in to your Azure account in the Cloud Shell session that opens. After the session starts, enter `az extension add --name front-door` to add the Azure Front Door extension.
- 2. If you're using the CLI locally in Bash, sign in to Azure by using `az login`.
+
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) or use Azure Cloud Shell. Azure Cloud Shell is a free Bash shell that you can run directly within the Azure portal. It has the Azure CLI preinstalled and configured to use with your account. Select the **Try it** button in the CLI commands that follow. Then sign in to your Azure account in the Cloud Shell session that opens. After the session starts, enter `az extension add --name front-door` to add the Azure Front Door extension.
+ 1. If you're using the CLI locally in Bash, sign in to Azure by using `az login`.
#### Create an Azure Front Door profile
-Create an Azure Front Door profile by following the instructions described in [Quickstart: Create a Front Door for a highly available global web application](../../frontdoor/quickstart-create-front-door.md).
+Create an Azure Front Door profile by following the instructions described in [Quickstart: Create an Azure Front Door instance for a highly available global web application](../../frontdoor/quickstart-create-front-door.md).
### Create a WAF policy
az network front-door waf-policy create \
``` ### Add a custom IP access control rule
-Use the [az network front-door waf-policy custom-rule create](/cli/azure/network/front-door/waf-policy/rule#az-network-front-door-waf-policy-rule-create) command to add a custom IP access control rule for the WAF policy you just created.
+Use the [az network front-door waf-policy custom-rule create](/cli/azure/network/front-door/waf-policy/rule#az-network-front-door-waf-policy-rule-create) command to add a custom IP access control rule for the WAF policy you created.
In the following examples: - Replace *IPAllowPolicyExampleCLI* with your unique policy created earlier. - Replace *ip-address-range-1*, *ip-address-range-2* with your own range. First, create an IP allow rule for the policy created from the previous step.+ > [!NOTE]
-> **--defer** is required because a rule must have a match condition to be added in the next step.
+> `--defer` is required because a rule must have a match condition to be added in the next step.
```azurecli az network front-door waf-policy rule create \
az network front-door waf-policy rule create \
--resource-group <resource-group-name> \ --policy-name IPAllowPolicyExampleCLI --defer ```
-Next, add match condition to the rule:
+
+Next, add a match condition to the rule:
```azurecli az network front-door waf-policy rule match-condition add \
Set the Azure Front Door *WebApplicationFirewallPolicyLink* ID to the policy ID
--name <frontdoor-name> \ --resource-group <resource-group-name> ```
-In this example, the WAF policy is applied to **FrontendEndpoints[0]**. You can link the WAF policy to any of your front ends.
+
+In this example, the WAF policy is applied to `FrontendEndpoints[0]`. You can link the WAF policy to any of your front ends.
+ > [!Note]
-> You need to set the **WebApplicationFirewallPolicyLink** property only once to link a WAF policy to an Azure Front Door front end. Subsequent policy updates are automatically applied to the front end.
+> You need to set the `WebApplicationFirewallPolicyLink` property only once to link a WAF policy to an Azure Front Door front end. Subsequent policy updates are automatically applied to the front end.
## Configure a WAF policy with Azure PowerShell
+Follow these steps to configure a WAF policy by using Azure PowerShell.
+ ### Prerequisites Before you begin to configure an IP restriction policy, set up your PowerShell environment and create an Azure Front Door profile. #### Set up your PowerShell environment Azure PowerShell provides a set of cmdlets that use the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) model for managing Azure resources.
-You can install [Azure PowerShell](/powershell/azure/) on your local machine and use it in any PowerShell session. Follow the instructions on the page to sign in to PowerShell by using your Azure credentials, and then install the Az module.
+You can install [Azure PowerShell](/powershell/azure/) on your local machine and use it in any PowerShell session. Follow the instructions on the page to sign in to PowerShell by using your Azure credentials and then install the Az module.
-1. Connect to Azure by using the following command, and then use an interactive dialog to sign in.
+1. Connect to Azure by using the following command and then use an interactive dialog to sign in.
``` Connect-AzAccount ```
- 2. Before you install an Azure Front Door module, make sure you have the current version of the PowerShellGet module installed. Run the following command, and then reopen PowerShell.
+ 1. Before you install an Azure Front Door module, make sure you have the current version of the PowerShellGet module installed. Run the following command and then reopen PowerShell.
``` Install-Module PowerShellGet -Force -AllowClobber ```
-3. Install the Az.FrontDoor module by using the following command.
+1. Install the Az.FrontDoor module by using the following command:
``` Install-Module -Name Az.FrontDoor
Create an Azure Front Door profile by following the instructions described in [Q
### Define an IP match condition Use the [New-AzFrontDoorWafMatchConditionObject](/powershell/module/az.frontdoor/new-azfrontdoorwafmatchconditionobject) command to define an IP match condition. In the following example, replace *ip-address-range-1*, *ip-address-range-2* with your own range.+ ```powershell $IPMatchCondition = New-AzFrontDoorWafMatchConditionObject ` -MatchVariable SocketAddr `
$IPMatchCondition = New-AzFrontDoorWafMatchConditionObject `
### Create a custom IP allow rule
-Use the [New-AzFrontDoorWafCustomRuleObject](/powershell/module/Az.FrontDoor/New-azfrontdoorwafcustomruleobject) command to define an action and set a priority. In the following example, requests not from client IPs that match the list will be blocked.
+Use the [New-AzFrontDoorWafCustomRuleObject](/powershell/module/Az.FrontDoor/New-azfrontdoorwafcustomruleobject) command to define an action and set a priority. In the following example, requests not from client IPs that match the list are blocked.
```azurepowershell $IPAllowRule = New-AzFrontDoorWafCustomRuleObject `
Find the name of the resource group that contains the Azure Front Door profile b
-Mode Prevention ` -EnabledState Enabled ```+ > [!TIP] > For an existing WAF policy, you can use [Update-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/update-azfrontdoorwafpolicy) to update the policy. ### Link a WAF policy to an Azure Front Door front-end host
-Link a WAF policy object to an existing front-end host and update Azure Front Door properties. First, retrieve the Azure Front Door object by using [Get-AzFrontDoor](/powershell/module/Az.FrontDoor/Get-AzFrontDoor). Next, set the **WebApplicationFirewallPolicyLink** property to the resource ID of *$IPAllowPolicyExamplePS*, created in the previous step, by using the [Set-AzFrontDoor](/powershell/module/Az.FrontDoor/Set-AzFrontDoor) command.
+Link a WAF policy object to an existing front-end host and update Azure Front Door properties. First, retrieve the Azure Front Door object by using [Get-AzFrontDoor](/powershell/module/Az.FrontDoor/Get-AzFrontDoor). Next, set the `WebApplicationFirewallPolicyLink` property to the resource ID of `$IPAllowPolicyExamplePS`, created in the previous step, by using the [Set-AzFrontDoor](/powershell/module/Az.FrontDoor/Set-AzFrontDoor) command.
```azurepowershell $FrontDoorObjectExample = Get-AzFrontDoor `
Link a WAF policy object to an existing front-end host and update Azure Front Do
``` > [!NOTE]
-> In this example, the WAF policy is applied to **FrontendEndpoints[0]**. You can link a WAF policy to any of your front ends. You need to set the **WebApplicationFirewallPolicyLink** property only once to link a WAF policy to an Azure Front Door front end. Subsequent policy updates are automatically applied to the front end.
-
+> In this example, the WAF policy is applied to `FrontendEndpoints[0]`. You can link a WAF policy to any of your front ends. You need to set the `WebApplicationFirewallPolicyLink` property only once to link a WAF policy to an Azure Front Door front end. Subsequent policy updates are automatically applied to the front end.
## Configure a WAF policy with a Resource Manager template To view the template that creates an Azure Front Door policy and a WAF policy with custom IP restriction rules, go to [GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-waf-clientip). - ## Next steps -- Learn how to [create an Azure Front Door profile](../../frontdoor/quickstart-create-front-door.md).
+Learn how to [create an Azure Front Door profile](../../frontdoor/quickstart-create-front-door.md).
web-application-firewall Waf Front Door Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-create-portal.md
Title: 'Tutorial: Create WAF policy for Azure Front Door - Azure portal'
-description: In this tutorial, you learn how to create a Web Application Firewall (WAF) policy by using the Azure portal.
+ Title: 'Tutorial: Create a WAF policy for Azure Front Door - Azure portal'
+description: In this tutorial, you learn how to create a web application firewall (WAF) policy by using the Azure portal.
-# Tutorial: Create a Web Application Firewall policy on Azure Front Door using the Azure portal
+# Tutorial: Create a WAF policy on Azure Front Door by using the Azure portal
-This tutorial shows you how to create a basic Azure Web Application Firewall (WAF) policy and apply it to a front-end host at Azure Front Door.
+This tutorial shows you how to create a basic web application firewall (WAF) policy and apply it to a front-end host at Azure Front Door.
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a WAF policy
-> * Associate it with a frontend host
-> * Configure WAF rules
+> * Create a WAF policy.
+> * Associate it with a front-end host.
+> * Configure WAF rules.
## Prerequisites
-Create a [Front Door](../../frontdoor/quickstart-create-front-door.md) or a [Front Door Standard/Premium](../../frontdoor/standard-premium/create-front-door-portal.md) profile.
+Create an Azure [Front Door](../../frontdoor/quickstart-create-front-door.md) instance or an [Azure Front Door Standard or Premium](../../frontdoor/standard-premium/create-front-door-portal.md) profile.
-## Create a Web Application Firewall policy
+## Create a WAF policy
-First, create a basic WAF policy with managed Default Rule Set (DRS) by using the portal.
+First, create a basic WAF policy with the managed Default Rule Set (DRS) by using the Azure portal.
-1. On the top left-hand side of the screen, select **Create a resource** > search for **WAF** > select **Web Application Firewall (WAF)** > select **Create**.
+1. In the upper-left side of the screen, select **Create a resource**. Search for **WAF**, select **Web Application Firewall (WAF)**, and select **Create**.
-1. In the **Basics** tab of the **Create a WAF policy** page, enter or select the following information, accept the defaults for the remaining settings:
+1. On the **Basics** tab of the **Create a WAF policy** page, enter or select the following information and accept the defaults for the remaining settings.
| Setting | Value | | | | | Policy for | Select **Global WAF (Front Door)**. |
- | Front Door SKU | Select between **Classic**, **Standard** and **Premium** SKUs. |
+ | Front door tier | Select between **Classic**, **Standard**, and **Premium** tiers. |
| Subscription | Select your Azure subscription.|
- | Resource group | Select your Front Door resource group name.|
+ | Resource group | Select your Azure Front Door resource group name.|
| Policy name | Enter a unique name for your WAF policy.|
- | Policy state | Set as **Enabled**. |
+ | Policy state | Set as **Enabled**. |
- :::image type="content" source="../media/waf-front-door-create-portal/basic.png" alt-text="Screenshot of the Create a W A F policy page, with a Review + create button and list boxes for the subscription, resource group, and policy name.":::
+ :::image type="content" source="../media/waf-front-door-create-portal/basic.png" alt-text="Screenshot that shows the Create a W A F policy page, with the Review + create button and list boxes for the subscription, resource group, and policy name.":::
-1. Select **Association** tab, and then select **+ Associate a Front door profile**, enter the following settings, and then select **Add**:
+1. On the **Association** tab, select **Associate a Front door profile**, enter the following settings, and select **Add**.
| Setting | Value | | | |
- | Front Door profile | Select your Front Door profile name. |
- | Domains | Select the domains you want to associate the WAF policy to, then select **Add**. |
+ | Front door profile | Select your Azure Front Door profile name. |
+ | Domains | Select the domains you want to associate the WAF policy to and then select **Add**. |
- :::image type="content" source="../media/waf-front-door-create-portal/associate-profile.png" alt-text="Screenshot of the associate a Front Door profile page.":::
+ :::image type="content" source="../media/waf-front-door-create-portal/associate-profile.png" alt-text="Screenshot that shows the Associate a Front door profile page.":::
> [!NOTE]
- > If the domain is associated to a WAF policy, it is shown as grayed out. You must first remove the domain from the associated policy, and then re-associate the domain to a new WAF policy.
+ > If the domain is associated to a WAF policy, it's shown as grayed out. You must first remove the domain from the associated policy and then re-associate the domain to a new WAF policy.
-1. Select **Review + create**, then select **Create**.
+1. Select **Review + create** > **Create**.
-## Configure Web Application Firewall rules (optional)
+## Configure WAF rules (optional)
+
+Follow these steps to configure WAF rules.
### Change mode
-When you create a WAF policy, by default, WAF policy is in **Detection** mode. In **Detection** mode, WAF doesn't block any requests, instead, requests matching the WAF rules are logged at WAF logs.
-To see WAF in action, you can change the mode settings from **Detection** to **Prevention**. In **Prevention** mode, requests that match defined rules are blocked and logged at WAF logs.
+When you create a WAF policy, by default, the WAF policy is in **Detection** mode. In **Detection** mode, the WAF doesn't block any requests. Instead, requests matching the WAF rules are logged in the WAF logs.
+To see the WAF in action, you can change the mode settings from **Detection** to **Prevention**. In **Prevention** mode, requests that match defined rules are blocked and logged in the WAF logs.
- :::image type="content" source="../media/waf-front-door-create-portal/policy.png" alt-text="Screenshot of the Overview page of Front Door WAF policy that shows how to switch to prevention mode.":::
+ :::image type="content" source="../media/waf-front-door-create-portal/policy.png" alt-text="Screenshot that shows the Overview page of the Azure Front Door WAF policy that shows how to switch to Prevention mode.":::
### Custom rules
-You can create a custom rule by selecting **Add custom rule** under the **Custom rules** section. This launches the custom rule configuration page.
+To create a custom rule, under the **Custom rules** section, select **Add custom rule** to open the custom rule configuration page.
-Below is an example of configuring a custom rule to block a request if the query string contains **blockme**.
+The following example shows how to configure a custom rule to block a request if the query string contains **blockme**.
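For comparison with the portal steps, an equivalent rule expressed with the Az.FrontDoor PowerShell cmdlets might look like this sketch (the rule name and priority are placeholders):

```azurepowershell
# Block any request whose query string contains "blockme"
$queryCondition = New-AzFrontDoorWafMatchConditionObject `
    -MatchVariable QueryString `
    -OperatorProperty Contains `
    -MatchValue "blockme"

$blockMeRule = New-AzFrontDoorWafCustomRuleObject `
    -Name "BlockMeQueryString" `
    -RuleType MatchRule `
    -MatchCondition $queryCondition `
    -Action Block `
    -Priority 1
```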
-### Default Rule Set (DRS)
+### Default Rule Set
-Azure-managed Default Rule Set (DRS) is enabled by default for Premium and Classic tiers of Front Door. Current default rule set for Premium Front Door is Microsoft_DefaultRuleSet_2.0. Microsoft_DefaultRuleSet_1.1 is the current default rule set for Classic Front Door. From **Managed rules** page, select **Assign** to assign a different DRS.
+The Azure-managed Default Rule Set is enabled by default for the Premium and Classic tiers of Azure Front Door. The current DRS for the Premium tier of Azure Front Door is Microsoft_DefaultRuleSet_2.0. Microsoft_DefaultRuleSet_1.1 is the current DRS for the Classic tier of Azure Front Door. On the **Managed rules** page, select **Assign** to assign a different DRS.
-To disable an individual rule, select the **check box** in front of the rule number, and select **Disable** at the top of the page. To change actions types for individual rules within the rule set, select the check box in front of the rule number, and then select the **Change action** at the top of the page.
+To disable an individual rule, select the checkbox in front of the rule number and select **Disable** at the top of the page. To change action types for individual rules within the rule set, select the checkbox in front of the rule number and select **Change action** at the top of the page.
> [!NOTE]
-> Managed rules are only supported in Front Door Premium tier and Front Door Classic tier policies.
+> Managed rules are only supported in the Azure Front Door Premium tier and Azure Front Door Classic tier policies.
## Clean up resources
When no longer needed, delete the resource group and all related resources.
## Next steps > [!div class="nextstepaction"]
-> [Learn more about Azure Front Door](../../frontdoor/front-door-overview.md)
-> [Learn more about Azure Front Door tiers](../../frontdoor/standard-premium/tier-comparison.md)
+> - [Learn more about Azure Front Door](../../frontdoor/front-door-overview.md)
+> - [Learn more about Azure Front Door tiers](../../frontdoor/standard-premium/tier-comparison.md)
web-application-firewall Waf Front Door Custom Rules Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-custom-rules-powershell.md
Title: Configure WAF custom rules & Default Rule Set for Azure Front Door
-description: Learn how to configure a WAF policy consist of both custom and managed rules for an existing Front Door endpoint.
+ Title: Configure WAF custom rules and the Default Rule Set for Azure Front Door
+description: Learn how to configure a web application firewall (WAF) policy that consists of custom and managed rules for an existing Azure Front Door endpoint.
-# Configure a Web Application Firewall policy using Azure PowerShell
+# Configure a WAF policy by using Azure PowerShell
-Azure Web Application Firewall (WAF) policy defines inspections required when a request arrives at Front Door.
-This article shows how to configure a WAF policy that consists of some custom rules and with Azure-managed Default Rule Set enabled.
+A web application firewall (WAF) policy defines the inspections that are required when a request arrives at Azure Front Door.
+
+This article shows how to configure a WAF policy that consists of some custom rules and has the Azure-managed Default Rule Set enabled.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
-Before you begin to set up a rate limit policy, set up your PowerShell environment and create a Front Door profile.
+Before you begin to set up a rate limit policy, set up your PowerShell environment and create an Azure Front Door profile.
### Set up your PowerShell environment
-Azure PowerShell provides a set of cmdlets that use the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) model for managing your Azure resources.
+Azure PowerShell provides a set of cmdlets that use the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) model for managing your Azure resources.
-You can install [Azure PowerShell](/powershell/azure/) on your local machine and use it in any PowerShell session. Follow the instructions on the page, to sign in with your Azure credentials, and install Az PowerShell module.
+You can install [Azure PowerShell](/powershell/azure/) on your local machine and use it in any PowerShell session. Follow the instructions on the page to sign in with your Azure credentials. Then install the Az PowerShell module.
#### Sign in to Azure
You can install [Azure PowerShell](/powershell/azure/) on your local machine and
Connect-AzAccount ```
-Before install Front Door module, make sure you have the current version of PowerShellGet installed. Run below command and reopen PowerShell.
+Before you install the Azure Front Door module, make sure you have the current version of PowerShellGet installed. Run the following command and reopen PowerShell.
``` Install-Module PowerShellGet -Force -AllowClobber ```
-#### Install Az.FrontDoor module
+#### Install the Az.FrontDoor module
``` Install-Module -Name Az.FrontDoor ```
-### Create a Front Door profile
-Create a Front Door profile by following the instructions described in [Quickstart: Create a Front Door profile](../../frontdoor/quickstart-create-front-door.md)
+### Create an Azure Front Door profile
+
+Create an Azure Front Door profile by following the instructions described in [Quickstart: Create an Azure Front Door profile](../../frontdoor/quickstart-create-front-door.md).
-## Custom rule based on http parameters
+## Custom rule based on HTTP parameters
-The following example shows how to configure a custom rule with two match conditions using [New-AzFrontDoorWafMatchConditionObject](/powershell/module/az.frontdoor/new-azfrontdoorwafmatchconditionobject). Requests are from a specified site as defined by referrer, and query string doesn't contain "password".
+The following example shows how to configure a custom rule with two match conditions by using [New-AzFrontDoorWafMatchConditionObject](/powershell/module/az.frontdoor/new-azfrontdoorwafmatchconditionobject). Requests are from a specified site as defined by referrer, and the query string doesn't contain `password`.
```powershell-interactive $referer = New-AzFrontDoorWafMatchConditionObject -MatchVariable RequestHeader -OperatorProperty Equal -Selector "Referer" -MatchValue "www.mytrustedsites.com/referpage.html"
$password = New-AzFrontDoorWafMatchConditionObject -MatchVariable QueryString -O
$AllowFromTrustedSites = New-AzFrontDoorWafCustomRuleObject -Name "AllowFromTrustedSites" -RuleType MatchRule -MatchCondition $referer,$password -Action Allow -Priority 1 ```
-## Custom rule based on http request method
+## Custom rule based on an HTTP request method
-Create a rule blocking "PUT" method using [New-AzFrontDoorWafCustomRuleObject](/powershell/module/az.frontdoor/new-azfrontdoorwafcustomruleobject) as follows:
+Create a rule blocking a PUT method by using [New-AzFrontDoorWafCustomRuleObject](/powershell/module/az.frontdoor/new-azfrontdoorwafcustomruleobject).
```powershell-interactive $put = New-AzFrontDoorWafMatchConditionObject -MatchVariable RequestMethod -OperatorProperty Equal -MatchValue PUT
$BlockPUT = New-AzFrontDoorWafCustomRuleObject -Name "BlockPUT" -RuleType MatchR
## Create a custom rule based on size constraint
-The following example creates a rule blocking requests with Url that is longer than 100 characters using Azure PowerShell:
+The following example creates a rule blocking requests with a URL that's longer than 100 characters by using Azure PowerShell.
+ ```powershell-interactive $url = New-AzFrontDoorWafMatchConditionObject -MatchVariable RequestUri -OperatorProperty GreaterThanOrEqual -MatchValue 100 $URLOver100 = New-AzFrontDoorWafCustomRuleObject -Name "URLOver100" -RuleType MatchRule -MatchCondition $url -Action Block -Priority 3 ```
-## Add managed Default Rule Set
-The following example creates a managed Default Rule Set using Azure PowerShell:
+## Add a managed Default Rule Set
+
+The following example creates a managed Default Rule Set by using Azure PowerShell.
+ ```powershell-interactive $managedRules = New-AzFrontDoorWafManagedRuleObject -Type DefaultRuleSet -Version 1.0 ```+ ## Configure a security policy
-Find the name of the resource group that contains the Front Door profile using `Get-AzResourceGroup`. Next, configure a security policy with created rules in the previous steps using [New-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/new-azfrontdoorwafpolicy) in the specified resource group that contains the Front Door profile.
+Find the name of the resource group that contains the Azure Front Door profile by using `Get-AzResourceGroup`. Next, configure a security policy with created rules in the previous steps by using [New-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/new-azfrontdoorwafpolicy) in the specified resource group that contains the Azure Front Door profile.
```powershell-interactive $myWAFPolicy=New-AzFrontDoorWafPolicy -Name $policyName -ResourceGroupName $resourceGroupName -Customrule $AllowFromTrustedSites,$BlockPUT,$URLOver100 -ManagedRule $managedRules -EnabledState Enabled -Mode Prevention ```
-## Link policy to a Front Door front-end host
+## Link policy to an Azure Front Door front-end host
-Link the security policy object to an existing Front Door front-end host and update Front Door properties. First, retrieve the Front Door object using [Get-AzFrontDoor](/powershell/module/Az.FrontDoor/Get-AzFrontDoor).
-Next, set the front-end *WebApplicationFirewallPolicyLink* property to the *resourceId* of the "$myWAFPolicy$" created in the previous step using [Set-AzFrontDoor](/powershell/module/Az.FrontDoor/Set-AzFrontDoor).
+Link the security policy object to an existing Azure Front Door front-end host and update Azure Front Door properties. First, retrieve the Azure Front Door object by using [Get-AzFrontDoor](/powershell/module/Az.FrontDoor/Get-AzFrontDoor).
+Next, set the front-end `WebApplicationFirewallPolicyLink` property to the `resourceId` of the `$myWAFPolicy$` created in the previous step by using [Set-AzFrontDoor](/powershell/module/Az.FrontDoor/Set-AzFrontDoor).
-The below example uses the Resource Group name *myResourceGroupFD1* with the assumption that you've created the Front Door profile using instructions provided in the [Quickstart: Create a Front Door](../../frontdoor/quickstart-create-front-door.md) article. Also, in the below example, replace $frontDoorName with the name of your Front Door profile.
+The following example uses the resource group name `myResourceGroupFD1` with the assumption that you've created the Azure Front Door profile by using instructions provided in [Quickstart: Create an Azure Front Door](../../frontdoor/quickstart-create-front-door.md). Also, in the following example, replace `$frontDoorName` with the name of your Azure Front Door profile.
```powershell-interactive $FrontDoorObjectExample = Get-AzFrontDoor `
The below example uses the Resource Group name *myResourceGroupFD1* with the ass
``` > [!NOTE]
-> You only need to set *WebApplicationFirewallPolicyLink* property once to link a security policy to a Front Door front-end. Subsequent policy updates are automatically applied to the front-end.
+> You only need to set the `WebApplicationFirewallPolicyLink` property once to link a security policy to an Azure Front Door front end. Subsequent policy updates are automatically applied to the front end.
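Because the linking snippet above is abbreviated, here's a hedged sketch of the full retrieve-and-update flow. It assumes a single Front Door object with a single front-end endpoint; adjust the indexes to match your configuration.

```powershell-interactive
# Hedged sketch: link the WAF policy to the first front-end endpoint of the profile.
# Indexes assume one Front Door object with one front-end endpoint.
$FrontDoorObjectExample = Get-AzFrontDoor `
    -ResourceGroupName "myResourceGroupFD1" `
    -Name $frontDoorName
$FrontDoorObjectExample[0].FrontendEndpoints[0].WebApplicationFirewallPolicyLink = $myWAFPolicy.Id
Set-AzFrontDoor -InputObject $FrontDoorObjectExample[0]
```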
## Next steps -- Learn more about [Front Door](../../frontdoor/front-door-overview.md) -- Learn more about [WAF with Front Door](afds-overview.md)
+- Learn more about [Azure Front Door](../../frontdoor/front-door-overview.md).
+- Learn more about [Azure Web Application Firewall on Azure Front Door](afds-overview.md).
web-application-firewall Waf Front Door Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-custom-rules.md
Title: Web application firewall custom rule for Azure Front Door
-description: Learn how to use Web Application Firewall (WAF) custom rules protecting your web applications from malicious attacks.
+description: Learn how to use web application firewall (WAF) custom rules to protect your web applications from malicious attacks.
Last updated 11/01/2022
-# Custom rules for Web Application Firewall with Azure Front Door
+# Custom rules for Azure Web Application Firewall on Azure Front Door
-Azure Web Application Firewall (WAF) with Front Door allows you to control access to your web applications based on the conditions you define. A custom WAF rule consists of a priority number, rule type, match conditions, and an action. There are two types of custom rules: match rules and rate limit rules. A match rule controls access based on a set of matching conditions while a rate limit rule controls access based on matching conditions and the rates of incoming requests. You may disable a custom rule to prevent it from being evaluated, but still keep the configuration.
+Azure Web Application Firewall on Azure Front Door allows you to control access to your web applications based on the conditions you define. A custom web application firewall (WAF) rule consists of a priority number, rule type, match conditions, and an action.
-For more information on rate limiting, see [What is rate limiting for Azure Front Door Service?](waf-front-door-rate-limit.md).
+There are two types of custom rules: match rules and rate limit rules. A match rule controls access based on a set of matching conditions. A rate limit rule controls access based on matching conditions and the rates of incoming requests. You can disable a custom rule to prevent it from being evaluated but still keep the configuration.
+
+For more information on rate limiting, see [What is rate limiting for Azure Front Door?](waf-front-door-rate-limit.md).
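To make the two rule types concrete, here's a hedged Azure PowerShell sketch of a rate limit rule. It assumes the Az.FrontDoor module; the path, threshold, and rule name are placeholders.

```powershell-interactive
# Hedged sketch: rate limit POST requests to a sign-in path (placeholder values).
$postMethod = New-AzFrontDoorWafMatchConditionObject -MatchVariable RequestMethod `
    -OperatorProperty Equal -MatchValue POST
$signInPath = New-AzFrontDoorWafMatchConditionObject -MatchVariable RequestUri `
    -OperatorProperty Contains -MatchValue "/signin"

# A rate limit rule counts matching requests and applies the action once the
# threshold is exceeded within the configured duration.
$rateLimitRule = New-AzFrontDoorWafCustomRuleObject -Name "RateLimitSignIn" -RuleType RateLimitRule `
    -MatchCondition $postMethod,$signInPath -Action Block -Priority 2 `
    -RateLimitDurationInMinutes 1 -RateLimitThreshold 1000
```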
## Priority, action types, and match conditions
-You can control access with a custom WAf rule that defines a priority number, a rule type, an array of match conditions, and an action.
+You can control access with a custom WAF rule that defines a priority number, a rule type, an array of match conditions, and an action.
- **Priority**
- A unique integer that describes the order of evaluation of WAF rules. Rules with lower priority values are evaluated before rules with higher values. The rule evaluation stops on any rule action except for *Log*. Priority numbers must be unique among all custom rules.
+ A unique integer that describes the order of evaluation of WAF rules. Rules with lower-priority values are evaluated before rules with higher values. The rule evaluation stops on any rule action except for *Log*. Priority numbers must be unique among all custom rules.
- **Action**
- Defines how to route a request if a WAF rule is matched. You can choose one of the below actions to apply when a request matches a custom rule.
+ Defines how to route a request if a WAF rule is matched. You can choose one of the following actions to apply when a request matches a custom rule.
- - *Allow* - WAF allows the request to process, logs an entry in WAF logs, and exits.
- - *Block* - Request is blocked. WAF sends response to client without forwarding the request further. WAF logs an entry in WAF logs and exits.
- - *Log* - WAF logs an entry in WAF logs, and continues to evaluate the next rule in the priority order.
- - *Redirect* - WAF redirects the request to a specified URI, logs an entry in WAF logs, and exits.
+ - **Allow**: The WAF allows the request to process, logs an entry in WAF logs, and exits.
+ - **Block**: Request is blocked. The WAF sends a response to a client without forwarding the request further. The WAF logs an entry in WAF logs and exits.
+ - **Log**: The WAF logs an entry in WAF logs and continues to evaluate the next rule in the priority order.
+ - **Redirect**: The WAF redirects the request to a specified URI, logs an entry in WAF logs, and exits.
-- **Match condition**
+- **Match condition**
- Defines a match variable, an operator, and match value. Each rule may contain multiple match conditions. A match condition may be based on geo location, client IP addresses (CIDR), size, or string match. String match can be against a list of match variables.
+ Defines a match variable, an operator, and a match value. Each rule can contain multiple match conditions. A match condition might be based on geo-location, client IP addresses (CIDR), size, or string match. String match can be against a list of match variables.
- **Match variable** - RequestMethod - QueryString
You can control access with a custom WAf rule that defines a priority number, a
- RequestBody - Cookies - **Operator**
- - Any - is often used to define default action if no rules are matched. Any is a match all operator.
+ - Any: Often used to define default action if no rules are matched. Any is a match all operator.
- Equal - Contains
- - LessThan: size constraint
- - GreaterThan: size constraint
- - LessThanOrEqual: size constraint
- - GreaterThanOrEqual: size constraint
+ - LessThan: Size constraint
+ - GreaterThan: Size constraint
+ - LessThanOrEqual: Size constraint
+ - GreaterThanOrEqual: Size constraint
- BeginsWith - EndsWith - Regex
You can control access with a custom WAf rule that defines a priority number, a
- **Negate [optional]**
- You can set the *negate* condition to true if the result of a condition should be negated.
+ You can set the `negate` condition to *true* if the result of a condition should be negated.
- **Transform [optional]** A list of strings with names of transformations to do before the match is attempted. These can be the following transformations:
- - Uppercase
+ - Uppercase
- Lowercase - Trim - RemoveNulls
You can control access with a custom WAf rule that defines a priority number, a
## Examples
+Consider the following examples.
+ ### Match based on HTTP request parameters Suppose you need to configure a custom rule to allow requests that match the following two conditions: - The `Referer` header's value is equal to a known value.-- The query string doesn't contain the word "password".
+- The query string doesn't contain the word `password`.
Here's an example JSON description of the custom rule:
Here's an example JSON description of the custom rule:
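The same two-condition rule can also be expressed with Azure PowerShell. Here's a hedged sketch, assuming the Az.FrontDoor module; the referrer site is a placeholder, and parameter names not shown earlier in this digest are assumptions.

```powershell-interactive
# Hedged sketch: allow only requests from a trusted referrer whose query string
# doesn't contain "password" (placeholder site).
$referer  = New-AzFrontDoorWafMatchConditionObject -MatchVariable RequestHeader -OperatorProperty Equal `
    -Selector "Referer" -MatchValue "www.mytrustedsites.com/referpage.html"
$password = New-AzFrontDoorWafMatchConditionObject -MatchVariable QueryString -OperatorProperty Contains `
    -MatchValue "password" -NegateCondition $true
$allowFromTrustedSites = New-AzFrontDoorWafCustomRuleObject -Name "AllowFromTrustedSites" `
    -RuleType MatchRule -MatchCondition $referer,$password -Action Allow -Priority 1
```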
### Size constraint
-Front Door's WAF enables you to build custom rules that apply a length or size constraint on a part of an incoming request. This size constraint is measured in bytes.
+An Azure Front Door WAF enables you to build custom rules that apply a length or size constraint on a part of an incoming request. This size constraint is measured in bytes.
Suppose you need to block requests where the URL is longer than 100 characters.
Here's an example JSON description of the custom rule:
``` ## Next steps-- [Configure a Web Application Firewall policy using Azure PowerShell](waf-front-door-custom-rules-powershell.md) -- Learn about [web Application Firewall with Front Door](afds-overview.md)-- Learn how to [create a Front Door](../../frontdoor/quickstart-create-front-door.md).
+- [Configure a WAF policy by using Azure PowerShell](waf-front-door-custom-rules-powershell.md).
+- Learn about [Azure Web Application Firewall on Azure Front Door](afds-overview.md).
+- Learn how to [create an Azure Front Door instance](../../frontdoor/quickstart-create-front-door.md).
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
Title: Azure Web Application Firewall on Azure Front Door DRS rule groups and rules
-description: This article provides information on Web Application Firewall DRS rule groups and rules.
+ Title: Azure Web Application Firewall DRS rule groups and rules
+description: This article provides information on Azure Web Application Firewall DRS rule groups and rules.
Last updated 10/25/2022
# Web Application Firewall DRS rule groups and rules
-Azure Front Door web application firewall (WAF) protects web applications from common vulnerabilities and exploits. Azure-managed rule sets provide an easy way to deploy protection against a common set of security threats. Since such rule sets are managed by Azure, the rules are updated as needed to protect against new attack signatures. Default rule set also includes the Microsoft Threat Intelligence Collection rules that are written in partnership with the Microsoft Intelligence team to provide increased coverage, patches for specific vulnerabilities, and better false positive reduction.
+Azure Web Application Firewall on Azure Front Door protects web applications from common vulnerabilities and exploits. Azure-managed rule sets provide an easy way to deploy protection against a common set of security threats. Because such rule sets are managed by Azure, the rules are updated as needed to protect against new attack signatures.
+
+The Default Rule Set (DRS) also includes the Microsoft Threat Intelligence Collection rules that are written in partnership with the Microsoft Intelligence team to provide increased coverage, patches for specific vulnerabilities, and better false positive reduction.
## Default rule sets
-The Azure-managed Default Rule Set (DRS) includes rules against the following threat categories:
+The Azure-managed DRS includes rules against the following threat categories:
- Cross-site scripting - Java attacks
The Azure-managed Default Rule Set (DRS) includes rules against the following th
The version number of the DRS increments when new attack signatures are added to the rule set.
-DRS is enabled by default in Detection mode in your WAF policies. You can disable or enable individual rules within the Default Rule Set to meet your application requirements. You can also set specific actions per rule. The available actions are: [Allow, Block, Log, and Redirect](afds-overview.md#waf-actions).
+DRS is enabled by default in Detection mode in your WAF policies. You can disable or enable individual rules within the DRS to meet your application requirements. You can also set specific actions per rule. The available actions are [Allow, Block, Log, and Redirect](afds-overview.md#waf-actions).
-Sometimes you might need to omit certain request attributes from a WAF evaluation. A common example is Active Directory-inserted tokens that are used for authentication. You may configure an exclusion list for a managed rule, rule group, or for the entire rule set. For more information, see [Web Application Firewall (WAF) with Front Door exclusion lists](./waf-front-door-exclusion.md).
+Sometimes you might need to omit certain request attributes from a web application firewall (WAF) evaluation. A common example is Active Directory-inserted tokens that are used for authentication. You might configure an exclusion list for a managed rule, a rule group, or the entire rule set. For more information, see [Azure Web Application Firewall on Azure Front Door exclusion lists](./waf-front-door-exclusion.md).
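As an illustration of per-rule customization and exclusions, here's a hedged Azure PowerShell sketch. The cmdlet and parameter names assume the Az.FrontDoor module, and the rule group name, rule IDs, and header selector are placeholders.

```powershell-interactive
# Hedged sketch: disable one DRS rule, set another to Log, and exclude a request header
# from evaluation. Group name, rule IDs, and the header selector are placeholders.
$disableRule = New-AzFrontDoorWafManagedRuleOverrideObject -RuleId "942440" -EnabledState Disabled
$logRule     = New-AzFrontDoorWafManagedRuleOverrideObject -RuleId "942450" -Action Log
$sqliGroup   = New-AzFrontDoorWafRuleGroupOverrideObject -RuleGroupName "SQLI" `
    -ManagedRuleOverride $disableRule,$logRule

# Exclusion: skip evaluation of a token header inserted by an identity provider (placeholder name).
$exclusion = New-AzFrontDoorWafManagedRuleExclusionObject -Variable RequestHeaderNames `
    -Operator Equals -Selector "x-example-auth-token"

$managedRules = New-AzFrontDoorWafManagedRuleObject -Type DefaultRuleSet -Version 1.1 `
    -RuleGroupOverride $sqliGroup -Exclusion $exclusion
```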
-By default, DRS versions 2.0 and above will leverage anomaly scoring when a request matches a rule, DRS versions earlier than 2.0 blocks requests that trigger the rules. Additionally, custom rules can be configured in the same WAF policy if you wish to bypass any of the pre-configured rules in the Default Rule Set.
+By default, DRS versions 2.0 and above use anomaly scoring when a request matches a rule. DRS versions earlier than 2.0 block requests that trigger the rules. Also, custom rules can be configured in the same WAF policy if you want to bypass any of the preconfigured rules in the DRS.
-Custom rules are always applied before rules in the Default Rule Set are evaluated. If a request matches a custom rule, the corresponding rule action is applied. The request is either blocked or passed through to the back-end. No other custom rules or the rules in the Default Rule Set are processed. You can also remove the Default Rule Set from your WAF policies.
+Custom rules are always applied before rules in the DRS are evaluated. If a request matches a custom rule, the corresponding rule action is applied. The request is either blocked or passed through to the back end. No other custom rules or the rules in the DRS are processed. You can also remove the DRS from your WAF policies.
### Microsoft Threat Intelligence Collection rules The Microsoft Threat Intelligence Collection rules are written in partnership with the Microsoft Threat Intelligence team to provide increased coverage, patches for specific vulnerabilities, and better false positive reduction.
-Some of the built-in DRS rules are disabled by default because they've been replaced by newer rules in the Microsoft Threat Intelligence Collection. For example, rule ID 942440, *SQL Comment Sequence Detected.*, has been disabled, and replaced by the Microsoft Threat Intelligence Collection rule 99031002. The replaced rule reduces the risk of false positive detections from legitimate requests.
+Some of the built-in DRS rules are disabled by default because they've been replaced by newer rules in the Microsoft Threat Intelligence Collection rules. For example, rule ID 942440, *SQL Comment Sequence Detected*, has been disabled and replaced by the Microsoft Threat Intelligence Collection rule 99031002. The replacement rule reduces the risk of false positive detections from legitimate requests.
### <a name="anomaly-scoring-mode"></a>Anomaly scoring
-When you use DRS 2.0 or later, your WAF uses *anomaly scoring*. Traffic that matches any rule isn't immediately blocked, even when your WAF is in prevention mode. Instead, the OWASP rule sets define a severity for each rule: *Critical*, *Error*, *Warning*, or *Notice*. The severity affects a numeric value for the request, which is called the *anomaly score*. If a request accumulates an anomaly score of 5 or greater the WAF will take action on the request.
+When you use DRS 2.0 or later, your WAF uses *anomaly scoring*. Traffic that matches any rule isn't immediately blocked, even when your WAF is in prevention mode. Instead, the OWASP rule sets define a severity for each rule: *Critical*, *Error*, *Warning*, or *Notice*. The severity affects a numeric value for the request, which is called the *anomaly score*. If a request accumulates an anomaly score of 5 or greater, the WAF takes action on the request.
| Rule severity | Value contributed to anomaly score | |-|-|
When you use DRS 2.0 or later, your WAF uses *anomaly scoring*. Traffic that mat
| Warning | 3 | | Notice | 2 |
-When you configure your WAF, you can decide how the WAF handles requests that exceed the anomaly score threshold of 5. The three anomaly score action options are block, log, or redirect. The anomaly score action you select at time of configuration will be applied to all requests that exceed the anomaly score threshold.
+When you configure your WAF, you can decide how the WAF handles requests that exceed the anomaly score threshold of 5. The three anomaly score action options are Block, Log, or Redirect. The anomaly score action you select at the time of configuration is applied to all requests that exceed the anomaly score threshold.
-For example, if the anomaly score is 5 or greater on a request, and the WAF is in Prevention mode with the anomaly score action set to block, the request is blocked. If the anomaly score is 5 or greater on a request, and the WAF is in Detection mode, the request is logged but not blocked.
+For example, if the anomaly score is 5 or greater on a request, and the WAF is in Prevention mode with the anomaly score action set to Block, the request is blocked. If the anomaly score is 5 or greater on a request, and the WAF is in Detection mode, the request is logged but not blocked.
-A single *Critical* rule match is enough for the WAF to block a request when in Prevention mode with anomaly score action set to block, because the overall anomaly score is 5. However, one *Warning* rule match only increases the anomaly score by 3, which isn't enough by itself to block the traffic. When an anomaly rule is triggered it will show a "matched" action in the logs. If the anomly score is 5 or greater, there will be a separate rule triggered with the anomaly score action configured for the rule set. Default anomaly score action is block which will result in log entry with action "blocked".
+A single *Critical* rule match is enough for the WAF to block a request when in Prevention mode with the anomaly score action set to Block because the overall anomaly score is 5. However, one *Warning* rule match only increases the anomaly score by 3, which isn't enough by itself to block the traffic. When an anomaly rule is triggered, it shows a "matched" action in the logs. If the anomaly score is 5 or greater, a separate rule is triggered with the anomaly score action configured for the rule set. The default anomaly score action is Block, which results in a log entry with the action `blocked`.
-When your WAF uses older version of the default rule set (before DRS 2.0), your WAF runs in the traditional mode. Traffic that matches any rule is considered independently of any other rule matches. In traditional mode, you don't have visibility into the complete set of rules that a specific request matched.
+When your WAF uses an older version of the Default Rule Set (before DRS 2.0), your WAF runs in the traditional mode. Traffic that matches any rule is considered independently of any other rule matches. In traditional mode, you don't have visibility into the complete set of rules that a specific request matched.
The version of the DRS that you use also determines which content types are supported for request body inspection. For more information, see [What content types does WAF support?](waf-faq.yml#what-content-types-does-waf-support-) in the FAQ. ### DRS 2.1
-DRS 2.1 rules offer better protection than earlier versions of the DRS. It includes additional rules developed by the Microsoft Threat Intelligence team and updates to signatures to reduce false positives. It also supports transformations beyond just URL decoding.
-
-DRS 2.1 includes 17 rule groups, as shown in the following table. Each group contains multiple rules, and you can customize behavior for individual rules, rule groups, or entire rule set. For more information, see [Tuning Web Application Firewall (WAF) for Azure Front Door](waf-front-door-tuning.md).
-
+DRS 2.1 rules offer better protection than earlier versions of the DRS. It includes other rules developed by the Microsoft Threat Intelligence team and updates to signatures to reduce false positives. It also supports transformations beyond just URL decoding.
+DRS 2.1 includes 17 rule groups, as shown in the following table. Each group contains multiple rules, and you can customize behavior for individual rules, rule groups, or an entire rule set. For more information, see [Tuning Web Application Firewall (WAF) for Azure Front Door](waf-front-door-tuning.md).
> [!NOTE] > DRS 2.1 is only available on Azure Front Door Premium. |Rule group|Description| |||
-|**[General](#general-21)**|General group|
-|**[METHOD-ENFORCEMENT](#drs911-21)**|Lock-down methods (PUT, PATCH)|
-|**[PROTOCOL-ENFORCEMENT](#drs920-21)**|Protect against protocol and encoding issues|
-|**[PROTOCOL-ATTACK](#drs921-21)**|Protect against header injection, request smuggling, and response splitting|
-|**[APPLICATION-ATTACK-LFI](#drs930-21)**|Protect against file and path attacks|
-|**[APPLICATION-ATTACK-RFI](#drs931-21)**|Protect against remote file inclusion (RFI) attacks|
-|**[APPLICATION-ATTACK-RCE](#drs932-21)**|Protect again remote code execution attacks|
-|**[APPLICATION-ATTACK-PHP](#drs933-21)**|Protect against PHP-injection attacks|
-|**[APPLICATION-ATTACK-NodeJS](#drs934-21)**|Protect against Node JS attacks|
-|**[APPLICATION-ATTACK-XSS](#drs941-21)**|Protect against cross-site scripting attacks|
-|**[APPLICATION-ATTACK-SQLI](#drs942-21)**|Protect against SQL-injection attacks|
-|**[APPLICATION-ATTACK-SESSION-FIXATION](#drs943-21)**|Protect against session-fixation attacks|
-|**[APPLICATION-ATTACK-SESSION-JAVA](#drs944-21)**|Protect against JAVA attacks|
-|**[MS-ThreatIntel-WebShells](#drs9905-21)**|Protect against Web shell attacks|
-|**[MS-ThreatIntel-AppSec](#drs9903-21)**|Protect against AppSec attacks|
-|**[MS-ThreatIntel-SQLI](#drs99031-21)**|Protect against SQLI attacks|
-|**[MS-ThreatIntel-CVEs](#drs99001-21)**|Protect against CVE attacks|
+|[General](#general-21)|General group|
+|[METHOD-ENFORCEMENT](#drs911-21)|Lock-down methods (PUT, PATCH)|
+|[PROTOCOL-ENFORCEMENT](#drs920-21)|Protect against protocol and encoding issues|
+|[PROTOCOL-ATTACK](#drs921-21)|Protect against header injection, request smuggling, and response splitting|
+|[APPLICATION-ATTACK-LFI](#drs930-21)|Protect against file and path attacks|
+|[APPLICATION-ATTACK-RFI](#drs931-21)|Protect against remote file inclusion (RFI) attacks|
+|[APPLICATION-ATTACK-RCE](#drs932-21)|Protect against remote code execution attacks|
+|[APPLICATION-ATTACK-PHP](#drs933-21)|Protect against PHP-injection attacks|
+|[APPLICATION-ATTACK-NodeJS](#drs934-21)|Protect against Node JS attacks|
+|[APPLICATION-ATTACK-XSS](#drs941-21)|Protect against cross-site scripting attacks|
+|[APPLICATION-ATTACK-SQLI](#drs942-21)|Protect against SQL-injection attacks|
+|[APPLICATION-ATTACK-SESSION-FIXATION](#drs943-21)|Protect against session-fixation attacks|
+|[APPLICATION-ATTACK-SESSION-JAVA](#drs944-21)|Protect against JAVA attacks|
+|[MS-ThreatIntel-WebShells](#drs9905-21)|Protect against Web shell attacks|
+|[MS-ThreatIntel-AppSec](#drs9903-21)|Protect against AppSec attacks|
+|[MS-ThreatIntel-SQLI](#drs99031-21)|Protect against SQLI attacks|
+|[MS-ThreatIntel-CVEs](#drs99001-21)|Protect against CVE attacks|
#### Disabled rules
-The following rules are disabled by default for DRS 2.1:
+The following rules are disabled by default for DRS 2.1.
-|Rule ID |Rule Group|Description |Details|
+|Rule ID |Rule group|Description |Details|
||||| |942110 |SQLI|SQL Injection Attack: Common Injection Testing Detected |Replaced by MSTIC rule 99031001 | |942150 |SQLI|SQL Injection Attack|Replaced by MSTIC rule 99031003 | |942260 |SQLI|Detects basic SQL authentication bypass attempts 2/3 |Replaced by MSTIC rule 99031004 |
-|942430 |SQLI|Restricted SQL Character Anomaly Detection (args): # of special characters exceeded (12)|Too many false positives.|
+|942430 |SQLI|Restricted SQL Character Anomaly Detection (args): # of special characters exceeded (12)|Too many false positives|
|942440 |SQLI|SQL Comment Sequence Detected|Replaced by MSTIC rule 99031002 | |99005006|MS-ThreatIntel-WebShells|Spring4Shell Interaction Attempt|Enable rule to prevent against SpringShell vulnerability| |99001014|MS-ThreatIntel-CVEs|Attempted Spring Cloud routing-expression injection [CVE-2022-22963](https://www.cve.org/CVERecord?id=CVE-2022-22963)|Enable rule to prevent against SpringShell vulnerability|
The following rules are disabled by default for DRS 2.1:
### DRS 2.0
-DRS 2.0 rules offer better protection than earlier versions of the DRS. It also supports transformations beyond just URL decoding.
+DRS 2.0 rules offer better protection than earlier versions of the DRS. DRS 2.0 also supports transformations beyond just URL decoding.
-DRS 2.0 includes 17 rule groups, as shown in the following table. Each group contains multiple rules, and you can disable individual rules as well as entire rule groups.
+DRS 2.0 includes 17 rule groups, as shown in the following table. Each group contains multiple rules. You can disable individual rules and entire rule groups.
> [!NOTE] > DRS 2.0 is only available on Azure Front Door Premium. |Rule group|Description| |||
-|**[General](#general-20)**|General group|
-|**[METHOD-ENFORCEMENT](#drs911-20)**|Lock-down methods (PUT, PATCH)|
-|**[PROTOCOL-ENFORCEMENT](#drs920-20)**|Protect against protocol and encoding issues|
-|**[PROTOCOL-ATTACK](#drs921-20)**|Protect against header injection, request smuggling, and response splitting|
-|**[APPLICATION-ATTACK-LFI](#drs930-20)**|Protect against file and path attacks|
-|**[APPLICATION-ATTACK-RFI](#drs931-20)**|Protect against remote file inclusion (RFI) attacks|
-|**[APPLICATION-ATTACK-RCE](#drs932-20)**|Protect again remote code execution attacks|
-|**[APPLICATION-ATTACK-PHP](#drs933-20)**|Protect against PHP-injection attacks|
-|**[APPLICATION-ATTACK-NodeJS](#drs934-20)**|Protect against Node JS attacks|
-|**[APPLICATION-ATTACK-XSS](#drs941-20)**|Protect against cross-site scripting attacks|
-|**[APPLICATION-ATTACK-SQLI](#drs942-20)**|Protect against SQL-injection attacks|
-|**[APPLICATION-ATTACK-SESSION-FIXATION](#drs943-20)**|Protect against session-fixation attacks|
-|**[APPLICATION-ATTACK-SESSION-JAVA](#drs944-20)**|Protect against JAVA attacks|
-|**[MS-ThreatIntel-WebShells](#drs9905-20)**|Protect against Web shell attacks|
-|**[MS-ThreatIntel-AppSec](#drs9903-20)**|Protect against AppSec attacks|
-|**[MS-ThreatIntel-SQLI](#drs99031-20)**|Protect against SQLI attacks|
-|**[MS-ThreatIntel-CVEs](#drs99001-20)**|Protect against CVE attacks|
+|[General](#general-20)|General group|
+|[METHOD-ENFORCEMENT](#drs911-20)|Lock-down methods (PUT, PATCH)|
+|[PROTOCOL-ENFORCEMENT](#drs920-20)|Protect against protocol and encoding issues|
+|[PROTOCOL-ATTACK](#drs921-20)|Protect against header injection, request smuggling, and response splitting|
+|[APPLICATION-ATTACK-LFI](#drs930-20)|Protect against file and path attacks|
+|[APPLICATION-ATTACK-RFI](#drs931-20)|Protect against remote file inclusion (RFI) attacks|
+|[APPLICATION-ATTACK-RCE](#drs932-20)|Protect against remote code execution attacks|
+|[APPLICATION-ATTACK-PHP](#drs933-20)|Protect against PHP-injection attacks|
+|[APPLICATION-ATTACK-NodeJS](#drs934-20)|Protect against Node JS attacks|
+|[APPLICATION-ATTACK-XSS](#drs941-20)|Protect against cross-site scripting attacks|
+|[APPLICATION-ATTACK-SQLI](#drs942-20)|Protect against SQL-injection attacks|
+|[APPLICATION-ATTACK-SESSION-FIXATION](#drs943-20)|Protect against session-fixation attacks|
+|[APPLICATION-ATTACK-SESSION-JAVA](#drs944-20)|Protect against JAVA attacks|
+|[MS-ThreatIntel-WebShells](#drs9905-20)|Protect against Web shell attacks|
+|[MS-ThreatIntel-AppSec](#drs9903-20)|Protect against AppSec attacks|
+|[MS-ThreatIntel-SQLI](#drs99031-20)|Protect against SQLI attacks|
+|[MS-ThreatIntel-CVEs](#drs99001-20)|Protect against CVE attacks|
### DRS 1.1 |Rule group|Description| |||
-|**[PROTOCOL-ATTACK](#drs921-11)**|Protect against header injection, request smuggling, and response splitting|
-|**[APPLICATION-ATTACK-LFI](#drs930-11)**|Protect against file and path attacks|
-|**[APPLICATION-ATTACK-RFI](#drs931-11)**|Protection against remote file inclusion attacks|
-|**[APPLICATION-ATTACK-RCE](#drs932-11)**|Protection against remote command execution|
-|**[APPLICATION-ATTACK-PHP](#drs933-11)**|Protect against PHP-injection attacks|
-|**[APPLICATION-ATTACK-XSS](#drs941-11)**|Protect against cross-site scripting attacks|
-|**[APPLICATION-ATTACK-SQLI](#drs942-11)**|Protect against SQL-injection attacks|
-|**[APPLICATION-ATTACK-SESSION-FIXATION](#drs943-11)**|Protect against session-fixation attacks|
-|**[APPLICATION-ATTACK-SESSION-JAVA](#drs944-11)**|Protect against JAVA attacks|
-|**[MS-ThreatIntel-WebShells](#drs9905-11)**|Protect against Web shell attacks|
-|**[MS-ThreatIntel-AppSec](#drs9903-11)**|Protect against AppSec attacks|
-|**[MS-ThreatIntel-SQLI](#drs99031-11)**|Protect against SQLI attacks|
-|**[MS-ThreatIntel-CVEs](#drs99001-11)**|Protect against CVE attacks|
+|[PROTOCOL-ATTACK](#drs921-11)|Protect against header injection, request smuggling, and response splitting|
+|[APPLICATION-ATTACK-LFI](#drs930-11)|Protect against file and path attacks|
+|[APPLICATION-ATTACK-RFI](#drs931-11)|Protection against remote file inclusion attacks|
+|[APPLICATION-ATTACK-RCE](#drs932-11)|Protection against remote command execution|
+|[APPLICATION-ATTACK-PHP](#drs933-11)|Protect against PHP-injection attacks|
+|[APPLICATION-ATTACK-XSS](#drs941-11)|Protect against cross-site scripting attacks|
+|[APPLICATION-ATTACK-SQLI](#drs942-11)|Protect against SQL-injection attacks|
+|[APPLICATION-ATTACK-SESSION-FIXATION](#drs943-11)|Protect against session-fixation attacks|
+|[APPLICATION-ATTACK-SESSION-JAVA](#drs944-11)|Protect against JAVA attacks|
+|[MS-ThreatIntel-WebShells](#drs9905-11)|Protect against Web shell attacks|
+|[MS-ThreatIntel-AppSec](#drs9903-11)|Protect against AppSec attacks|
+|[MS-ThreatIntel-SQLI](#drs99031-11)|Protect against SQLI attacks|
+|[MS-ThreatIntel-CVEs](#drs99001-11)|Protect against CVE attacks|
### DRS 1.0 |Rule group|Description| |||
-|**[PROTOCOL-ATTACK](#drs921-10)**|Protect against header injection, request smuggling, and response splitting|
-|**[APPLICATION-ATTACK-LFI](#drs930-10)**|Protect against file and path attacks|
-|**[APPLICATION-ATTACK-RFI](#drs931-10)**|Protection against remote file inclusion attacks|
-|**[APPLICATION-ATTACK-RCE](#drs932-10)**|Protection against remote command execution|
-|**[APPLICATION-ATTACK-PHP](#drs933-10)**|Protect against PHP-injection attacks|
-|**[APPLICATION-ATTACK-XSS](#drs941-10)**|Protect against cross-site scripting attacks|
-|**[APPLICATION-ATTACK-SQLI](#drs942-10)**|Protect against SQL-injection attacks|
-|**[APPLICATION-ATTACK-SESSION-FIXATION](#drs943-10)**|Protect against session-fixation attacks|
-|**[APPLICATION-ATTACK-SESSION-JAVA](#drs944-10)**|Protect against JAVA attacks|
-|**[MS-ThreatIntel-WebShells](#drs9905-10)**|Protect against Web shell attacks|
-|**[MS-ThreatIntel-CVEs](#drs99001-10)**|Protect against CVE attacks|
+|[PROTOCOL-ATTACK](#drs921-10)|Protect against header injection, request smuggling, and response splitting|
+|[APPLICATION-ATTACK-LFI](#drs930-10)|Protect against file and path attacks|
+|[APPLICATION-ATTACK-RFI](#drs931-10)|Protection against remote file inclusion attacks|
+|[APPLICATION-ATTACK-RCE](#drs932-10)|Protection against remote command execution|
+|[APPLICATION-ATTACK-PHP](#drs933-10)|Protect against PHP-injection attacks|
+|[APPLICATION-ATTACK-XSS](#drs941-10)|Protect against cross-site scripting attacks|
+|[APPLICATION-ATTACK-SQLI](#drs942-10)|Protect against SQL-injection attacks|
+|[APPLICATION-ATTACK-SESSION-FIXATION](#drs943-10)|Protect against session-fixation attacks|
+|[APPLICATION-ATTACK-SESSION-JAVA](#drs944-10)|Protect against JAVA attacks|
+|[MS-ThreatIntel-WebShells](#drs9905-10)|Protect against Web shell attacks|
+|[MS-ThreatIntel-CVEs](#drs99001-10)|Protect against CVE attacks|
### Bot rules |Rule group|Description| |||
-|**[BadBots](#bot100)**|Protect against bad bots|
-|**[GoodBots](#bot200)**|Identify good bots|
-|**[UnknownBots](#bot300)**|Identify unknown bots|
+|[BadBots](#bot100)|Protect against bad bots|
+|[GoodBots](#bot200)|Identify good bots|
+|[UnknownBots](#bot300)|Identify unknown bots|
-The following rule groups and rules are available when using Web Application Firewall on Azure Front Door.
+The following rule groups and rules are available when you use Azure Web Application Firewall on Azure Front Door.
# [DRS 2.1](#tab/drs21)
The following rule groups and rules are available when using Web Application Fir
### <a name="general-21"></a> General |RuleId|Description| |||
-|200002|Failed to parse request body.|
+|200002|Failed to parse request body|
|200003|Multipart request body failed strict validation|
-### <a name="drs911-21"></a> METHOD ENFORCEMENT
+### <a name="drs911-21"></a> Method enforcement
|RuleId|Description| |||
-|911100|Method is not allowed by policy|
+|911100|Method isn't allowed by policy|
-### <a name="drs920-21"></a> PROTOCOL-ENFORCEMENT
+### <a name="drs920-21"></a> Protocol enforcement
|RuleId|Description| |||
-|920100|Invalid HTTP Request Line|
-|920120|Attempted multipart/form-data bypass|
-|920121|Attempted multipart/form-data bypass|
-|920160|Content-Length HTTP header is not numeric.|
+|920100|Invalid HTTP Request Line.|
+|920120|Attempted multipart/form-data bypass.|
+|920121|Attempted multipart/form-data bypass.|
+|920160|Content-Length HTTP header isn't numeric.|
|920170|GET or HEAD Request with Body Content.| |920171|GET or HEAD Request with Transfer-Encoding.| |920180|POST request missing Content-Length Header.|
-|920181|Content-Length and Transfer-Encoding headers present 99001003|
+|920181|Content-Length and Transfer-Encoding headers present 99001003.|
|920190|Range: Invalid Last Byte Value.|
-|920200|Range: Too many fields (6 or more)|
-|920201|Range: Too many fields for pdf request (35 or more)|
+|920200|Range: Too many fields (6 or more).|
+|920201|Range: Too many fields for pdf request (35 or more).|
|920210|Multiple/Conflicting Connection Header Data Found.|
-|920220|URL Encoding Abuse Attack Attempt|
-|920230|Multiple URL Encoding Detected|
-|920240|URL Encoding Abuse Attack Attempt|
-|920260|Unicode Full/Half Width Abuse Attack Attempt|
-|920270|Invalid character in request (null character)|
-|920271|Invalid character in request (non printable characters)|
-|920280|Request Missing a Host Header|
-|920290|Empty Host Header|
-|920300|Request Missing an Accept Header|
-|920310|Request Has an Empty Accept Header|
-|920311|Request Has an Empty Accept Header|
-|920320|Missing User Agent Header|
-|920330|Empty User Agent Header|
-|920340|Request Containing Content, but Missing Content-Type header|
-|920341|Request containing content requires Content-Type header|
-|920350|Host header is a numeric IP address|
-|920420|Request content type is not allowed by policy|
-|920430|HTTP protocol version is not allowed by policy|
-|920440|URL file extension is restricted by policy|
-|920450|HTTP header is restricted by policy|
-|920470|Illegal Content-Type header|
-|920480|Request content type charset is not allowed by policy|
-|920500|Attempt to access a backup or working file|
-
-### <a name="drs921-21"></a> PROTOCOL-ATTACK
+|920220|URL Encoding Abuse Attack Attempt.|
+|920230|Multiple URL Encoding Detected.|
+|920240|URL Encoding Abuse Attack Attempt.|
+|920260|Unicode Full/Half Width Abuse Attack Attempt.|
+|920270|Invalid character in request (null character).|
+|920271|Invalid character in request (nonprintable characters).|
+|920280|Request Missing a Host Header.|
+|920290|Empty Host Header.|
+|920300|Request Missing an Accept Header.|
+|920310|Request Has an Empty Accept Header.|
+|920311|Request Has an Empty Accept Header.|
+|920320|Missing User Agent Header.|
+|920330|Empty User Agent Header.|
+|920340|Request Containing Content, but Missing Content-Type header.|
+|920341|Request containing content requires Content-Type header.|
+|920350|Host header is a numeric IP address.|
+|920420|Request content type isn't allowed by policy.|
+|920430|HTTP protocol version isn't allowed by policy.|
+|920440|URL file extension is restricted by policy.|
+|920450|HTTP header is restricted by policy.|
+|920470|Illegal Content-Type header.|
+|920480|Request content type charset isn't allowed by policy.|
+|920500|Attempt to access a backup or working file.|
+
+### <a name="drs921-21"></a> Protocol attack
|RuleId|Description| |||
The following rule groups and rules are available when using Web Application Fir
|921190|HTTP Splitting (CR/LF in request filename detected)| |921200|LDAP Injection Attack| -
-### <a name="drs930-21"></a> LFI - Local File Inclusion
+### <a name="drs930-21"></a> LFI: Local file inclusion
|RuleId|Description| ||| |930100|Path Traversal Attack (/../)|
The following rule groups and rules are available when using Web Application Fir
|930120|OS File Access Attempt| |930130|Restricted File Access Attempt|
-### <a name="drs931-21"></a> RFI - Remote File Inclusion
+### <a name="drs931-21"></a> RFI: Remote file inclusion
|RuleId|Description| |||
-|931100|Possible Remote File Inclusion (RFI) Attack: URL Parameter using IP Address|
+|931100|Possible Remote File Inclusion (RFI) Attack: URL Parameter using IP address|
|931110|Possible Remote File Inclusion (RFI) Attack: Common RFI Vulnerable Parameter Name used w/URL Payload| |931120|Possible Remote File Inclusion (RFI) Attack: URL Payload Used w/Trailing Question Mark Character (?)| |931130|Possible Remote File Inclusion (RFI) Attack: Off-Domain Reference/Link|
-### <a name="drs932-21"></a> RCE - Remote Command Execution
+### <a name="drs932-21"></a> RCE: Remote command execution
|RuleId|Description| ||| |932100|Remote Command Execution: Unix Command Injection|
The following rule groups and rules are available when using Web Application Fir
|932171|Remote Command Execution: Shellshock (CVE-2014-6271)| |932180|Restricted File Upload Attempt|
-### <a name="drs933-21"></a> PHP Attacks
+### <a name="drs933-21"></a> PHP attacks
|RuleId|Description| ||| |933100|PHP Injection Attack: Opening/Closing Tag Found|
The following rule groups and rules are available when using Web Application Fir
|933200|PHP Injection Attack: Wrapper scheme detected| |933210|PHP Injection Attack: Variable Function Call Found|
-### <a name="drs934-21"></a> Node JS Attacks
+### <a name="drs934-21"></a> Node JS attacks
|RuleId|Description| ||| |934100|Node.js Injection Attack|
-### <a name="drs941-21"></a> XSS - Cross-site Scripting
+### <a name="drs941-21"></a> XSS: Cross-site scripting
|RuleId|Description| ||| |941100|XSS Attack Detected via libinjection|
-|941101|XSS Attack Detected via libinjection.<br />This rule detects requests with a *Referer* header.|
+|941101|XSS Attack Detected via libinjection<br />Rule detects requests with a `Referer` header|
|941110|XSS Filter - Category 1: Script Tag Vector| |941120|XSS Filter - Category 2: Event Handler Vector| |941130|XSS Filter - Category 3: Attribute Vector|
The following rule groups and rules are available when using Web Application Fir
|941160|NoScript XSS InjectionChecker: HTML Injection| |941170|NoScript XSS InjectionChecker: Attribute Injection| |941180|Node-Validator Blacklist Keywords|
-|941190|XSS Using style sheets|
+|941190|XSS using style sheets|
|941200|XSS using VML frames| |941210|XSS using obfuscated JavaScript| |941220|XSS using obfuscated VB Script|
-|941230|XSS using 'embed' tag|
-|941240|XSS using 'import' or 'implementation' attribute|
-|941250|IE XSS Filters - Attack Detected.|
-|941260|XSS using 'meta' tag|
-|941270|XSS using 'link' href|
-|941280|XSS using 'base' tag|
-|941290|XSS using 'applet' tag|
-|941300|XSS using 'object' tag|
-|941310|US-ASCII Malformed Encoding XSS Filter - Attack Detected.|
+|941230|XSS using `embed` tag|
+|941240|XSS using `import` or `implementation` attribute|
+|941250|IE XSS Filters - Attack Detected|
+|941260|XSS using `meta` tag|
+|941270|XSS using `link` href|
+|941280|XSS using `base` tag|
+|941290|XSS using `applet` tag|
+|941300|XSS using `object` tag|
+|941310|US-ASCII Malformed Encoding XSS Filter - Attack Detected|
|941320|Possible XSS Attack Detected - HTML Tag Handler|
-|941330|IE XSS Filters - Attack Detected.|
-|941340|IE XSS Filters - Attack Detected.|
-|941350|UTF-7 Encoding IE XSS - Attack Detected.|
-|941360|JavaScript obfuscation detected.|
+|941330|IE XSS Filters - Attack Detected|
+|941340|IE XSS Filters - Attack Detected|
+|941350|UTF-7 Encoding IE XSS - Attack Detected|
+|941360|JavaScript obfuscation detected|
|941370|JavaScript global variable found| |941380|AngularJS client side template injection detected| >[!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-### <a name="drs942-21"></a> SQLI - SQL Injection
+### <a name="drs942-21"></a> SQLI: SQL injection
|RuleId|Description| |||
-|942100|SQL Injection Attack Detected via libinjection|
-|942110|SQL Injection Attack: Common Injection Testing Detected|
-|942120|SQL Injection Attack: SQL Operator Detected|
-|942140|SQL Injection Attack: Common DB Names Detected|
-|942150|SQL Injection Attack|
-|942160|Detects blind sqli tests using sleep() or benchmark().|
-|942170|Detects SQL benchmark and sleep injection attempts including conditional queries|
-|942180|Detects basic SQL authentication bypass attempts 1/3|
-|942190|Detects MSSQL code execution and information gathering attempts|
-|942200|Detects MySQL comment-/space-obfuscated injections and backtick termination|
-|942210|Detects chained SQL injection attempts 1/2|
-|942220|Looking for integer overflow attacks, these are taken from skipfish, except 3.0.00738585072007e-308 is the "magic number" crash|
-|942230|Detects conditional SQL injection attempts|
-|942240|Detects MySQL charset switch and MSSQL DoS attempts|
-|942250|Detects MATCH AGAINST, MERGE and EXECUTE IMMEDIATE injections|
-|942260|Detects basic SQL authentication bypass attempts 2/3|
-|942270|Looking for basic sql injection. Common attack string for mysql, oracle, and others.|
-|942280|Detects Postgres pg_sleep injection, waitfor delay attacks and database shutdown attempts|
-|942290|Finds basic MongoDB SQL injection attempts|
-|942300|Detects MySQL comments, conditions, and ch(a)r injections|
-|942310|Detects chained SQL injection attempts 2/2|
-|942320|Detects MySQL and PostgreSQL stored procedure/function injections|
-|942330|Detects classic SQL injection probings 1/2|
-|942340|Detects basic SQL authentication bypass attempts 3/3|
-|942350|Detects MySQL UDF injection and other data/structure manipulation attempts|
-|942360|Detects concatenated basic SQL injection and SQLLFI attempts|
-|942361|Detects basic SQL injection based on keyword alter or union|
-|942370|Detects classic SQL injection probings 2/2|
-|942380|SQL Injection Attack|
-|942390|SQL Injection Attack|
-|942400|SQL Injection Attack|
-|942410|SQL Injection Attack|
-|942430|Restricted SQL Character Anomaly Detection (args): # of special characters exceeded (12)|
-|942440|SQL Comment Sequence Detected|
-|942450|SQL Hex Encoding Identified|
-|942460|Meta-Character Anomaly Detection Alert - Repetitive Non-Word Characters|
-|942470|SQL Injection Attack|
-|942480|SQL Injection Attack|
+|942100|SQL Injection Attack Detected via libinjection.|
+|942110|SQL Injection Attack: Common Injection Testing Detected.|
+|942120|SQL Injection Attack: SQL Operator Detected.|
+|942140|SQL Injection Attack: Common DB Names Detected.|
+|942150|SQL Injection Attack.|
+|942160|Detects blind SQLI tests using sleep() or benchmark().|
+|942170|Detects SQL benchmark and sleep injection attempts including conditional queries.|
+|942180|Detects basic SQL authentication bypass attempts 1/3.|
+|942190|Detects MSSQL code execution and information gathering attempts.|
+|942200|Detects MySQL comment-/space-obfuscated injections and backtick termination.|
+|942210|Detects chained SQL injection attempts 1/2.|
+|942220|Looking for integer overflow attacks, these are taken from skipfish, except 3.0.00738585072007e-308 is the "magic number" crash.|
+|942230|Detects conditional SQL injection attempts.|
+|942240|Detects MySQL charset switch and MSSQL DoS attempts.|
+|942250|Detects MATCH AGAINST, MERGE and EXECUTE IMMEDIATE injections.|
+|942260|Detects basic SQL authentication bypass attempts 2/3.|
+|942270|Looking for basic SQL injection. Common attack string for MySQL, Oracle, and others.|
+|942280|Detects Postgres pg_sleep injection, waitfor delay attacks, and database shutdown attempts.|
+|942290|Finds basic MongoDB SQL injection attempts.|
+|942300|Detects MySQL comments, conditions, and ch(a)r injections.|
+|942310|Detects chained SQL injection attempts 2/2.|
+|942320|Detects MySQL and PostgreSQL stored procedure/function injections.|
+|942330|Detects classic SQL injection probings 1/2.|
+|942340|Detects basic SQL authentication bypass attempts 3/3.|
+|942350|Detects MySQL UDF injection and other data/structure manipulation attempts.|
+|942360|Detects concatenated basic SQL injection and SQLLFI attempts.|
+|942361|Detects basic SQL injection based on keyword alter or union.|
+|942370|Detects classic SQL injection probings 2/2.|
+|942380|SQL Injection Attack.|
+|942390|SQL Injection Attack.|
+|942400|SQL Injection Attack.|
+|942410|SQL Injection Attack.|
+|942430|Restricted SQL Character Anomaly Detection (args): # of special characters exceeded (12).|
+|942440|SQL Comment Sequence Detected.|
+|942450|SQL Hex Encoding Identified.|
+|942460|Meta-Character Anomaly Detection Alert - Repetitive Non-Word Characters.|
+|942470|SQL Injection Attack.|
+|942480|SQL Injection Attack.|
|942500|MySQL in-line comment detected.| |942510|SQLi bypass attempt by ticks or backticks detected.|
-### <a name="drs943-21"></a> SESSION-FIXATION
+### <a name="drs943-21"></a> Session fixation
|RuleId|Description| ||| |943100|Possible Session Fixation Attack: Setting Cookie Values in HTML| |943110|Possible Session Fixation Attack: SessionID Parameter Name with Off-Domain Referrer| |943120|Possible Session Fixation Attack: SessionID Parameter Name with No Referrer|
-### <a name="drs944-21"></a> JAVA Attacks
+### <a name="drs944-21"></a> Java attacks
|RuleId|Description| ||| |944100|Remote Command Execution: Apache Struts, Oracle WebLogic|
The following rule groups and rules are available when using Web Application Fir
|RuleId|Description| ||| |99031001|SQL Injection Attack: Common Injection Testing Detected|
-|99031002|SQL Comment Sequence Detected.|
+|99031002|SQL Comment Sequence Detected|
|99031003|SQL Injection Attack| |99031004|Detects basic SQL authentication bypass attempts 2/3| ### <a name="drs99001-21"></a> MS-ThreatIntel-CVEs |RuleId|Description| |||
-|99001001|Attempted F5 tmui (CVE-2020-5902) REST API Exploitation with known credentials|
+|99001001|Attempted F5 tmui (CVE-2020-5902) REST API exploitation with known credentials|
|99001002|Attempted Citrix NSC_USER directory traversal [CVE-2019-19781](https://www.cve.org/CVERecord?id=CVE-2019-19781)| |99001003|Attempted Atlassian Confluence Widget Connector exploitation [CVE-2019-3396](https://www.cve.org/CVERecord?id=CVE-2019-3396)| |99001004|Attempted Pulse Secure custom template exploitation [CVE-2020-8243](https://www.cve.org/CVERecord?id=CVE-2019-8243)|
The following rule groups and rules are available when using Web Application Fir
|99001016|Attempted Spring Cloud Gateway Actuator injection [CVE-2022-22947](https://www.cve.org/CVERecord?id=CVE-2022-22947)| > [!NOTE]
-> When reviewing your WAF's logs, you might see rule ID 949110. The description of the rule might include *Inbound Anomaly Score Exceeded*.
+> When you review your WAF's logs, you might see rule ID 949110. The description of the rule might include *Inbound Anomaly Score Exceeded*.
> > This rule indicates that the total anomaly score for the request exceeded the maximum allowable score. For more information, see [Anomaly scoring](#anomaly-scoring-mode). >
-> When you tune your WAF policies, you need to investigate the other rules that were triggered by the request so that you can adjust your WAF's configuration. For more information, see [Tuning Web Application Firewall (WAF) for Azure Front Door](waf-front-door-tuning.md).
-
+> When you tune your WAF policies, you need to investigate the other rules that were triggered by the request so that you can adjust your WAF's configuration. For more information, see [Tuning Azure Web Application Firewall for Azure Front Door](waf-front-door-tuning.md).
# [DRS 2.0](#tab/drs20)
The following rule groups and rules are available when using Web Application Fir
|RuleId|Description| ||| |200002|Failed to parse request body.|
-|200003|Multipart request body failed strict validation|
+|200003|Multipart request body failed strict validation.|
-### <a name="drs911-20"></a> METHOD ENFORCEMENT
+### <a name="drs911-20"></a> Method enforcement
|RuleId|Description| |||
-|911100|Method is not allowed by policy|
+|911100|Method isn't allowed by policy.|
-### <a name="drs920-20"></a> PROTOCOL-ENFORCEMENT
+### <a name="drs920-20"></a> Protocol enforcement
|RuleId|Description| |||
-|920100|Invalid HTTP Request Line|
-|920120|Attempted multipart/form-data bypass|
-|920121|Attempted multipart/form-data bypass|
-|920160|Content-Length HTTP header is not numeric.|
+|920100|Invalid HTTP Request Line.|
+|920120|Attempted multipart/form-data bypass.|
+|920121|Attempted multipart/form-data bypass.|
+|920160|Content-Length HTTP header isn't numeric.|
|920170|GET or HEAD Request with Body Content.| |920171|GET or HEAD Request with Transfer-Encoding.| |920180|POST request missing Content-Length Header.| |920190|Range: Invalid Last Byte Value.|
-|920200|Range: Too many fields (6 or more)|
-|920201|Range: Too many fields for pdf request (35 or more)|
+|920200|Range: Too many fields (6 or more).|
+|920201|Range: Too many fields for pdf request (35 or more).|
|920210|Multiple/Conflicting Connection Header Data Found.|
-|920220|URL Encoding Abuse Attack Attempt|
-|920230|Multiple URL Encoding Detected|
-|920240|URL Encoding Abuse Attack Attempt|
-|920260|Unicode Full/Half Width Abuse Attack Attempt|
-|920270|Invalid character in request (null character)|
-|920271|Invalid character in request (non printable characters)|
-|920280|Request Missing a Host Header|
-|920290|Empty Host Header|
-|920300|Request Missing an Accept Header|
-|920310|Request Has an Empty Accept Header|
-|920311|Request Has an Empty Accept Header|
-|920320|Missing User Agent Header|
-|920330|Empty User Agent Header|
-|920340|Request Containing Content, but Missing Content-Type header|
-|920341|Request containing content requires Content-Type header|
-|920350|Host header is a numeric IP address|
-|920420|Request content type is not allowed by policy|
-|920430|HTTP protocol version is not allowed by policy|
-|920440|URL file extension is restricted by policy|
-|920450|HTTP header is restricted by policy|
-|920470|Illegal Content-Type header|
-|920480|Request content type charset is not allowed by policy|
-
-### <a name="drs921-20"></a> PROTOCOL-ATTACK
+|920220|URL Encoding Abuse Attack Attempt.|
+|920230|Multiple URL Encoding Detected.|
+|920240|URL Encoding Abuse Attack Attempt.|
+|920260|Unicode Full/Half Width Abuse Attack Attempt.|
+|920270|Invalid character in request (null character).|
+|920271|Invalid character in request (nonprintable characters).|
+|920280|Request Missing a Host Header.|
+|920290|Empty Host Header.|
+|920300|Request Missing an Accept Header.|
+|920310|Request Has an Empty Accept Header.|
+|920311|Request Has an Empty Accept Header.|
+|920320|Missing User Agent Header.|
+|920330|Empty User Agent Header.|
+|920340|Request Containing Content, but Missing Content-Type header.|
+|920341|Request containing content requires Content-Type header.|
+|920350|Host header is a numeric IP address.|
+|920420|Request content type isn't allowed by policy.|
+|920430|HTTP protocol version isn't allowed by policy.|
+|920440|URL file extension is restricted by policy.|
+|920450|HTTP header is restricted by policy.|
+|920470|Illegal Content-Type header.|
+|920480|Request content type charset isn't allowed by policy.|
+
+### <a name="drs921-20"></a> Protocol attack
|RuleId|Description| |||
The following rule groups and rules are available when using Web Application Fir
|921151|HTTP Header Injection Attack via payload (CR/LF detected)| |921160|HTTP Header Injection Attack via payload (CR/LF and header-name detected)|
-### <a name="drs930-20"></a> LFI - Local File Inclusion
+### <a name="drs930-20"></a> LFI: Local file inclusion
|RuleId|Description| ||| |930100|Path Traversal Attack (/../)|
The following rule groups and rules are available when using Web Application Fir
|930120|OS File Access Attempt| |930130|Restricted File Access Attempt|
-### <a name="drs931-20"></a> RFI - Remote File Inclusion
+### <a name="drs931-20"></a> RFI: Remote file inclusion
|RuleId|Description| ||| |931100|Possible Remote File Inclusion (RFI) Attack: URL Parameter using IP Address|
The following rule groups and rules are available when using Web Application Fir
|931120|Possible Remote File Inclusion (RFI) Attack: URL Payload Used w/Trailing Question Mark Character (?)| |931130|Possible Remote File Inclusion (RFI) Attack: Off-Domain Reference/Link|
-### <a name="drs932-20"></a> RCE - Remote Command Execution
+### <a name="drs932-20"></a> RCE: Remote command execution
|RuleId|Description| ||| |932100|Remote Command Execution: Unix Command Injection|
The following rule groups and rules are available when using Web Application Fir
|932171|Remote Command Execution: Shellshock (CVE-2014-6271)| |932180|Restricted File Upload Attempt|
-### <a name="drs933-20"></a> PHP Attacks
+### <a name="drs933-20"></a> PHP attacks
|RuleId|Description| ||| |933100|PHP Injection Attack: Opening/Closing Tag Found|
The following rule groups and rules are available when using Web Application Fir
|933200|PHP Injection Attack: Wrapper scheme detected| |933210|PHP Injection Attack: Variable Function Call Found|
-### <a name="drs934-20"></a> Node JS Attacks
+### <a name="drs934-20"></a> Node JS attacks
|RuleId|Description| ||| |934100|Node.js Injection Attack|
-### <a name="drs941-20"></a> XSS - Cross-site Scripting
+### <a name="drs941-20"></a> XSS: Cross-site scripting
|RuleId|Description| |||
-|941100|XSS Attack Detected via libinjection|
-|941101|XSS Attack Detected via libinjection.<br />This rule detects requests with a *Referer* header.|
-|941110|XSS Filter - Category 1: Script Tag Vector|
-|941120|XSS Filter - Category 2: Event Handler Vector|
-|941130|XSS Filter - Category 3: Attribute Vector|
-|941140|XSS Filter - Category 4: JavaScript URI Vector|
-|941150|XSS Filter - Category 5: Disallowed HTML Attributes|
-|941160|NoScript XSS InjectionChecker: HTML Injection|
-|941170|NoScript XSS InjectionChecker: Attribute Injection|
-|941180|Node-Validator Blacklist Keywords|
-|941190|XSS Using style sheets|
-|941200|XSS using VML frames|
-|941210|IE XSS Filters - Attack Detected or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889))|
-|941220|XSS using obfuscated VB Script|
-|941230|XSS using 'embed' tag|
-|941240|XSS using 'import' or 'implementation' attribute|
+|941100|XSS Attack Detected via libinjection.|
+|941101|XSS Attack Detected via libinjection.<br />This rule detects requests with a `Referer` header.|
+|941110|XSS Filter - Category 1: Script Tag Vector.|
+|941120|XSS Filter - Category 2: Event Handler Vector.|
+|941130|XSS Filter - Category 3: Attribute Vector.|
+|941140|XSS Filter - Category 4: JavaScript URI Vector.|
+|941150|XSS Filter - Category 5: Disallowed HTML Attributes.|
+|941160|NoScript XSS InjectionChecker: HTML Injection.|
+|941170|NoScript XSS InjectionChecker: Attribute Injection.|
+|941180|Node-Validator Blacklist Keywords.|
+|941190|XSS Using style sheets.|
+|941200|XSS using VML frames.|
+|941210|IE XSS Filters - Attack Detected or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)).|
+|941220|XSS using obfuscated VB Script.|
+|941230|XSS using `embed` tag.|
+|941240|XSS using `import` or `implementation` attribute.|
|941250|IE XSS Filters - Attack Detected.|
-|941260|XSS using 'meta' tag|
-|941270|XSS using 'link' href|
-|941280|XSS using 'base' tag|
-|941290|XSS using 'applet' tag|
-|941300|XSS using 'object' tag|
+|941260|XSS using `meta` tag.|
+|941270|XSS using `link` href.|
+|941280|XSS using `base` tag.|
+|941290|XSS using `applet` tag.|
+|941300|XSS using `object` tag.|
|941310|US-ASCII Malformed Encoding XSS Filter - Attack Detected.|
-|941320|Possible XSS Attack Detected - HTML Tag Handler|
+|941320|Possible XSS Attack Detected - HTML Tag Handler.|
|941330|IE XSS Filters - Attack Detected.| |941340|IE XSS Filters - Attack Detected.| |941350|UTF-7 Encoding IE XSS - Attack Detected.| |941360|JavaScript obfuscation detected.|
-|941370|JavaScript global variable found|
-|941380|AngularJS client side template injection detected|
+|941370|JavaScript global variable found.|
+|941380|AngularJS client side template injection detected.|
>[!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we’ll remove it from this article.
+> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-### <a name="drs942-20"></a> SQLI - SQL Injection
+### <a name="drs942-20"></a> SQLI: SQL injection
|RuleId|Description| |||
-|942100|SQL Injection Attack Detected via libinjection|
-|942110|SQL Injection Attack: Common Injection Testing Detected|
-|942120|SQL Injection Attack: SQL Operator Detected|
-|942140|SQL Injection Attack: Common DB Names Detected|
-|942150|SQL Injection Attack|
-|942160|Detects blind sqli tests using sleep() or benchmark().|
-|942170|Detects SQL benchmark and sleep injection attempts including conditional queries|
-|942180|Detects basic SQL authentication bypass attempts 1/3|
-|942190|Detects MSSQL code execution and information gathering attempts|
-|942200|Detects MySQL comment-/space-obfuscated injections and backtick termination|
-|942210|Detects chained SQL injection attempts 1/2|
-|942220|Looking for integer overflow attacks, these are taken from skipfish, except 3.0.00738585072007e-308 is the "magic number" crash|
-|942230|Detects conditional SQL injection attempts|
-|942240|Detects MySQL charset switch and MSSQL DoS attempts|
-|942250|Detects MATCH AGAINST, MERGE and EXECUTE IMMEDIATE injections|
-|942260|Detects basic SQL authentication bypass attempts 2/3|
-|942270|Looking for basic sql injection. Common attack string for mysql, oracle, and others.|
-|942280|Detects Postgres pg_sleep injection, waitfor delay attacks and database shutdown attempts|
-|942290|Finds basic MongoDB SQL injection attempts|
-|942300|Detects MySQL comments, conditions, and ch(a)r injections|
-|942310|Detects chained SQL injection attempts 2/2|
-|942320|Detects MySQL and PostgreSQL stored procedure/function injections|
-|942330|Detects classic SQL injection probings 1/2|
-|942340|Detects basic SQL authentication bypass attempts 3/3|
-|942350|Detects MySQL UDF injection and other data/structure manipulation attempts|
-|942360|Detects concatenated basic SQL injection and SQLLFI attempts|
-|942361|Detects basic SQL injection based on keyword alter or union|
-|942370|Detects classic SQL injection probings 2/2|
-|942380|SQL Injection Attack|
-|942390|SQL Injection Attack|
-|942400|SQL Injection Attack|
-|942410|SQL Injection Attack|
-|942430|Restricted SQL Character Anomaly Detection (args): # of special characters exceeded (12)|
+|942100|SQL Injection Attack Detected via libinjection.|
+|942110|SQL Injection Attack: Common Injection Testing Detected.|
+|942120|SQL Injection Attack: SQL Operator Detected.|
+|942140|SQL Injection Attack: Common DB Names Detected.|
+|942150|SQL Injection Attack.|
+|942160|Detects blind SQLI tests using sleep() or benchmark().|
+|942170|Detects SQL benchmark and sleep injection attempts including conditional queries.|
+|942180|Detects basic SQL authentication bypass attempts 1/3.|
+|942190|Detects MSSQL code execution and information gathering attempts.|
+|942200|Detects MySQL comment-/space-obfuscated injections and backtick termination.|
+|942210|Detects chained SQL injection attempts 1/2.|
+|942220|Looking for integer overflow attacks, these are taken from skipfish, except 3.0.00738585072007e-308 is the "magic number" crash.|
+|942230|Detects conditional SQL injection attempts.|
+|942240|Detects MySQL charset switch and MSSQL DoS attempts.|
+|942250|Detects MATCH AGAINST, MERGE and EXECUTE IMMEDIATE injections.|
+|942260|Detects basic SQL authentication bypass attempts 2/3.|
+|942270|Looking for basic SQL injection. Common attack string for MySQL, Oracle, and others.|
+|942280|Detects Postgres pg_sleep injection, waitfor delay attacks and database shutdown attempts.|
+|942290|Finds basic MongoDB SQL injection attempts.|
+|942300|Detects MySQL comments, conditions, and ch(a)r injections.|
+|942310|Detects chained SQL injection attempts 2/2.|
+|942320|Detects MySQL and PostgreSQL stored procedure/function injections.|
+|942330|Detects classic SQL injection probings 1/2.|
+|942340|Detects basic SQL authentication bypass attempts 3/3.|
+|942350|Detects MySQL UDF injection and other data/structure manipulation attempts.|
+|942360|Detects concatenated basic SQL injection and SQLLFI attempts.|
+|942361|Detects basic SQL injection based on keyword alter or union.|
+|942370|Detects classic SQL injection probings 2/2.|
+|942380|SQL Injection Attack.|
+|942390|SQL Injection Attack.|
+|942400|SQL Injection Attack.|
+|942410|SQL Injection Attack.|
+|942430|Restricted SQL Character Anomaly Detection (args): # of special characters exceeded (12).|
|942440|SQL Comment Sequence Detected.|
-|942450|SQL Hex Encoding Identified|
-|942460|Meta-Character Anomaly Detection Alert - Repetitive Non-Word Characters|
-|942470|SQL Injection Attack|
-|942480|SQL Injection Attack|
+|942450|SQL Hex Encoding Identified.|
+|942460|Meta-Character Anomaly Detection Alert - Repetitive Non-Word Characters.|
+|942470|SQL Injection Attack.|
+|942480|SQL Injection Attack.|
|942500|MySQL in-line comment detected.| |942510|SQLi bypass attempt by ticks or backticks detected.|
-### <a name="drs943-20"></a> SESSION-FIXATION
+### <a name="drs943-20"></a> Session fixation
|RuleId|Description| ||| |943100|Possible Session Fixation Attack: Setting Cookie Values in HTML| |943110|Possible Session Fixation Attack: SessionID Parameter Name with Off-Domain Referrer| |943120|Possible Session Fixation Attack: SessionID Parameter Name with No Referrer|
-### <a name="drs944-20"></a> JAVA Attacks
+### <a name="drs944-20"></a> Java attacks
|RuleId|Description| ||| |944100|Remote Command Execution: Apache Struts, Oracle WebLogic|
The following rule groups and rules are available when using Web Application Fir
|RuleId|Description| ||| |99031001|SQL Injection Attack: Common Injection Testing Detected|
-|99031002|SQL Comment Sequence Detected.|
+|99031002|SQL Comment Sequence Detected|
### <a name="drs99001-20"></a> MS-ThreatIntel-CVEs |RuleId|Description|
The following rule groups and rules are available when using Web Application Fir
|99001016|Attempted Spring Cloud Gateway Actuator injection [CVE-2022-22947](https://www.cve.org/CVERecord?id=CVE-2022-22947)| > [!NOTE]
-> When reviewing your WAF's logs, you might see rule ID 949110. The description of the rule might include *Inbound Anomaly Score Exceeded*.
+> When you review your WAF's logs, you might see rule ID 949110. The description of the rule might include *Inbound Anomaly Score Exceeded*.
> > This rule indicates that the total anomaly score for the request exceeded the maximum allowable score. For more information, see [Anomaly scoring](#anomaly-scoring-mode).
->
-> When you tune your WAF policies, you need to investigate the other rules that were triggered by the request so that you can adjust your WAF's configuration. For more information, see [Tuning Web Application Firewall (WAF) for Azure Front Door](waf-front-door-tuning.md).
+>
+> When you tune your WAF policies, you need to investigate the other rules that were triggered by the request so that you can adjust your WAF's configuration. For more information, see [Tuning Azure Web Application Firewall for Azure Front Door](waf-front-door-tuning.md).
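For example, one common tuning step is to override the action of the specific DRS rule that contributed to the anomaly score. The following Azure PowerShell snippet is a minimal sketch, not the documented tuning procedure: it assumes the Az.FrontDoor module, a policy named `myWafPolicy` in `myResourceGroupFD1`, the DRS rule group name `SQLI`, and that `New-AzFrontDoorWafManagedRuleObject` accepts an `-Action` parameter for DRS 2.1.
```azurepowershell-interactive
# Sketch only: policy, resource group, and group names are placeholder assumptions.
# Switch DRS rule 942440 (SQL Comment Sequence Detected) to Log while you investigate.
$ruleOverride  = New-AzFrontDoorWafManagedRuleOverrideObject -RuleId "942440" -Action Log
$groupOverride = New-AzFrontDoorWafRuleGroupOverrideObject -RuleGroupName "SQLI" -ManagedRuleOverride $ruleOverride
$managedRule   = New-AzFrontDoorWafManagedRuleObject -Type "Microsoft_DefaultRuleSet" -Version "2.1" -Action Block -RuleGroupOverride $groupOverride
Update-AzFrontDoorWafPolicy -Name "myWafPolicy" -ResourceGroupName "myResourceGroupFD1" -ManagedRule $managedRule
```
After the noisy rule runs in log mode, review the WAF logs again before you decide whether to restore blocking or add an exclusion.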
# [DRS 1.1](#tab/drs11) ## <a name="drs11"></a> 1.1 rule sets
-### <a name="drs921-11"></a> PROTOCOL-ATTACK
+### <a name="drs921-11"></a> Protocol attack
|RuleId|Description| ||| |921110|HTTP Request Smuggling Attack|
The following rule groups and rules are available when using Web Application Fir
|921151|HTTP Header Injection Attack via payload (CR/LF detected)| |921160|HTTP Header Injection Attack via payload (CR/LF and header-name detected)|
-### <a name="drs930-11"></a> LFI - Local File Inclusion
+### <a name="drs930-11"></a> LFI: Local file inclusion
|RuleId|Description| ||| |930100|Path Traversal Attack (/../)|
The following rule groups and rules are available when using Web Application Fir
|930120|OS File Access Attempt| |930130|Restricted File Access Attempt|
-### <a name="drs931-11"></a> RFI - Remote File Inclusion
+### <a name="drs931-11"></a> RFI: Remote file inclusion
|RuleId|Description| ||| |931100|Possible Remote File Inclusion (RFI) Attack: URL Parameter using IP Address|
The following rule groups and rules are available when using Web Application Fir
|931120|Possible Remote File Inclusion (RFI) Attack: URL Payload Used w/Trailing Question Mark Character (?)| |931130|Possible Remote File Inclusion (RFI) Attack: Off-Domain Reference/Link|
-### <a name="drs932-11"></a> RCE - Remote Command Execution
+### <a name="drs932-11"></a> RCE: Remote command execution
|RuleId|Description| ||| |932100|Remote Command Execution: Unix Command Injection|
The following rule groups and rules are available when using Web Application Fir
|932171|Remote Command Execution: Shellshock (CVE-2014-6271)| |932180|Restricted File Upload Attempt|
-### <a name="drs933-11"></a> PHP Attacks
+### <a name="drs933-11"></a> PHP attacks
|RuleId|Description| ||| |933100|PHP Injection Attack: PHP Open Tag Found|
The following rule groups and rules are available when using Web Application Fir
|933170|PHP Injection Attack: Serialized Object Injection| |933180|PHP Injection Attack: Variable Function Call Found|
-### <a name="drs941-11"></a> XSS - Cross-site Scripting
+### <a name="drs941-11"></a> XSS: Cross-site scripting
|RuleId|Description| |||
-|941100|XSS Attack Detected via libinjection|
-|941101|XSS Attack Detected via libinjection.<br />This rule detects requests with a *Referer* header.|
-|941110|XSS Filter - Category 1: Script Tag Vector|
-|941120|XSS Filter - Category 2: Event Handler Vector|
-|941130|XSS Filter - Category 3: Attribute Vector|
-|941140|XSS Filter - Category 4: JavaScript URI Vector|
-|941150|XSS Filter - Category 5: Disallowed HTML Attributes|
-|941160|NoScript XSS InjectionChecker: HTML Injection|
-|941170|NoScript XSS InjectionChecker: Attribute Injection|
-|941180|Node-Validator Blacklist Keywords|
+|941100|XSS Attack Detected via libinjection.|
+|941101|XSS Attack Detected via libinjection.<br />This rule detects requests with a `Referer` header.|
+|941110|XSS Filter - Category 1: Script Tag Vector.|
+|941120|XSS Filter - Category 2: Event Handler Vector.|
+|941130|XSS Filter - Category 3: Attribute Vector.|
+|941140|XSS Filter - Category 4: JavaScript URI Vector.|
+|941150|XSS Filter - Category 5: Disallowed HTML Attributes.|
+|941160|NoScript XSS InjectionChecker: HTML Injection.|
+|941170|NoScript XSS InjectionChecker: Attribute Injection.|
+|941180|Node-Validator Blacklist Keywords.|
|941190|IE XSS Filters - Attack Detected.| |941200|IE XSS Filters - Attack Detected.| |941210|IE XSS Filters - Attack Detected or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)) found.|
The following rule groups and rules are available when using Web Application Fir
|941290|IE XSS Filters - Attack Detected.| |941300|IE XSS Filters - Attack Detected.| |941310|US-ASCII Malformed Encoding XSS Filter - Attack Detected.|
-|941320|Possible XSS Attack Detected - HTML Tag Handler|
+|941320|Possible XSS Attack Detected - HTML Tag Handler.|
|941330|IE XSS Filters - Attack Detected.| |941340|IE XSS Filters - Attack Detected.| |941350|UTF-7 Encoding IE XSS - Attack Detected.| >[!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we’ll remove it from this article.
+> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-### <a name="drs942-11"></a> SQLI - SQL Injection
+### <a name="drs942-11"></a> SQLI: SQL injection
|RuleId|Description| |||
-|942100|SQL Injection Attack Detected via libinjection|
-|942110|SQL Injection Attack: Common Injection Testing Detected|
-|942120|SQL Injection Attack: SQL Operator Detected|
-|942140|SQL Injection Attack: Common DB Names Detected|
-|942150|SQL Injection Attack|
-|942160|Detects blind sqli tests using sleep() or benchmark().|
-|942170|Detects SQL benchmark and sleep injection attempts including conditional queries|
-|942180|Detects basic SQL authentication bypass attempts 1/3|
-|942190|Detects MSSQL code execution and information gathering attempts|
-|942200|Detects MySQL comment-/space-obfuscated injections and backtick termination|
-|942210|Detects chained SQL injection attempts 1/2|
-|942220|Looking for integer overflow attacks, these are taken from skipfish, except 3.0.00738585072007e-308 is the "magic number" crash|
-|942230|Detects conditional SQL injection attempts|
-|942240|Detects MySQL charset switch and MSSQL DoS attempts|
-|942250|Detects MATCH AGAINST, MERGE and EXECUTE IMMEDIATE injections|
-|942260|Detects basic SQL authentication bypass attempts 2/3|
-|942270|Looking for basic sql injection. Common attack string for mysql, oracle and others.|
-|942280|Detects Postgres pg_sleep injection, waitfor delay attacks and database shutdown attempts|
-|942290|Finds basic MongoDB SQL injection attempts|
-|942300|Detects MySQL comments, conditions and ch(a)r injections|
-|942310|Detects chained SQL injection attempts 2/2|
-|942320|Detects MySQL and PostgreSQL stored procedure/function injections|
-|942330|Detects classic SQL injection probings 1/3|
-|942340|Detects basic SQL authentication bypass attempts 3/3|
-|942350|Detects MySQL UDF injection and other data/structure manipulation attempts|
-|942360|Detects concatenated basic SQL injection and SQLLFI attempts|
-|942361|Detects basic SQL injection based on keyword alter or union|
-|942370|Detects classic SQL injection probings 2/3|
-|942380|SQL Injection Attack|
-|942390|SQL Injection Attack|
-|942400|SQL Injection Attack|
-|942410|SQL Injection Attack|
-|942430|Restricted SQL Character Anomaly Detection (args): # of special characters exceeded (12)|
+|942100|SQL Injection Attack Detected via libinjection.|
+|942110|SQL Injection Attack: Common Injection Testing Detected.|
+|942120|SQL Injection Attack: SQL Operator Detected.|
+|942140|SQL Injection Attack: Common DB Names Detected.|
+|942150|SQL Injection Attack.|
+|942160|Detects blind SQLI tests using sleep() or benchmark().|
+|942170|Detects SQL benchmark and sleep injection attempts including conditional queries.|
+|942180|Detects basic SQL authentication bypass attempts 1/3.|
+|942190|Detects MSSQL code execution and information gathering attempts.|
+|942200|Detects MySQL comment-/space-obfuscated injections and backtick termination.|
+|942210|Detects chained SQL injection attempts 1/2.|
+|942220|Looking for integer overflow attacks, these are taken from skipfish, except 3.0.00738585072007e-308 is the "magic number" crash.|
+|942230|Detects conditional SQL injection attempts.|
+|942240|Detects MySQL charset switch and MSSQL DoS attempts.|
+|942250|Detects MATCH AGAINST, MERGE and EXECUTE IMMEDIATE injections.|
+|942260|Detects basic SQL authentication bypass attempts 2/3.|
+|942270|Looking for basic SQL injection. Common attack string for MySQL, Oracle, and others.|
+|942280|Detects Postgres pg_sleep injection, waitfor delay attacks and database shutdown attempts.|
+|942290|Finds basic MongoDB SQL injection attempts.|
+|942300|Detects MySQL comments, conditions and ch(a)r injections.|
+|942310|Detects chained SQL injection attempts 2/2.|
+|942320|Detects MySQL and PostgreSQL stored procedure/function injections.|
+|942330|Detects classic SQL injection probings 1/3.|
+|942340|Detects basic SQL authentication bypass attempts 3/3.|
+|942350|Detects MySQL UDF injection and other data/structure manipulation attempts.|
+|942360|Detects concatenated basic SQL injection and SQLLFI attempts.|
+|942361|Detects basic SQL injection based on keyword alter or union.|
+|942370|Detects classic SQL injection probings 2/3.|
+|942380|SQL Injection Attack.|
+|942390|SQL Injection Attack.|
+|942400|SQL Injection Attack.|
+|942410|SQL Injection Attack.|
+|942430|Restricted SQL Character Anomaly Detection (args): # of special characters exceeded (12).|
|942440|SQL Comment Sequence Detected.|
-|942450|SQL Hex Encoding Identified|
-|942470|SQL Injection Attack|
-|942480|SQL Injection Attack|
+|942450|SQL Hex Encoding Identified.|
+|942470|SQL Injection Attack.|
+|942480|SQL Injection Attack.|
-### <a name="drs943-11"></a> SESSION-FIXATION
+### <a name="drs943-11"></a> Session fixation
|RuleId|Description| ||| |943100|Possible Session Fixation Attack: Setting Cookie Values in HTML| |943110|Possible Session Fixation Attack: SessionID Parameter Name with Off-Domain Referrer| |943120|Possible Session Fixation Attack: SessionID Parameter Name with No Referrer|
-### <a name="drs944-11"></a> JAVA Attacks
+### <a name="drs944-11"></a> Java attacks
|RuleId|Description| ||| |944100|Remote Command Execution: Suspicious Java class detected|
The following rule groups and rules are available when using Web Application Fir
|RuleId|Description| ||| |99031001|SQL Injection Attack: Common Injection Testing Detected|
-|99031002|SQL Comment Sequence Detected.|
+|99031002|SQL Comment Sequence Detected|
### <a name="drs99001-11"></a> MS-ThreatIntel-CVEs |RuleId|Description|
The following rule groups and rules are available when using Web Application Fir
## <a name="drs10"></a> 1.0 rule sets
-### <a name="drs921-10"></a> PROTOCOL-ATTACK
+### <a name="drs921-10"></a> Protocol attack
|RuleId|Description| ||| |921110|HTTP Request Smuggling Attack|
The following rule groups and rules are available when using Web Application Fir
|921151|HTTP Header Injection Attack via payload (CR/LF detected)| |921160|HTTP Header Injection Attack via payload (CR/LF and header-name detected)|
-### <a name="drs930-10"></a> LFI - Local File Inclusion
+### <a name="drs930-10"></a> LFI: Local file inclusion
|RuleId|Description| ||| |930100|Path Traversal Attack (/../)|
The following rule groups and rules are available when using Web Application Fir
|930120|OS File Access Attempt| |930130|Restricted File Access Attempt|
-### <a name="drs931-10"></a> RFI - Remote File Inclusion
+### <a name="drs931-10"></a> RFI: Remote file inclusion
|RuleId|Description| ||| |931100|Possible Remote File Inclusion (RFI) Attack: URL Parameter using IP Address|
The following rule groups and rules are available when using Web Application Fir
|931120|Possible Remote File Inclusion (RFI) Attack: URL Payload Used w/Trailing Question Mark Character (?)| |931130|Possible Remote File Inclusion (RFI) Attack: Off-Domain Reference/Link|
-### <a name="drs932-10"></a> RCE - Remote Command Execution
+### <a name="drs932-10"></a> RCE: Remote command execution
|RuleId|Description| ||| |932100|Remote Command Execution: Unix Command Injection|
The following rule groups and rules are available when using Web Application Fir
|932171|Remote Command Execution: Shellshock (CVE-2014-6271)| |932180|Restricted File Upload Attempt|
-### <a name="drs933-10"></a> PHP Attacks
+### <a name="drs933-10"></a> PHP attacks
|RuleId|Description| ||| |933100|PHP Injection Attack: Opening/Closing Tag Found|
The following rule groups and rules are available when using Web Application Fir
|933170|PHP Injection Attack: Serialized Object Injection| |933180|PHP Injection Attack: Variable Function Call Found|
-### <a name="drs941-10"></a> XSS - Cross-site Scripting
+### <a name="drs941-10"></a> XSS: Cross-site scripting
|RuleId|Description| |||
-|941100|XSS Attack Detected via libinjection|
-|941101|XSS Attack Detected via libinjection.<br />This rule detects requests with a *Referer* header.|
-|941110|XSS Filter - Category 1: Script Tag Vector|
-|941120|XSS Filter - Category 2: Event Handler Vector|
-|941130|XSS Filter - Category 3: Attribute Vector|
-|941140|XSS Filter - Category 4: JavaScript URI Vector|
-|941150|XSS Filter - Category 5: Disallowed HTML Attributes|
-|941160|NoScript XSS InjectionChecker: HTML Injection|
-|941170|NoScript XSS InjectionChecker: Attribute Injection|
-|941180|Node-Validator Blacklist Keywords|
-|941190|XSS Using style sheets|
-|941200|XSS using VML frames|
-|941210|IE XSS Filters - Attack Detected or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889))|
-|941220|XSS using obfuscated VB Script|
-|941230|XSS using 'embed' tag|
-|941240|XSS using 'import' or 'implementation' attribute|
+|941100|XSS Attack Detected via libinjection.|
+|941101|XSS Attack Detected via libinjection.<br />This rule detects requests with a `Referer` header.|
+|941110|XSS Filter - Category 1: Script Tag Vector.|
+|941120|XSS Filter - Category 2: Event Handler Vector.|
+|941130|XSS Filter - Category 3: Attribute Vector.|
+|941140|XSS Filter - Category 4: JavaScript URI Vector.|
+|941150|XSS Filter - Category 5: Disallowed HTML Attributes.|
+|941160|NoScript XSS InjectionChecker: HTML Injection.|
+|941170|NoScript XSS InjectionChecker: Attribute Injection.|
+|941180|Node-Validator Blacklist Keywords.|
+|941190|XSS Using style sheets.|
+|941200|XSS using VML frames.|
+|941210|IE XSS Filters - Attack Detected or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)).|
+|941220|XSS using obfuscated VB Script.|
+|941230|XSS using `embed` tag.|
+|941240|XSS using `import` or `implementation` attribute.|
|941250|IE XSS Filters - Attack Detected.|
-|941260|XSS using 'meta' tag|
-|941270|XSS using 'link' href|
-|941280|XSS using 'base' tag|
-|941290|XSS using 'applet' tag|
-|941300|XSS using 'object' tag|
+|941260|XSS using `meta` tag.|
+|941270|XSS using `link` href.|
+|941280|XSS using `base` tag.|
+|941290|XSS using `applet` tag.|
+|941300|XSS using `object` tag.|
|941310|US-ASCII Malformed Encoding XSS Filter - Attack Detected.|
-|941320|Possible XSS Attack Detected - HTML Tag Handler|
+|941320|Possible XSS Attack Detected - HTML Tag Handler.|
|941330|IE XSS Filters - Attack Detected.| |941340|IE XSS Filters - Attack Detected.| |941350|UTF-7 Encoding IE XSS - Attack Detected.| >[!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we’ll remove it from this article.
+> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-### <a name="drs942-10"></a> SQLI - SQL Injection
+### <a name="drs942-10"></a> SQLI: SQL injection
|RuleId|Description| |||
-|942100|SQL Injection Attack Detected via libinjection|
-|942110|SQL Injection Attack: Common Injection Testing Detected|
-|942120|SQL Injection Attack: SQL Operator Detected|
-|942140|SQL Injection Attack: Common DB Names Detected|
-|942150|SQL Injection Attack|
-|942160|Detects blind sqli tests using sleep() or benchmark().|
-|942170|Detects SQL benchmark and sleep injection attempts including conditional queries|
-|942180|Detects basic SQL authentication bypass attempts 1/3|
-|942190|Detects MSSQL code execution and information gathering attempts|
-|942200|Detects MySQL comment-/space-obfuscated injections and backtick termination|
-|942210|Detects chained SQL injection attempts 1/2|
-|942220|Looking for integer overflow attacks, these are taken from skipfish, except 3.0.00738585072007e-308 is the "magic number" crash|
-|942230|Detects conditional SQL injection attempts|
-|942240|Detects MySQL charset switch and MSSQL DoS attempts|
-|942250|Detects MATCH AGAINST, MERGE and EXECUTE IMMEDIATE injections|
-|942260|Detects basic SQL authentication bypass attempts 2/3|
-|942270|Looking for basic sql injection. Common attack string for mysql, oracle, and others.|
-|942280|Detects Postgres pg_sleep injection, waitfor delay attacks and database shutdown attempts|
-|942290|Finds basic MongoDB SQL injection attempts|
-|942300|Detects MySQL comments, conditions, and ch(a)r injections|
-|942310|Detects chained SQL injection attempts 2/2|
-|942320|Detects MySQL and PostgreSQL stored procedure/function injections|
-|942330|Detects classic SQL injection probings 1/2|
-|942340|Detects basic SQL authentication bypass attempts 3/3|
-|942350|Detects MySQL UDF injection and other data/structure manipulation attempts|
-|942360|Detects concatenated basic SQL injection and SQLLFI attempts|
-|942361|Detects basic SQL injection based on keyword alter or union|
-|942370|Detects classic SQL injection probings 2/2|
-|942380|SQL Injection Attack|
-|942390|SQL Injection Attack|
-|942400|SQL Injection Attack|
-|942410|SQL Injection Attack|
-|942430|Restricted SQL Character Anomaly Detection (args): # of special characters exceeded (12)|
+|942100|SQL Injection Attack Detected via libinjection.|
+|942110|SQL Injection Attack: Common Injection Testing Detected.|
+|942120|SQL Injection Attack: SQL Operator Detected.|
+|942140|SQL Injection Attack: Common DB Names Detected.|
+|942150|SQL Injection Attack.|
+|942160|Detects blind SQLI tests using sleep() or benchmark().|
+|942170|Detects SQL benchmark and sleep injection attempts including conditional queries.|
+|942180|Detects basic SQL authentication bypass attempts 1/3.|
+|942190|Detects MSSQL code execution and information gathering attempts.|
+|942200|Detects MySQL comment-/space-obfuscated injections and backtick termination.|
+|942210|Detects chained SQL injection attempts 1/2.|
+|942220|Looking for integer overflow attacks, these are taken from skipfish, except 3.0.00738585072007e-308 is the "magic number" crash.|
+|942230|Detects conditional SQL injection attempts.|
+|942240|Detects MySQL charset switch and MSSQL DoS attempts.|
+|942250|Detects MATCH AGAINST, MERGE and EXECUTE IMMEDIATE injections.|
+|942260|Detects basic SQL authentication bypass attempts 2/3.|
+|942270|Looking for basic SQL injection. Common attack string for MySQL, Oracle, and others.|
+|942280|Detects Postgres pg_sleep injection, waitfor delay attacks and database shutdown attempts.|
+|942290|Finds basic MongoDB SQL injection attempts.|
+|942300|Detects MySQL comments, conditions, and ch(a)r injections.|
+|942310|Detects chained SQL injection attempts 2/2.|
+|942320|Detects MySQL and PostgreSQL stored procedure/function injections.|
+|942330|Detects classic SQL injection probings 1/2.|
+|942340|Detects basic SQL authentication bypass attempts 3/3.|
+|942350|Detects MySQL UDF injection and other data/structure manipulation attempts.|
+|942360|Detects concatenated basic SQL injection and SQLLFI attempts.|
+|942361|Detects basic SQL injection based on keyword alter or union.|
+|942370|Detects classic SQL injection probings 2/2.|
+|942380|SQL Injection Attack.|
+|942390|SQL Injection Attack.|
+|942400|SQL Injection Attack.|
+|942410|SQL Injection Attack.|
+|942430|Restricted SQL Character Anomaly Detection (args): # of special characters exceeded (12).|
|942440|SQL Comment Sequence Detected.|
-|942450|SQL Hex Encoding Identified|
-|942470|SQL Injection Attack|
-|942480|SQL Injection Attack|
+|942450|SQL Hex Encoding Identified.|
+|942470|SQL Injection Attack.|
+|942480|SQL Injection Attack.|
-### <a name="drs943-10"></a> SESSION-FIXATION
+### <a name="drs943-10"></a> Session fixation
|RuleId|Description| ||| |943100|Possible Session Fixation Attack: Setting Cookie Values in HTML| |943110|Possible Session Fixation Attack: SessionID Parameter Name with Off-Domain Referrer| |943120|Possible Session Fixation Attack: SessionID Parameter Name with No Referrer|
-### <a name="drs944-10"></a> JAVA Attacks
+### <a name="drs944-10"></a> Java attacks
|RuleId|Description| ||| |944100|Remote Command Execution: Apache Struts, Oracle WebLogic|
The following rule groups and rules are available when using Web Application Fir
|99001015|Attempted Spring Framework unsafe class object exploitation [CVE-2022-22965](https://www.cve.org/CVERecord?id=CVE-2022-22965)| |99001016|Attempted Spring Cloud Gateway Actuator injection [CVE-2022-22947](https://www.cve.org/CVERecord?id=CVE-2022-22947)| - # [Bot rules](#tab/bot)
-## <a name="bot"></a> Bot Manager rule sets
+## <a name="bot"></a> Bot manager rule sets
### <a name="bot100"></a> Bad bots |RuleId|Description|
The following rule groups and rules are available when using Web Application Fir
|Bot100100|Malicious bots detected by threat intelligence| |Bot100200|Malicious bots that have falsified their identity|
- Bot100100 scans both client IP addresses and IPs in the X-Forwarded-For header.
+ Bot100100 scans both client IP addresses and IPs in the `X-Forwarded-For` header.
### <a name="bot200"></a> Good bots |RuleId|Description|
The following rule groups and rules are available when using Web Application Fir
||| |Bot300100|Unspecified identity| |Bot300200|Tools and frameworks for web crawling and attacks|
-|Bot300300|General purpose HTTP clients and SDKs|
+|Bot300300|General-purpose HTTP clients and SDKs|
|Bot300400|Service agents| |Bot300500|Site health monitoring services| |Bot300600|Unknown bots detected by threat intelligence| |Bot300700|Other bots|
-Bot300600 scans both client IP addresses and IPs in the X-Forwarded-For header.
+Bot300600 scans both client IP addresses and IPs in the `X-Forwarded-For` header.
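If you manage the policy with Azure PowerShell, you can confirm which bot manager rule set versions are available before you attach one by listing the managed rule set definitions. This is a minimal sketch that assumes the Az.FrontDoor module is installed:
```azurepowershell-interactive
# List the managed rule set definitions (DRS and bot manager) available to the
# subscription so you can confirm the exact type and version strings for a policy.
Get-AzFrontDoorWafManagedRuleSetDefinition | Format-List
```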
- ## Next steps -- [Custom rules for Web Application Firewall with Azure Front Door](waf-front-door-custom-rules.md)
+- [Custom rules for Azure Web Application Firewall on Azure Front Door](waf-front-door-custom-rules.md)
- [Learn more about Azure network security](../../networking/security/index.yml)
web-application-firewall Waf Front Door Tutorial Geo Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-tutorial-geo-filtering.md
Title: Configure geo-filtering web application firewall policy for Azure Front Door service
-description: In this tutorial, you learn how to create a geo-filtering policy and associate the policy with your existing Front Door frontend host.
+ Title: Configure a geo-filtering WAF policy for Azure Front Door
+description: In this tutorial, you learn how to create a geo-filtering policy and associate the policy with your existing Azure Front Door front-end host.
-# Set up a geo-filtering WAF policy for your Front Door
+# Set up a geo-filtering WAF policy for Azure Front Door
-This tutorial shows how to use Azure PowerShell to create a sample geo-filtering policy and associate the policy with your existing Front Door frontend host. This sample geo-filtering policy will block requests from all other countries/regions except United States.
+This tutorial shows how to use Azure PowerShell to create a sample geo-filtering policy and associate the policy with your existing Azure Front Door front-end host. This sample geo-filtering policy blocks requests from all other countries or regions except the United States.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) now. ## Prerequisites
-Before you begin to set up a geo-filter policy, set up your PowerShell environment and create a Front Door profile.
+Before you begin to set up a geo-filter policy, set up your PowerShell environment and create an Azure Front Door profile.
+ ### Set up your PowerShell environment
-Azure PowerShell provides a set of cmdlets that use the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) model for managing your Azure resources.
+Azure PowerShell provides a set of cmdlets that use the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) model for managing your Azure resources.
-You can install [Azure PowerShell](/powershell/azure/) on your local machine and use it in any PowerShell session. Follow the instructions on the page, to sign in with your Azure credentials, and install the Az PowerShell module.
+You can install [Azure PowerShell](/powershell/azure/) on your local machine and use it in any PowerShell session. Follow the instructions on the page to sign in with your Azure credentials. Then install the Az PowerShell module.
-#### Connect to Azure with an interactive dialog for sign in
+#### Connect to Azure with an interactive dialog for sign-in
``` Install-Module -Name Az Connect-AzAccount ```
-Make sure you have the current version of PowerShellGet installed. Run below command and reopen PowerShell.
+
+Make sure you have the current version of PowerShellGet installed. Run the following command and reopen PowerShell.
``` Install-Module PowerShellGet -Force -AllowClobber ```
-#### Install Az.FrontDoor module
+
+#### Install the Az.FrontDoor module
``` Install-Module -Name Az.FrontDoor ```
-### Create a Front Door profile
+### Create an Azure Front Door profile
-Create a Front Door profile by following the instructions described in [Quickstart: Create a Front Door profile](../../frontdoor/quickstart-create-front-door.md).
+Create an Azure Front Door profile by following the instructions described in [Quickstart: Create an Azure Front Door profile](../../frontdoor/quickstart-create-front-door.md).
-## Define geo-filtering match condition
+## Define a geo-filtering match condition
-Create a sample match condition that selects requests not coming from "US" using [New-AzFrontDoorWafMatchConditionObject](/powershell/module/az.frontdoor/new-azfrontdoorwafmatchconditionobject) on parameters when creating a match condition.
-Two letter country/region codes to country/region mapping are provided in [What is geo-filtering on a domain for Azure Front Door?](waf-front-door-geo-filtering.md).
+Create a sample match condition that selects requests not coming from "US" by using [New-AzFrontDoorWafMatchConditionObject](/powershell/module/az.frontdoor/new-azfrontdoorwafmatchconditionobject) on parameters when you create a match condition.
+
+A mapping of two-letter country or region codes to countries or regions is provided in [What is geo-filtering on a domain for Azure Front Door?](waf-front-door-geo-filtering.md).
```azurepowershell-interactive $nonUSGeoMatchCondition = New-AzFrontDoorWafMatchConditionObject `
$nonUSGeoMatchCondition = New-AzFrontDoorWafMatchConditionObject `
-NegateCondition $true ` -MatchValue "US" ```
-
-## Add geo-filtering match condition to a rule with Action and Priority
-Create a CustomRule object `nonUSBlockRule` based on the match condition, an Action, and a Priority using [New-AzFrontDoorWafCustomRuleObject](/powershell/module/az.frontdoor/new-azfrontdoorwafcustomruleobject). A CustomRule can have multiple MatchCondition. In this example, Action is set to Block and Priority to 1, the highest priority.
+## Add a geo-filtering match condition to a rule with an action and a priority
+
+Create a `CustomRule` object `nonUSBlockRule` based on the match condition, an action, and a priority by using [New-AzFrontDoorWafCustomRuleObject](/powershell/module/az.frontdoor/new-azfrontdoorwafcustomruleobject). A custom rule can have multiple match conditions. In this example, `Action` is set to `Block`. `Priority` is set to `1`, which is the highest priority.
``` $nonUSBlockRule = New-AzFrontDoorWafCustomRuleObject `
$nonUSBlockRule = New-AzFrontDoorWafCustomRuleObject `
## Add rules to a policy
-Find the name of the resource group that contains the Front Door profile using `Get-AzResourceGroup`. Next, create a `geoPolicy` policy object containing `nonUSBlockRule` using [New-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/new-azfrontdoorwafpolicy) in the specified resource group that contains the Front Door profile. You must provide a unique name for the geo policy.
+Find the name of the resource group that contains the Azure Front Door profile by using `Get-AzResourceGroup`. Next, create a `geoPolicy` object that contains `nonUSBlockRule` by using [New-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/new-azfrontdoorwafpolicy) in the specified resource group that contains the Azure Front Door profile. You must provide a unique name for the geo policy.
-The following example uses the Resource Group name *myResourceGroupFD1* with the assumption that you've created the Front Door profile using instructions provided in the [Quickstart: Create a Front Door](../../frontdoor/quickstart-create-front-door.md) article. In the below example, replace the policy name *geoPolicyAllowUSOnly* with a unique policy name.
+The following example uses the resource group name `myResourceGroupFD1` with the assumption that you've created the Azure Front Door profile by using instructions provided in [Quickstart: Create an Azure Front Door](../../frontdoor/quickstart-create-front-door.md). In the following example, replace the policy name `geoPolicyAllowUSOnly` with a unique policy name.
``` $geoPolicy = New-AzFrontDoorWafPolicy `
$geoPolicy = New-AzFrontDoorWafPolicy `
-EnabledState Enabled ```
-## Link WAF policy to a Front Door frontend host
+## Link a WAF policy to an Azure Front Door front-end host
-Link the WAF policy object to the existing Front Door frontend host and update Front Door properties.
+Link the WAF policy object to the existing Azure Front Door front-end host. Update Azure Front Door properties.
-To do so, first retrieve your Front Door object using [Get-AzFrontDoor](/powershell/module/az.frontdoor/get-azfrontdoor).
+To do so, first retrieve your Azure Front Door object by using [Get-AzFrontDoor](/powershell/module/az.frontdoor/get-azfrontdoor).
``` $geoFrontDoorObjectExample = Get-AzFrontDoor -ResourceGroupName myResourceGroupFD1 $geoFrontDoorObjectExample[0].FrontendEndpoints[0].WebApplicationFirewallPolicyLink = $geoPolicy.Id ```
-Next, set the frontend WebApplicationFirewallPolicyLink property to the resourceId of the `geoPolicy`using [Set-AzFrontDoor](/powershell/module/az.frontdoor/set-azfrontdoor).
+Next, set the front-end `WebApplicationFirewallPolicyLink` property to the resource ID of the geo policy by using [Set-AzFrontDoor](/powershell/module/az.frontdoor/set-azfrontdoor).
``` Set-AzFrontDoor -InputObject $geoFrontDoorObjectExample[0] ```
-> [!NOTE]
-> You only need to set WebApplicationFirewallPolicyLink property once to link a WAF policy to a Front Door frontend host. Subsequent policy updates are automatically applied to the frontend host.
+> [!NOTE]
+> You only need to set the `WebApplicationFirewallPolicyLink` property once to link a WAF policy to an Azure Front Door front-end host. Subsequent policy updates are automatically applied to the front-end host.
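As an optional check, you can read the link back from the Azure Front Door object to confirm the association. A minimal sketch, reusing the object and policy names from this tutorial:
```azurepowershell-interactive
# Confirm that the front-end host references the geo-filtering policy by reading the
# WebApplicationFirewallPolicyLink property back from the Front Door object.
$frontDoor = Get-AzFrontDoor -ResourceGroupName "myResourceGroupFD1"
$frontDoor[0].FrontendEndpoints[0].WebApplicationFirewallPolicyLink
```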
## Next steps -- Learn about [Azure web application firewall](../overview.md).-- Learn how to [create a Front Door](../../frontdoor/quickstart-create-front-door.md).
+- Learn about [Azure Web Application Firewall](../overview.md).
+- Learn how to [create an instance of Azure Front Door](../../frontdoor/quickstart-create-front-door.md).