Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Azure AD B2C Global Identity Funnel Based Design | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-ad-b2c-global-identity-funnel-based-design.md | -In this article, we describe the scenarios for funnel-based design approach. Before starting to design, it's recommended that you review the [capabilities](azure-ad-b2c-global-identity-solutions.md#capabilities-and-considerations), and [performance](azure-ad-b2c-global-identity-solutions.md#performance) of both funnel and region-based design approach. +In this article, we describe the scenarios for the funnel-based design approach. Before starting to design, it's recommended that you review the [capabilities](azure-ad-b2c-global-identity-solutions.md#capabilities-and-considerations) and [performance](azure-ad-b2c-global-identity-solutions.md#performance) of both the funnel-based and region-based design approaches. This article will further help determine which design may fit best for your organization. The designs account for: This use case demonstrates how a user from their home country/region performs a  -1. User from Europe, Middle East, and Africa (EMEA) attempts to sign up at **myapp.fr**. If the user isn't being sent to their local hostname, the traffic manager will enforce a redirect. +1. A user from Europe, Middle East, and Africa (EMEA) attempts to sign up at **myapp.fr**. If the user isn't being sent to their local application instance, the traffic manager will enforce a redirect. -1. User reaches the Global Funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on defined criteria using OpenId federation. This can be a lookup based on Application clientId. +1. The user reaches the Global Funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on defined criteria using OpenId federation. This can be a lookup based on Application clientId. 1. 
The user attempts to sign up. The sign-up process checks the global lookup table to determine if the user exists in any of the regional Azure AD B2C tenants. This use case demonstrates how a user re-registering the same email from their o  -1. User from EMEA attempts to sign up at **myapp.fr**. If the user isn't being sent to their local hostname, the traffic manager will enforce a redirect. +1. A user from EMEA attempts to sign up at **myapp.fr**. If the user isn't being sent to their local application instance, the traffic manager will enforce a redirect. -1. User reaches the Global Funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on Application clientId. +1. The user reaches the Global Funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on Application clientId. 1. The user attempts to sign up. The sign-up process checks the global lookup table to determine if the user exists in any of the regional Azure AD B2C tenants. This use case demonstrates how a user from their home country/region performs a  -1. User from EMEA attempts to sign in at **myapp.fr**. If the user isn't being sent to their local hostname, the traffic manager will enforce a redirect. +1. A user from EMEA attempts to sign in at **myapp.fr**. If the user isn't being sent to their local application instance, the traffic manager will enforce a redirect. -1. User reaches the global funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on application clientId. +1. The user reaches the global funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. 
This can be a lookup based on application clientId. -1. User enters their credentials at the regional tenant. +1. The user enters their credentials at the regional tenant. 1. The regional tenant issues a token back to the funnel tenant. This use case demonstrates how a user can travel across regions and maintain the  -1. User from North America (NOAM) attempts to sign in at **myapp.fr** since there's a holiday in France. If the user isn't being sent to their local hostname, the traffic manager will enforce a redirect. +1. A user from North America (NOAM) attempts to sign in at **myapp.fr** while they are on holiday in France. If the user isn't being sent to their local application instance, the traffic manager will enforce a redirect. -1. User reaches the global funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on Application clientId. +1. The user reaches the global funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on Application clientId. -1. User enters their credentials at the regional tenant. +1. The user enters their credentials at the regional tenant. 1. The regional tenant performs a lookup into the global lookup table, since the user's email wasn't found in the EMEA Azure AD B2C directory. This use case demonstrates how a user can reset their password when they are wit  -1. User from EMEA attempts to sign in at **myapp.fr**. If the user isn't being sent to their local hostname, the traffic manager will enforce a redirect. +1. A user from EMEA attempts to sign in at **myapp.fr**. If the user isn't being sent to their local application instance, the traffic manager will enforce a redirect. -1. User reaches the global funnel Azure AD B2C tenant. 
This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on application clientId. +1. The user reaches the global funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on application clientId. 1. The user arrives at the EMEA Azure AD B2C tenant and selects **forgot password**. The user enters and verifies their email. This use case demonstrates how a user can reset their password when they're trav  -1. User from NOAM attempts to sign in at **myapp.fr** since they are on holiday in France. If the user isn't being sent to their local hostname, the traffic manager will enforce a redirect. +1. A user from NOAM attempts to sign in at **myapp.fr** since they are on holiday in France. If the user isn't being sent to their local application instance, the traffic manager will enforce a redirect. -1. User reaches the global funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on application clientId. +1. The user reaches the global funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on application clientId. 1. The user arrives at the EMEA Azure AD B2C tenant and selects **forgot password**. The user enters and verifies their email. This use case demonstrates how a user can change their password after they've lo  -1. User from EMEA attempts selects **change password** after logging into **myapp.fr**. +1. A user from EMEA selects **change password** after logging into **myapp.fr**. -1. User reaches the global funnel Azure AD B2C tenant. 
This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on application clientId. +1. The user reaches the global funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on application clientId. 1. The user arrives at the EMEA Azure AD B2C tenant, and the Single Sign-On (SSO) cookie set allows the user to change their password immediately. This use case demonstrates how a user can change their password after they've lo  -1. User from NOAM attempts **change password** after logging into **myapp.fr**. +1. A user from NOAM selects **change password** after logging into **myapp.fr**. -1. User reaches the global funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on application clientId. +1. The user reaches the global funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on application clientId. 1. The user arrives at the EMEA Azure AD B2C tenant, and the SSO cookie set allows the user to change their password immediately. The following use cases show examples of using federated identities to sign up o ### Local federated ID sign-up -This use case demonstrates how a user from their local region signs up to the service using a federated ID. +This use case demonstrates how a user can sign up to the service from their local region using a federated ID.  -1. User from EMEA attempts to sign up at **myapp.fr**. 
If the user isn't being sent to their local application instance, the traffic manager will enforce a redirect. -1. User reaches the global funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on application clientId. +1. The user reaches the global funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on application clientId. -1. User selects to sign in with a federated Identity Provider (IdP). +1. The user selects to sign in with a federated Identity Provider (IdP). 1. Perform a lookup into the global lookup table. * **If account linking is in scope**: Proceed if neither the federated IdP identifier nor the email that came back from the federated IdP exists in the lookup table. This use case demonstrates how a user from their local region signs into the ser  -1. User from EMEA attempts to sign in at **myapp.fr**. If the user isn't being sent to their local hostname, the traffic manager will enforce a redirect. +1. A user from EMEA attempts to sign in at **myapp.fr**. If the user isn't being sent to their local application instance, the traffic manager will enforce a redirect. -2. User reaches the global funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on Application clientId. +2. The user reaches the global funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on Application clientId. -3. User selects to sign in with a federated identity provider. +3. The user selects to sign in with a federated identity provider. 4. 
Perform a lookup into the global lookup table and confirm the user's federated ID is registered in EMEA. This use case demonstrates how a user from their local region signs into the ser ### Traveling federated user sign-in -This use case demonstrates how a user located away from the region in which they signed up signs into the service using a federated IdP. +This use case demonstrates how a user can sign into their account with a federated IdP, while located away from the region in which they signed up.  -1. User from NOAM attempts to sign in at **myapp.fr**. If the user isn't being sent to their local hostname, the traffic manager will enforce a redirect. +1. A user from NOAM attempts to sign in at **myapp.fr**. If the user isn't being sent to their local application instance, the traffic manager will enforce a redirect. -1. User reaches the global funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on Application clientId. +1. The user reaches the global funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on Application clientId. -1. User selects to sign in with a federated identity provider. +1. The user selects to sign in with a federated identity provider. >[!NOTE] >Use the same App Id from the App Registration at the Social IdP across all Azure AD B2C regional tenants. This ensures that the ID coming back from the Social IdP is always the same. This use case demonstrates how a user located away from the region in which they ### Account linking with matching criteria -This use case demonstrates how users are able to perform account linking when matching criteria is satisfied. The matching criteria is typically the users email address. 
+This use case demonstrates how users are able to perform account linking when matching criteria is satisfied. The matching criteria is typically the user's email address. When the matching criteria of a sign-in from a new identity provider has the same value for an existing account in Azure AD B2C, the account linking process can begin.  -1. User from EMEA attempts to sign in at **myapp.fr**. If the user isn't being sent to their local hostname, the traffic manager will enforce a redirect. +1. A user from EMEA attempts to sign in at **myapp.fr**. If the user isn't being sent to their local application instance, the traffic manager will enforce a redirect. -1. User reaches the global funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on Application clientId. +1. The user reaches the global funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on Application clientId. -1. User selects to sign in with a federated identity provider/social IdP. +1. The user selects to sign in with a federated identity provider/social IdP. 1. A lookup is performed into the global lookup table for the ID returned from the federated IdP. This use case demonstrates how users are able to perform account linking when ma ### Traveling user account linking with matching criteria -This use case demonstrates how non-local users are able to perform account linking when matching criteria is satisfied. The matching criteria is typically the users email address. +This use case demonstrates how non-local users are able to perform account linking when matching criteria is satisfied. The matching criteria is typically the user's email address. 
When the matching criteria of a sign-in from a new identity provider has the same value for an existing account in Azure AD B2C, the account linking process can begin.  -1. User from NOAM attempts to sign in at **myapp.fr**. If the user isn't being sent to their local hostname, the traffic manager will enforce a redirect. +1. A user from NOAM attempts to sign in at **myapp.fr**. If the user isn't being sent to their local application instance, the traffic manager will enforce a redirect. -1. User reaches the Global Funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on Application clientId. +1. The user reaches the Global Funnel Azure AD B2C tenant. This tenant is configured to redirect to a regional Azure AD B2C tenant based on some criteria using OpenId federation. This can be a lookup based on Application clientId. -1. User selects to sign in with a federated identity provider/social IdP. +1. The user selects to sign in with a federated identity provider/social IdP. 1. A lookup is performed into the global lookup table for the ID returned from the federated IdP. -1. Where the ID doesn't exist, and the email from the federated IdP exists in another region -this is a traveling user account linking use case. +1. Where the ID doesn't exist, and the email from the federated IdP exists in another region - this is a traveling user account linking use case. 1. Create an id_token_hint link asserting the user's currently collected claims. Bootstrap a journey into the NOAM Azure AD B2C tenant using federation. The user will prove that they own the account via the NOAM Azure AD B2C tenant. >[!NOTE] |
active-directory-b2c | Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-monitor.md | In summary, you'll use Azure Lighthouse to allow a user or group in your Azure A First, create or choose a resource group that contains the destination Log Analytics workspace that will receive data from Azure AD B2C. You'll specify the resource group name when you deploy the Azure Resource Manager template. 1. Sign in to the [Azure portal](https://portal.azure.com).-1. Make sure you're using the directory that contains your Azure AD tenant. Select the **Directories + subscriptions** icon in the portal toolbar. +1. Make sure you're using the directory that contains your *Azure AD* tenant. Select the **Directories + subscriptions** icon in the portal toolbar. 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch**. 1. [Create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) or choose an existing one. This example uses a resource group named _azure-ad-b2c-monitor_. First, create or choose a resource group that contains the destination Log Anal A **Log Analytics workspace** is a unique environment for Azure Monitor log data. You'll use this Log Analytics workspace to collect data from Azure AD B2C [audit logs](view-audit-logs.md), and then visualize it with queries and workbooks, or create alerts. 1. Sign in to the [Azure portal](https://portal.azure.com).-1. Make sure you're using the directory that contains your Azure AD tenant. Select the **Directories + subscriptions** icon in the portal toolbar. +1. Make sure you're using the directory that contains your *Azure AD* tenant. Select the **Directories + subscriptions** icon in the portal toolbar. 1. 
On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch**. 1. [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md). This example uses a Log Analytics workspace named _AzureAdB2C_, in a resource group named _azure-ad-b2c-monitor_. In this step, you choose your Azure AD B2C tenant as a **service provider**. You First, get the **Tenant ID** of your Azure AD B2C directory (also known as the directory ID). 1. Sign in to the [Azure portal](https://portal.azure.com/).-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar. +1. Make sure you're using the directory that contains your *Azure AD B2C* tenant. Select the **Directories + subscriptions** icon in the portal toolbar. 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**. 1. Select **Azure Active Directory**, select **Overview**. 1. Record the **Tenant ID**. To make management easier, we recommend using Azure AD user _groups_ for each ro To create the custom authorization and delegation in Azure Lighthouse, we use an Azure Resource Manager template. This template grants Azure AD B2C access to the Azure AD resource group, which you created earlier, for example, _azure-ad-b2c-monitor_. Deploy the template from the GitHub sample by using the **Deploy to Azure** button, which opens the Azure portal and lets you configure and deploy the template directly in the portal. For these steps, make sure you're signed in to your Azure AD tenant (not the Azure AD B2C tenant). 1. Sign in to the [Azure portal](https://portal.azure.com).-1. Make sure you're using the directory that contains your Azure AD tenant. Select the **Directories + subscriptions** icon in the portal toolbar. +1. 
Make sure you're using the directory that contains your *Azure AD tenant*. Select the **Directories + subscriptions** icon in the portal toolbar. 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch**. 1. Use the **Deploy to Azure** button to open the Azure portal and deploy the template directly in the portal. For more information, see [create an Azure Resource Manager template](../lighthouse/how-to/onboard-customer.md#create-an-azure-resource-manager-template). You're ready to [create diagnostic settings](../active-directory/reports-monitor To configure monitoring settings for Azure AD B2C activity logs: -1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure AD B2C administrative account. This account must be a member of the security group you specified in the [Select a security group](#32-select-a-security-group) step. +1. Sign in to the [Azure portal](https://portal.azure.com/) with your *Azure AD B2C* administrative account. This account must be a member of the security group you specified in the [Select a security group](#32-select-a-security-group) step. 1. Make sure you're using the directory that contains your Azure AD B2C tenant: 1. Select the **Directories + subscriptions** icon in the portal toolbar. 2. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**. Now you can configure your Log Analytics workspace to visualize your data and co Log queries help you to fully use the value of the data collected in Azure Monitor Logs. A powerful query language allows you to join data from multiple tables, aggregate large sets of data, and perform complex operations with minimal code. Virtually any question can be answered and analysis performed as long as the supporting data has been collected, and you understand how to construct the right query. 
For more information, see [Get started with log queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md). +1. Sign in to the [Azure portal](https://portal.azure.com). +1. Make sure you're using the directory that contains your *Azure AD* tenant. Select the **Directories + subscriptions** icon in the portal toolbar. 1. From the **Log Analytics workspace** window, select **Logs**. 1. In the query editor, paste the following [Kusto Query Language](/azure/data-explorer/kusto/query/) query. This query shows policy usage by operation over the past x days. The default duration is set to 90 days (90d). Notice that the query is focused only on the operation where a token/code is issued by policy. Workbooks provide a flexible canvas for data analysis and the creation of rich v Follow the instructions below to create a new workbook using a JSON Gallery Template. This workbook provides a **User Insights** and **Authentication** dashboard for the Azure AD B2C tenant. +1. Sign in to the [Azure portal](https://portal.azure.com). +1. Make sure you're using the directory that contains your *Azure AD* tenant. Select the **Directories + subscriptions** icon in the portal toolbar. 1. From the **Log Analytics workspace** window, select **Workbooks**. 1. From the toolbar, select the **+ New** option to create a new workbook. 1. On the **New workbook** page, select the **Advanced Editor** using the **</>** option on the toolbar. Alerts are created by alert rules in Azure Monitor and can automatically run sav Use the following instructions to create a new Azure Alert, which will send an [email notification](../azure-monitor/alerts/action-groups.md#configure-notifications) whenever there's a 25% drop in the **Total Requests** compared to the previous period. The alert will run every 5 minutes and look for a drop in the last hour compared to the hour before it. The alerts are created using Kusto query language. +1. Sign in to the [Azure portal](https://portal.azure.com). +1. 
Make sure you're using the directory that contains your *Azure AD* tenant. Select the **Directories + subscriptions** icon in the portal toolbar. 1. From **Log Analytics workspace**, select **Logs**. 1. Create a new **Kusto query** by using the query below. |
active-directory | Concept Authentication Passwordless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-passwordless.md | The following process is used when a user signs in with a FIDO2 security key: ### FIDO2 security key providers -The following providers offer FIDO2 security keys of different form factors that are known to be compatible with the passwordless experience. We encourage you to evaluate the security properties of these keys by contacting the vendor as well as FIDO Alliance. +The following providers offer FIDO2 security keys of different form factors that are known to be compatible with the passwordless experience. We encourage you to evaluate the security properties of these keys by contacting the vendor as well as the [FIDO Alliance](https://fidoalliance.org/). | Provider | Biometric | USB | NFC | BLE | FIPS Certified | Contact | ||:--:|::|::|::|:--:|--| |
active-directory | Howto Authentication Passwordless Security Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key.md | There are some optional settings on the **Configure** tab to help manage how sec  - **Allow self-service set up** should remain set to **Yes**. If set to no, your users won't be able to register a FIDO key through the MySecurityInfo portal, even if enabled by Authentication Methods policy. -- **Enforce attestation** setting to **Yes** requires the FIDO security key metadata to be published and verified with the FIDO Alliance Metadata Service, and also pass Microsoft's additional set of validation testing. For more information, see [What is a Microsoft-compatible security key?](/windows/security/identity-protection/hello-for-business/microsoft-compatible-security-key)+- **Enforce attestation** setting to **Yes** requires the FIDO security key metadata to be published and verified with the FIDO Alliance Metadata Service, and also pass Microsoft's additional set of validation testing. For more information, see [What is a Microsoft-compatible security key?](concept-authentication-passwordless.md#fido2-security-key-providers) **Key Restriction Policy** |
active-directory | How To Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-install.md | For reference, your code should look like the following snippet: </configuration> ``` -For more information about security and FIPS, see [Azure AD password hash sync, encryption, and FIPS compliance](https://blogs.technet.microsoft.com/enterprisemobility/2014/06/28/aad-password-sync-encryption-and-fips-compliance/). -+For information about security and FIPS, see [Azure AD password hash sync, encryption, and FIPS compliance](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/aad-password-sync-encryption-and-fips-compliance/ba-p/243709). ## Next steps |
active-directory | Concept Conditional Access Conditions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md | We don't support selecting macOS or Linux device platforms when selecting **Requ ## Locations -When configuring location as a condition, organizations can choose to include or exclude locations. These named locations may include the public IPv4 network information, country or region, or even unknown areas that don't map to specific countries or regions. Only IP ranges can be marked as a trusted location. +When configuring location as a condition, organizations can choose to include or exclude locations. These named locations may include the public IPv4 or IPv6 network information, country or region, or even unknown areas that don't map to specific countries or regions. Only IP ranges can be marked as a trusted location. When including **any location**, this option includes any IP address on the internet not just configured named locations. When selecting **any location**, administrators can choose to exclude **all trusted** or **selected locations**. |
active-directory | Concept Continuous Access Evaluation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md | This process enables the scenario where users lose access to organizational file > \* Token lifetimes for Office web apps are reduced to 1 hour when a Conditional Access policy is set. +> [!NOTE] +> Teams is made up of multiple services and among these, the calls and chat services don't adhere to IP-based Conditional Access policies. + ## Client Capabilities ### Client-side claim challenge -Before continuous access evaluation, clients would replay the access token from its cache as long as it hadn't expired. With CAE, we introduce a new case where a resource provider can reject a token when it isn't expired. To inform clients to bypass their cache even though the cached tokens haven't expired, we introduce a mechanism called **claim challenge** to indicate that the token was rejected and a new access token need to be issued by Azure AD. CAE requires a client update to understand claim challenge. The latest version of the following applications below support claim challenge: +Before continuous access evaluation, clients would replay the access token from its cache as long as it hadn't expired. With CAE, we introduce a new case where a resource provider can reject a token when it isn't expired. To inform clients to bypass their cache even though the cached tokens haven't expired, we introduce a mechanism called **claim challenge** to indicate that the token was rejected and a new access token needs to be issued by Azure AD. CAE requires a client update to understand claim challenge. 
The latest version of the following applications support claim challenge: | | Web | Win32 | iOS | Android | Mac | | : | :: | :: | :: | :: | :: | The following table summarizes Conditional Access and CAE feature behaviors and | Network Type | Example | IPs seen by Azure AD | IPs seen by RP | Applicable CA Configuration (Trusted Named Location) | CAE enforcement | CAE access token | Recommendations | ||||||||| | 1. Egress IPs are dedicated and enumerable for both Azure AD and all RPs traffic | All network traffic to Azure AD and RPs egresses through 1.1.1.1 and/or 2.2.2.2 | 1.1.1.1 | 2.2.2.2 | 1.1.1.1 <br> 2.2.2.2 | Critical Events <br> IP location Changes | Long lived – up to 28 hours | If CA Named Locations are defined, ensure that they contain all possible egress IPs (seen by Azure AD and all RPs) |-| 2. Egress IPs are dedicated and enumerable for Azure AD, but not for RPs traffic | Network traffic to Azure AD egresses through 1.1.1.1. RP traffic egresses through x.x.x.x | 1.1.1.1 | x.x.x.x | 1.1.1.1 | Critical Events | Default access token lifetime – 1 hour | Do not add non dedicated or non-enumerable egress IPs (x.x.x.x) into Trusted Named Location CA rules as it can weaken security | -| 3. Egress IPs are non-dedicated/shared or not enumerable for both Azure AD and RPs traffic | Network traffic to Azure AD egresses through y.y.y.y. RP traffic egresses through x.x.x.x | y.y.y.y | x.x.x.x | N/A -no IP CA policies/Trusted Locations configured | Critical Events | Long lived – up to 28 hours | Do not add non dedicated or non-enumerable egress IPs (x.x.x.x/y.y.y.y) into Trusted Named Location CA rules as it can weaken security | +| 2. Egress IPs are dedicated and enumerable for Azure AD, but not for RPs traffic | Network traffic to Azure AD egresses through 1.1.1.1. 
RP traffic egresses through x.x.x.x | 1.1.1.1 | x.x.x.x | 1.1.1.1 | Critical Events | Default access token lifetime – 1 hour | Do not add non-dedicated or non-enumerable egress IPs (x.x.x.x) into Trusted Named Location Conditional Access rules as it can weaken security | +| 3. Egress IPs are non-dedicated/shared or not enumerable for both Azure AD and RPs traffic | Network traffic to Azure AD egresses through y.y.y.y. RP traffic egresses through x.x.x.x | y.y.y.y | x.x.x.x | N/A - no IP CA policies/Trusted Locations configured | Critical Events | Long lived – up to 28 hours | Don't add non-dedicated or non-enumerable egress IPs (x.x.x.x/y.y.y.y) into Trusted Named Location CA rules as it can weaken security | Networks and network services used by clients connecting to identity and resource providers continue to evolve and change in response to modern trends. These changes may affect Conditional Access and CAE configurations that rely on the underlying IP addresses. When deciding on these configurations, factor in future changes in technology and upkeep of the defined list of addresses in your plan. If you enable a user right after disabling, there's some latency before the acco ### Push notifications -An IP address policy isn't evaluated before push notifications are released. This scenario exists because push notifications are outbound and don't have an associated IP address to be evaluated against. If a user clicks into that push notification, for example an email in Outlook, CAE IP address policies are still enforced before the email can display. Push notifications display a message preview, which isn't protected by an IP address policy. All other CAE checks are done before the push notification being sent. If a user or device has its access removed, enforcement occurs within the documented period. +An IP address policy isn't evaluated before push notifications are released.
This scenario exists because push notifications are outbound and don't have an associated IP address to be evaluated against. If a user selects that push notification, for example an email in Outlook, CAE IP address policies are still enforced before the email can display. Push notifications display a message preview, which isn't protected by an IP address policy. All other CAE checks are done before the push notification is sent. If a user or device has its access removed, enforcement occurs within the documented period. ++### Guest users -## FAQs +Guest user accounts aren't supported by CAE. CAE revocation events and IP-based Conditional Access policies aren't enforced instantaneously. ### How will CAE work with Sign-in Frequency? |
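The claim-challenge mechanism described above reaches clients as a `claims` parameter on the resource provider's 401 response. As a rough sketch only (MSAL handles this for you, and the exact `WWW-Authenticate` header shape is a simplified assumption here, not the exact service output), a client could extract and decode such a challenge like this:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: pull the base64-encoded "claims" parameter out of a 401 response's
// WWW-Authenticate header and decode it, signaling that the cached token must
// be bypassed and a new token requested with these claims.
public class ClaimsChallenge {
    private static final Pattern CLAIMS = Pattern.compile("claims=\"([^\"]+)\"");

    // Returns the decoded claims JSON, or null when no challenge is present.
    public static String decode(String wwwAuthenticate) {
        Matcher m = CLAIMS.matcher(wwwAuthenticate);
        if (!m.find()) {
            return null;
        }
        byte[] raw = Base64.getDecoder().decode(m.group(1));
        return new String(raw, StandardCharsets.UTF_8);
    }
}
```

In a real client the decoded claims JSON would be passed back to the token request so Azure AD can reevaluate the session rather than replaying the cache.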
active-directory | Howto Conditional Access Policy Authentication Strength External | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-authentication-strength-external.md | In external user scenarios, the MFA authentication methods that a resource tenan > [!NOTE] > Currently, you can only apply authentication strength policies to external users who authenticate with Azure AD. For email one-time passcode, SAML/WS-Fed, and Google federation users, use the [MFA grant control](concept-conditional-access-grant.md#require-multi-factor-authentication) to require MFA.+ ## Configure cross-tenant access settings to trust MFA Authentication strength policies work together with [MFA trust settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md#to-change-inbound-trust-settings-for-mfa-and-device-claims) in your cross-tenant access settings to determine where and how the external user must perform MFA. An Azure AD user first authenticates with their own account in their home tenant. Then when this user tries to access your resource, Azure AD applies the authentication strength Conditional Access policy and checks to see if you've enabled MFA trust. -- **If MFA trust is enabled**, Azure AD checks the user's authentication session for a claim indicating that MFA has been fulfilled in the user's home tenant. The table below indicates which authentication methods are acceptable for MFA fulfillment when completed in an external user's home tenant.-- **If MFA trust is disabled**, the resource tenant presents the user with a challenge to complete MFA in the resource tenant using an acceptable authentication method. The table below shows which authentication methods are acceptable for MFA fulfillment by an external user.+- **If MFA trust is enabled**, Azure AD checks the user's authentication session for a claim indicating that MFA has been fulfilled in the user's home tenant. 
+- **If MFA trust is disabled**, the resource tenant presents the user with a challenge to complete MFA in the resource tenant using an acceptable authentication method. ++The authentication methods that external users can use to satisfy MFA requirements are different depending on whether the user is completing MFA in their home tenant or the resource tenant. See the table in [Conditional Access authentication strength](https://aka.ms/b2b-auth-strengths). > [!IMPORTANT] > Before you create the Conditional Access policy, check your cross-tenant access settings to make sure your inbound MFA trust settings are configured as intended.+ ## Choose an authentication strength Determine if one of the built-in authentication strengths will work for your scenario or if you'll need to create a custom authentication strength. Determine if one of the built-in authentication strengths will work for your sce 1. Review the built-in authentication strengths to see if one of them meets your requirements. 1. If you want to enforce a different set of authentication methods, [create a custom authentication strength](https://aka.ms/b2b-auth-strengths). -> [!NOTE] -> The authentication methods that external users can use to satisfy MFA requirements are different depending on whether the user is completing MFA in their home tenant or the resource tenant. See the table in [Conditional Access authentication strength](https://aka.ms/b2b-auth-strengths). - ## Create a Conditional Access policy Use the following steps to create a Conditional Access policy that applies an authentication strength to external users. |
active-directory | Msal Android Handling Exceptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-handling-exceptions.md | Title: Errors and exceptions (MSAL Android) + Title: Errors and exceptions (MSAL Android) description: Learn how to handle errors and exceptions, Conditional Access, and claims challenges in MSAL Android applications. -Exceptions in the Microsoft Authentication Library (MSAL) are intended to help app developers troubleshoot their application. Exception messages are not localized. +Exceptions in the Microsoft Authentication Library (MSAL) are intended to help app developers troubleshoot their application. Exception messages aren't localized. -When processing exceptions and errors, you can use the exception type itself and the error code to distinguish between exceptions. For a list of error codes, see [Authentication and authorization error codes](reference-aadsts-error-codes.md). +When processing exceptions and errors, you can use the exception type itself and the error code to distinguish between exceptions. For a list of error codes, see [Authentication and authorization error codes](reference-aadsts-error-codes.md). During the sign-in experience, you may encounter errors about consents, Conditional Access (MFA, Device Management, Location-based restrictions), token issuance and redemption, and user properties. --|Error class | Cause/error string| How to handle | -|--||-| -|`MsalUiRequiredException`| <ul><li>`INVALID_GRANT`: The refresh token used to redeem access token is invalid, expired, or revoked. This exception may be because of a password change. 
</li><li>`NO_TOKENS_FOUND`: Access token doesn't exist and no refresh token can be found to redeem access token.</li> <li>Step-up required<ul><li>MFA</li><li>Missing claims</li></ul></li><li>Blocked by Conditional Access (for example, [authentication broker](./msal-android-single-sign-on.md) installation required)</li><li>`NO_ACCOUNT_FOUND`: No account available in the cache for silent authentication.</li></ul> |Call `acquireToken()` to prompt the user to enter their username and password, and possibly consent and perform multi factor authentication.| -|`MsalDeclinedScopeException`|<ul><li>`DECLINED_SCOPE`: User or server has not accepted all scopes. The server may decline a scope if the requested scope is not supported, not recognized, or not supported for a particular account. </li></ul>| The developer should decide whether to continue authentication with the granted scopes or end the authentication process. Option to resubmit the acquire token request only for the granted scopes and provide hints for which permissions have been granted by passing `silentParametersForGrantedScopes` and calling `acquireTokenSilent`. | -|`MsalServiceException`|<ul><li>`INVALID_REQUEST`: This request is missing a required parameter, includes an invalid parameter, includes a parameter more than once, or is otherwise malformed. </li><li>`SERVICE_NOT_AVAILABLE`: Represents 500/503/506 error codes due to the service being down. </li><li>`UNAUTHORIZED_REQUEST`: The client is not authorized to request an authorization code.</li><li>`ACCESS_DENIED`: The resource owner or authorization server denied the request.</li><li>`INVALID_INSTANCE`: `AuthorityMetadata` validation failed</li><li>`UNKNOWN_ERROR`: Request to server failed, but no error and `error_description` are returned back from the service.</li><ul>| This exception class represents errors when communicating to the service, can be from the authorize or token endpoints. 
MSAL reads the error and error_description from the server response. Generally, these errors are resolved by fixing app configurations either in code or in the app registration portal. Rarely a service outage can trigger this warning, which can only be mitigated by waiting for the service to recover. | -|`MsalClientException`|<ul><li> `MULTIPLE_MATCHING_TOKENS_DETECTED`: There are multiple cache entries found and the sdk cannot identify the correct access or refresh token from the cache. This exception usually indicates a bug in the sdk for storing tokens or that the authority is not provided in the silent request and multiple matching tokens are found. </li><li>`DEVICE_NETWORK_NOT_AVAILABLE`: No active network is available on the device. </li><li>`JSON_PARSE_FAILURE`: The sdk failed to parse the JSON format.</li><li>`IO_ERROR`: `IOException` happened, could be a device or network error. </li><li>`MALFORMED_URL`: The URL is malformed. Likely caused when constructing the auth request, authority, or redirect URI. </li><li>`UNSUPPORTED_ENCODING`: The encoding is not supported by the device. </li><li>`NO_SUCH_ALGORITHM`: The algorithm used to generate [PKCE](https://tools.ietf.org/html/rfc7636) challenge is not supported. </li><li>`INVALID_JWT`: `JWT` returned by the server is not valid or is empty or malformed. </li><li>`STATE_MISMATCH`: State from authorization response did not match the state in the authorization request. For authorization requests, the sdk will verify the state returned from redirect and the one sent in the request. </li><li>`UNSUPPORTED_URL`: Unsupported URL, cannot perform ADFS authority validation. </li><li> `AUTHORITY_VALIDATION_NOT_SUPPORTED`: The authority is not supported for authority validation. The sdk supports B2C authorities, but doesn't support B2C authority validation. Only well-known host will be supported. </li><li>`CHROME_NOT_INSTALLED`: Chrome is not installed on the device. 
The sdk uses chrome custom tab for authorization requests if available, and will fall back to chrome browser. </li><li>`USER_MISMATCH`: The user provided in the acquire token request doesn't match the user returned from server.</li></ul>|This exception class represents general errors that are local to the library. These exceptions can be handled by correcting the request.| -|`MsalUserCancelException`|<ul><li>`USER_CANCELED`: The user initiated interactive flow and prior to receiving tokens back they canceled the request. </li></ul>|| -|`MsalArgumentException`|<ul><li>`ILLEGAL_ARGUMENT_ERROR_CODE`</li><li>`AUTHORITY_REQUIRED_FOR_SILENT`: Authority must be specified for `acquireTokenSilent`.</li></ul>|These errors can be mitigated by the developer correcting arguments and ensuring activity for interactive auth, completion callback, scopes, and an account with a valid ID have been provided.| -+| Error class | Cause/error string | How to handle | +| - | - | - | +| `MsalUiRequiredException` | <ul><li>`INVALID_GRANT`: The refresh token used to redeem access token is invalid, expired, or revoked. This exception may be because of a password change. </li><li>`NO_TOKENS_FOUND`: Access token doesn't exist and no refresh token can be found to redeem access token.</li> <li>Step-up required<ul><li>MFA</li><li>Missing claims</li></ul></li><li>Blocked by Conditional Access (for example, [authentication broker](./msal-android-single-sign-on.md) installation required)</li><li>`NO_ACCOUNT_FOUND`: No account available in the cache for silent authentication.</li></ul> | Call `acquireToken()` to prompt the user to enter their username and password, and possibly consent and perform multi factor authentication. | +| `MsalDeclinedScopeException` | <ul><li>`DECLINED_SCOPE`: User or server hasn't accepted all scopes. The server may decline a scope if the requested scope isn't supported, not recognized, or not supported for a particular account. 
</li></ul> | The developer should decide whether to continue authentication with the granted scopes or end the authentication process. Option to resubmit the acquire token request only for the granted scopes and provide hints for which permissions have been granted by passing `silentParametersForGrantedScopes` and calling `acquireTokenSilent`. | +| `MsalServiceException` | <ul><li>`INVALID_REQUEST`: This request is missing a required parameter, includes an invalid parameter, includes a parameter more than once, or is otherwise malformed. </li><li>`SERVICE_NOT_AVAILABLE`: Represents 500/503/506 error codes due to the service being down. </li><li>`UNAUTHORIZED_REQUEST`: The client isn't authorized to request an authorization code.</li><li>`ACCESS_DENIED`: The resource owner or authorization server denied the request.</li><li>`INVALID_INSTANCE`: `AuthorityMetadata` validation failed</li><li>`UNKNOWN_ERROR`: Request to server failed, but no error and `error_description` are returned back from the service.</li><ul> | This exception class represents errors when communicating to the service, can be from the authorize or token endpoints. MSAL reads the error and error_description from the server response. Generally, these errors are resolved by fixing app configurations either in code or in the app registration portal. Rarely a service outage can trigger this warning, which can only be mitigated by waiting for the service to recover. | +| `MsalClientException` | <ul><li> `MULTIPLE_MATCHING_TOKENS_DETECTED`: There are multiple cache entries found and the sdk can't identify the correct access or refresh token from the cache. This exception usually indicates a bug in the sdk for storing tokens or that the authority isn't provided in the silent request and multiple matching tokens are found. </li><li>`DEVICE_NETWORK_NOT_AVAILABLE`: No active network is available on the device. 
</li><li>`JSON_PARSE_FAILURE`: The sdk failed to parse the JSON format.</li><li>`IO_ERROR`: `IOException` happened, could be a device or network error. </li><li>`MALFORMED_URL`: The URL is malformed. Likely caused when constructing the auth request, authority, or redirect URI. </li><li>`UNSUPPORTED_ENCODING`: The encoding isn't supported by the device. </li><li>`NO_SUCH_ALGORITHM`: The algorithm used to generate [PKCE](https://tools.ietf.org/html/rfc7636) challenge isn't supported. </li><li>`INVALID_JWT`: `JWT` returned by the server isn't valid or is empty or malformed. </li><li>`STATE_MISMATCH`: State from authorization response didn't match the state in the authorization request. For authorization requests, the sdk will verify the state returned from redirect and the one sent in the request. </li><li>`UNSUPPORTED_URL`: Unsupported URL, can't perform ADFS authority validation. </li><li> `AUTHORITY_VALIDATION_NOT_SUPPORTED`: The authority isn't supported for authority validation. The sdk supports B2C authorities, but doesn't support B2C authority validation. Only well-known host will be supported. </li><li>`CHROME_NOT_INSTALLED`: Chrome isn't installed on the device. The sdk uses chrome custom tab for authorization requests if available, and will fall back to chrome browser. </li><li>`USER_MISMATCH`: The user provided in the acquire token request doesn't match the user returned from server.</li></ul> | This exception class represents general errors that are local to the library. These exceptions can be handled by correcting the request. | +| `MsalUserCancelException` | <ul><li>`USER_CANCELED`: The user initiated interactive flow and prior to receiving tokens back they canceled the request. 
</li></ul> | | +| `MsalArgumentException` | <ul><li>`ILLEGAL_ARGUMENT_ERROR_CODE`</li><li>`AUTHORITY_REQUIRED_FOR_SILENT`: Authority must be specified for `acquireTokenSilent`.</li></ul> | These errors can be mitigated by the developer correcting arguments and ensuring activity for interactive auth, completion callback, scopes, and an account with a valid ID have been provided. | ## Catching errors |
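The "How to handle" column of the table above amounts to a dispatch on the error strings it lists. The sketch below is illustrative only: `MsalErrorTriage` and its `Action` names are invented for this example and are not part of the MSAL SDK, where you would branch on the typed exceptions instead:

```java
// Illustrative triage of the error strings from the table above; not MSAL API.
public class MsalErrorTriage {
    public enum Action { INTERACTIVE_PROMPT, DECIDE_ON_SCOPES, FIX_REQUEST_OR_CONFIG, UNKNOWN }

    public static Action triage(String errorCode) {
        switch (errorCode) {
            case "INVALID_GRANT":     // refresh token invalid, expired, or revoked
            case "NO_TOKENS_FOUND":   // nothing cached to redeem silently
            case "NO_ACCOUNT_FOUND":  // no account available for silent auth
                return Action.INTERACTIVE_PROMPT;    // fall back to acquireToken()
            case "DECLINED_SCOPE":    // continue with granted scopes, or stop
                return Action.DECIDE_ON_SCOPES;
            case "INVALID_REQUEST":
            case "MALFORMED_URL":
            case "STATE_MISMATCH":
                return Action.FIX_REQUEST_OR_CONFIG; // correct request or app registration
            default:
                return Action.UNKNOWN;
        }
    }
}
```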
active-directory | Msal Net Token Cache Serialization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-token-cache-serialization.md | After Microsoft Authentication Library (MSAL) [acquires a token](msal-acquire-ca The recommendation is: - When you're writing a desktop application, use the cross-platform token cache as explained in [Desktop apps](msal-net-token-cache-serialization.md?tabs=desktop).-- Do nothing for [mobile and UWP apps](msal-net-token-cache-serialization.md?tabs=mobile). MSAL.NET provides secure storage for the cache.+- Do nothing for [mobile and Universal Windows Platform (UWP) apps](msal-net-token-cache-serialization.md?tabs=mobile). MSAL.NET provides secure storage for the cache. - In ASP.NET Core [web apps](scenario-web-app-call-api-overview.md) and [web APIs](scenario-web-api-call-api-overview.md), use [Microsoft.Identity.Web](microsoft-identity-web.md) as a higher-level API. You'll get token caches and much more. See [ASP.NET Core web apps and web APIs](msal-net-token-cache-serialization.md?tabs=aspnetcore). - In the other cases of [web apps](scenario-web-app-call-api-overview.md) and [web APIs](scenario-web-api-call-api-overview.md): - If you request tokens for users in a production application, use a [distributed token cache](msal-net-token-cache-serialization.md?tabs=aspnet#distributed-caches) (Redis, SQL Server, Azure Cosmos DB, distributed memory). Use token cache serializers available from [Microsoft.Identity.Web.TokenCache](https://www.nuget.org/packages/Microsoft.Identity.Web.TokenCache/). You can specify that you don't want any token cache serialization; instead, `WithCacheOptions(CacheOptions.EnableSharedCacheOptions)` makes the internal MSAL token cache shared between MSAL client application instances. Sharing a token cache is faster than using any token cache serialization, but the internal in-memory token cache doesn't have eviction policies.
Existing tokens will be refreshed in place, but fetching tokens for different users, tenants, and resources makes the cache grow accordingly. -If you use this approach and have a large number of users or tenants, be sure to monitor the memory footprint. If the memory footprint becomes a problem, consider enabling token cache serialization, which might reduce the internal cache size. Also be aware that currently, you can't use shared cache and cache serialization together. +If you use this approach and have a large number of users or tenants, be sure to monitor the memory footprint. If the memory footprint becomes a problem, consider enabling token cache serialization, which might reduce the internal cache size. Currently, you can't use shared cache and cache serialization together. #### In-memory token cache |
active-directory | V2 Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-overview.md | Choose your preferred [application scenario](authentication-flows-app-scenarios. As you work with the Microsoft identity platform to integrate authentication and authorization in your apps, you can refer to this image that outlines the most common app scenarios and their identity components. Select the image to view it full-size. -[](./media/v2-overview/application-scenarios-identity-platform.svg#lightbox) +[](./media/v2-overview/application-scenarios-identity-platform.png#lightbox) ## Learn authentication concepts |
active-directory | V2 Protocols Oidc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-protocols-oidc.md | The following table describes error codes that can be returned in the `error` pa Receiving an ID token in your app might not always be sufficient to fully authenticate the user. You might also need to validate the ID token's signature and verify its claims per your app's requirements. Like all OpenID providers, the Microsoft identity platform's ID tokens are [JSON Web Tokens (JWTs)](https://tools.ietf.org/html/rfc7519) signed by using public key cryptography. -Web apps and web APIs that use ID tokens for authorization must validate them because such applications gate access to data. Other types of application might not benefit from ID token validation, however. Native and single-page apps (SPAs), for example, rarely benefit from ID token validation because any entity with physical access to the device or browser can potentially bypass the validation. +Web apps and web APIs that use ID tokens for authorization must validate them because such applications gate access to data. Other types of application might not benefit from ID token validation, however. Native and single-page apps (SPAs), for example, rarely benefit from ID token validation because any entity with physical access to the device or browser can potentially bypass the validation. Two examples of token validation bypass are: |
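One reason validation matters: a JWT's header and payload are only base64url-encoded, not encrypted, so anyone can decode and alter them; only the signature and claim checks prove the token. The sketch below (`IdTokenPeek` is a made-up helper for illustration) decodes the payload for inspection only and performs no signature, issuer, audience, or expiry checks, which real apps must do with a maintained JWT library and the keys published at the tenant's `jwks_uri`:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch: split a compact JWS and base64url-decode the payload so its claims
// can be inspected. This is NOT validation: no signature, iss, aud, or exp
// checks happen here.
public class IdTokenPeek {
    public static String payloadJson(String jwt) {
        String[] parts = jwt.split("\\.");
        if (parts.length != 3) {
            throw new IllegalArgumentException("not a compact JWS");
        }
        byte[] raw = Base64.getUrlDecoder().decode(parts[1]);
        return new String(raw, StandardCharsets.UTF_8);
    }
}
```

Because any SPA or device holder can run exactly this decoding, an unvalidated ID token proves nothing; the signature check is what gates access.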
active-directory | 6 Secure Access Entitlement Managment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/6-secure-access-entitlement-managment.md | Title: Manage external access with Azure Active Directory Entitlement Management -description: How to use Azure Active Directory Entitlement Management as a part of your overall external access security plan. + Title: Manage external access with Azure Active Directory entitlement management +description: How to use Azure AD entitlement management as a part of your overall external access security plan. -# Manage external access with Entitlement Management +# Manage external access with Azure Active Directory entitlement management +Use the entitlement management feature to manage the identity and access lifecycle. You can automate access request workflows, access assignments, reviews, and expiration. Delegated non-admins use entitlement management to create access packages that external users, from other organizations, can request access to. One-stage and multi-stage approval workflows are configurable to evaluate requests, and provision users for time-limited access with recurring reviews. Use entitlement management for policy-based provisioning and deprovisioning of external accounts. -[Entitlement management](../governance/entitlement-management-overview.md) is an identity governance capability that enables organizations to manage identity and access lifecycle at scale by automating access request workflows, access assignments, reviews, and expiration. Entitlement management allows delegated non-admins to create [access packages](../governance/entitlement-management-overview.md) that external users from other organizations can request access to. One and multi-stage approval workflows can be configured to evaluate requests, and [provision](../governance/what-is-provisioning.md) users for time-limited access with recurring reviews.
Entitlement management enables policy-based provisioning and deprovisioning of external accounts. +Learn more: -## Key concepts for enabling Entitlement Management +* [What is entitlement management?](../governance/entitlement-management-overview.md) +* [What are access packages and what resources can I manage with them?](../governance/entitlement-management-overview.md#what-are-access-packages-and-what-resources-can-i-manage-with-them) +* [What is provisioning?](../governance/what-is-provisioning.md) -The following key concepts are important to understand for entitlement management. --### Access Packages +## Enable entitlement management -An [access package](../governance/entitlement-management-overview.md) is the foundation of entitlement management. Access packages are groupings of policy-governed resources a user needs to collaborate on a project or do other tasks. For example, an access package might include: --* access to specific SharePoint sites. +The following key concepts are important to understand for entitlement management. -* enterprise applications including your custom in-house and SaaS apps like Salesforce. +### Access packages -* Microsoft Teams. +An access package is the foundation of entitlement management: groupings of policy-governed resources for users to collaborate on a project or do other tasks. For example, an access package might include: -* Microsoft 365 Groups. +* Access to SharePoint sites +* Enterprise applications, including your custom in-house and SaaS apps, like Salesforce +* Microsoft Teams +* Microsoft 365 Groups ### Catalogs -Access packages reside in [catalogs](../governance/entitlement-management-catalog-create.md). You create a catalog when you want to group related resources and access packages and delegate the ability to manage them. First you add resources to a catalog, and then you can add those resources to access packages. 
For example, you might want to create a ΓÇ£FinanceΓÇ¥ catalog, and [delegate its management](../governance/entitlement-management-delegate.md) to a member of the finance team. That person can then [add resources](../governance/entitlement-management-catalog-create.md), create access packages, and manage access approval to those packages. +Access packages reside in catalogs. When you want to group related resources and access packages and delegate their management, you create a catalog. First, you add resources to a catalog, and then you can add resources to access packages. For example, you can create a finance catalog, and delegate its management to a member of the finance team. That person can add resources, create access packages, and manage access approval. -The following diagram shows a typical governance lifecycle for an external user gaining access to an access package that has an expiration. +Learn more: - +* [Create and manage a catalog of resources in entitlement management](../governance/entitlement-management-catalog-create.md) +* [Delegation and roles in entitlement management](../governance/entitlement-management-delegate.md) +* [Add resources to a catalog](../governance/entitlement-management-catalog-create.md#add-resources-to-a-catalog) -### Self-service external access +The following diagram shows a typical governance lifecycle of an external user gaining access to an access package, with an expiration. -You can surface access packages through the [Azure AD My Access Portal](../governance/entitlement-management-request-access.md) to enable external users to request access. Policies determine who can request an access package. 
You specify who is allowed to request the access package: +  -* Specific [connected organizations](../governance/entitlement-management-organization.md) +### Self-service external access -* All configured connected organizations +You can make access packages available, through the Azure AD My Access portal, to enable external users to request access. Policies determine who can request an access package. See, [Request access to an access package in entitlement management](../governance/entitlement-management-request-access.md). -* All users from any organization +You specify who is allowed to request the access package: -* Member or guest users already in your tenant +* Connected organizations + * See, [Add a connected organization in entitlement management](../governance/entitlement-management-organization.md) +* Configured connected organizations +* Users from organizations +* Member or guest users in your tenant ### Approvals -ΓÇÄAccess packages can include mandatory approval for access. **Always implement approval processes for external users**. Approvals can be a single or multi-stage approval. Approvals are determined by policies. If both internal and external users need to access the same package, you'll likely set up different access policies for different categories of connected organizations, and for internal users. -### Expiration -ΓÇÄAccess packages can include an expiration date. Expiration can be set to a specific day or give the user a specific number of days for access. When the access package expires, and the user has no other access, the B2B guest user object representing the user can be deleted or blocked from signing in. We recommend that you enforce expiration on access packages for external users. Not all access packages have expirations. For those that don't, ensure that you perform access reviews. 
--### Access reviews --Access packages can require periodic [access reviews](../governance/manage-guest-access-with-access-reviews.md), which require the package owner or a designee to attest to the continued need for usersΓÇÖ access. --Before you set up your review, determine the following. --* Who -- * What are the criteria for continued access? -- * Who are the specified reviewers? --* How often should scheduled reviews occur? -- * Built in options include monthly, quarterly, bi-annually or annually. -- * We recommend quarterly or more frequently for packages that support external access. -- +Access packages can include mandatory approval for access. Approvals can be single or multi-stage and are determined by policies. If internal and external users need to access the same package, you can set up access policies for categories of connected organizations, and for internal users. > [!IMPORTANT]-> Access reviews of access packages only review access granted through Entitlement Management. You must therefore set up other processes to review any access provided to external users outside of Entitlement Management. +> Implement approval processes for external users. -For more information about access reviews, see [Planning an Azure AD Access Reviews deployment](../governance/deploy-access-reviews.md). +### Expiration -## Using automation in Entitlement Management +Access packages can include an expiration date or a number of days you set for access. When the access package expires, and access ends, the B2B guest user object representing the user can be deleted or blocked from signing in. We recommend you enforce expiration on access packages for external users. Not all access packages have expirations. -You can perform [Entitlement Management functions by using Microsoft Graph](/graph/tutorial-access-package-api), including +> [!IMPORTANT] +> For packages without expiration, perform regular access reviews. 
-* [Manage access packages](/graph/api/resources/accesspackage) +### Access reviews -* [Manage access reviews](/graph/api/resources/accessreviewsv2-overview) +Access packages can require periodic access reviews, which require the package owner or a designee to attest to the continued need for users' access. See, [Manage guest access with access reviews](../governance/manage-guest-access-with-access-reviews.md). -* [Manage connected organizations](/graph/api/resources/connectedorganization) +Before you set up your review, determine the following criteria: -* [Manage Entitlement Management settings](/graph/api/resources/entitlementmanagementsettings) +* Who + * Criteria for continued access + * Reviewers +* How often + * Built-in options are monthly, quarterly, bi-annually, or annually + * We recommend quarterly, or more frequent, reviews for packages that support external access -## Recommendations +> [!IMPORTANT] +> Access package reviews examine access granted through entitlement management. Set up other processes to review access to external users, outside entitlement management. -We recommend the practices to govern external access with Entitlement Management. +Learn more: [Plan a Microsoft Entra access reviews deployment](../governance/deploy-access-reviews.md). -**For projects with one or more business partners, [Create and use access packages](../governance/entitlement-management-access-package-create.md) to onboard and provision those partner's users access to resources**. +## Using entitlement management automation -* If you already have B2B users in your directory, you can also directly assign them to the appropriate access packages. 
+* [Working with the Azure AD entitlement management API](/graph/api/resources/entitlementmanagement-overview?view=graph-rest-1.0&preserve-view=true ) +* [accessPackage resource type](/graph/api/resources/accesspackage?view=graph-rest-1.0&preserve-view=true ) +* [Azure AD access reviews](/graph/api/resources/accessreviewsv2-overview?view=graph-rest-1.0&preserve-view=true ) +* [connectedOrganization resource type](/graph/api/resources/connectedorganization?view=graph-rest-1.0&preserve-view=true ) +* [entitlementManagementSettings resource type](/graph/api/resources/entitlementmanagementsettings?view=graph-rest-1.0&preserve-view=true ) -* You can assign access in the [Azure portal](../governance/entitlement-management-access-package-assignments.md), or via [Microsoft Graph](/graph/api/resources/accesspackageassignmentrequest). +## External access governance recommendations -**Use your Identity Governance settings to remove users from your directory when their access packages expire**. +### Best practices - +We recommend the following practices to govern external access with entitlement management. -These settings only apply to users who were onboarded through Entitlement Management. +* For projects with one or more business partners, create and use access packages to onboard and provide access to resources. + * [Create a new access package in entitlement management](../governance/entitlement-management-access-package-create.md) +* If you have B2B users in your directory, you can assign them to access packages. 
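The Graph-based assignment flow mentioned above can be sketched as building (not sending) the request for an admin-direct access package assignment. This is a hedged sketch, not the documented sample: the `AdminAdd` request shape follows the beta `accessPackageAssignmentRequest` resource, and every ID below is a hypothetical placeholder.

```python
# Sketch: build, but do not send, a Microsoft Graph request that directly
# assigns a B2B user to an access package ("AdminAdd"). IDs are placeholders.
import json

GRAPH_BETA = "https://graph.microsoft.com/beta"

def build_assignment_request(access_package_id, assignment_policy_id, target_user_id):
    """Return (method, url, body) for an AdminAdd accessPackageAssignmentRequest."""
    body = {
        "requestType": "AdminAdd",
        "accessPackageAssignment": {
            "targetId": target_user_id,              # the B2B guest's object ID
            "assignmentPolicyId": assignment_policy_id,
            "accessPackageId": access_package_id,
        },
    }
    url = f"{GRAPH_BETA}/identityGovernance/entitlementManagement/accessPackageAssignmentRequests"
    return "POST", url, json.dumps(body)

method, url, body = build_assignment_request(
    "pkg-00000000", "policy-00000000", "user-00000000")  # hypothetical IDs
```

Verify the resource shape against the current Graph reference before sending the request with an authenticated client.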
+* You can assign access in the Azure portal or with Microsoft Graph + * [View, add, and remove assignments for an access package in entitlement management](../governance/entitlement-management-access-package-assignments.md) + * [Create a new access package in entitlement management](../governance/entitlement-management-access-package-create.md) -**[Delegate management of catalogs and access packages](../governance/entitlement-management-delegate.md) to business owners, who have more information on who should access**. +### Identity Governance - Settings - +Use **Identity Governance - Settings** to remove users from your directory when their access packages expire. The following settings apply to users onboarded with entitlement management. -**[Enforce expiration of access packages](../governance/entitlement-management-access-package-lifecycle-policy.md) to which external users have access.** +  +### Delegate catalog and package management - +You can delegate catalog and package management to business owners, who have more information on who should access. See, [Delegation and roles in entitlement management](../governance/entitlement-management-delegate.md) +  -* If you know the end date of a project-based access package, use the On Date to set the specific date. +### Enforce access package expiration -* Otherwise we recommend the expiration be no longer than 365 days, unless it is known to be a multi-year engagement. +You can enforce access expiration for external users. See, [Change lifecycle settings for an access package in entitlement management](../governance/entitlement-management-access-package-lifecycle-policy.md). -* Allow users to extend access. +  -* Require approval to grant the extension. +* For the end date of a project-based access package, use **On date** to set the date. 
+ * Otherwise we recommend expiration to be no longer than 365 days, unless it's a multi-year project +* Allow users to extend access + * Require approval to grant the extension - +### Enforce guest-access package reviews -* Enforce reviews quarterly. +You can enforce reviews of guest-access packages to avoid inappropriate access for guests. See, [Manage guest access with access reviews](../governance/manage-guest-access-with-access-reviews.md). -* For compliance-sensitive projects, set the reviewers to be specific reviewers, rather than self-review for external users. The users who are access package managers are a good place to start for reviewers. +  -* For less sensitive projects, having the users self-review will reduce the burden on the organization to remove access from users who are no longer with their home organization. +* Enforce quarterly reviews +* For compliance-related projects, set the reviewers to be specific reviewers, rather than self-review for external users. + * You can use access package managers as reviewers +* For less sensitive projects, users self-reviewing reduces the burden to remove access from users no longer with the organization. -For more information, see [Govern access for external users in Azure AD Entitlement Management](../governance/entitlement-management-external-users.md) +Learn more: [Govern access for external users in entitlement management](../governance/entitlement-management-external-users.md) ### Next steps -See the following articles on securing external access to resources. We recommend you take the actions in the listed order. +See the following articles to learn more about securing external access to resources. We recommend you follow the listed order. 1. [Determine your security posture for external access](1-secure-access-posture.md) -2. [Discover your current state](2-secure-access-current-state.md) +2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md) -3. 
[Create a governance plan](3-secure-access-plan.md) +3. [Create a security plan for external access](3-secure-access-plan.md) -4. [Use groups for security](4-secure-access-groups.md) +4. [Securing external access with groups](4-secure-access-groups.md) -5. [Transition to Azure AD B2B](5-secure-access-b2b.md) +5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md) -6. [Secure access with Entitlement Management](6-secure-access-entitlement-managment.md) (You are here.) +6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md) (You're here) -7. [Secure access with Conditional Access policies](7-secure-access-conditional-access.md) +7. [Manage external access with Conditional Access policies](7-secure-access-conditional-access.md) -8. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md) +8. [Control access with sensitivity labels](8-secure-access-sensitivity-labels.md) -9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md) +9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md) |
active-directory | Azure Ad Data Residency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/azure-ad-data-residency.md | For more information on data residency in Microsoft Cloud offerings, see the fol * [Microsoft 365 data locations - Microsoft 365 Enterprise](/microsoft-365/enterprise/o365-data-locations?view=o365-worldwide&preserve-view=true) * [Microsoft Privacy - Where is Your Data Located?](https://www.microsoft.com/trust-center/privacy/data-location?rtc=1) * Download PDF: [Privacy considerations in the cloud](https://go.microsoft.com/fwlink/p/?LinkID=2051117&clcid=0x409&culture=en-us&country=US)++## Next steps ++* [Azure Active Directory and data residency](azure-ad-data-residency.md) (You're here) ++* [Data operational considerations](data-operational-considerations.md) +* [Data protection considerations](data-protection-considerations.md) + |
active-directory | Data Operational Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/data-operational-considerations.md | To roll out changes to the service across data centers, the Azure AD team define ## Resources -* [Azure AD and data residency](azure-ad-data-residency.md) * [Microsoft Service Trust Documents](https://servicetrust.microsoft.com/Documents/TrustDocuments) * [Microsoft Azure Trusted Cloud](https://azure.microsoft.com/explore/trusted-cloud/) * [Office 365 data centers](https://social.technet.microsoft.com/wiki/contents/articles/37502.office-365-how-to-change-data-center-regions.aspx#Moving_Office_365_Data_Centers)++## Next steps ++* [Azure Active Directory and data residency](azure-ad-data-residency.md) ++* [Data operational considerations](data-operational-considerations.md) (You're here) +* [Data protection considerations](data-protection-considerations.md) |
active-directory | Data Protection Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/data-protection-considerations.md | For more information about Secret encryption at rest, see the following table. |Microsoft Authenticator app: Backup and restore of enterprise account metadata |AES-256 | ## Resources-* [Azure AD and data residency](azure-ad-data-residency.md) + * [Microsoft Service Trust Documents](https://servicetrust.microsoft.com/Documents/TrustDocuments) * [Microsoft Azure Trust Center](https://azure.microsoft.com/overview/trusted-cloud/) * [Where is my data? - Office 365 documentation](http://o365datacentermap.azurewebsites.net/) * [Recover from deletions in Azure Active Directory](recover-from-deletions.md)++## Next steps ++* [Azure Active Directory and data residency](azure-ad-data-residency.md) ++* [Data operational considerations](data-operational-considerations.md) +* [Data protection considerations](data-protection-considerations.md) (You're here) |
active-directory | How To Customize Branding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-customize-branding.md | The sign-in experience process is grouped into sections. At the end of each sect - Choose one of two **Templates**: Full-screen or partial-screen background. The full-screen background could obscure your background image, so choose the partial-screen background if your background image is important. - The details of the **Header** and **Footer** options are set on the next two sections of the process. -- **Custom CSS**: Upload custom CSS to replace the Microsoft default style of the page. [Download the CSS template](https://download.microsoft.com/download/7/2/7/727f287a-125d-4368-a673-a785907ac5ab/custom-styles-template.css).+- **Custom CSS**: Upload custom CSS to replace the Microsoft default style of the page. [Download the CSS template](https://download.microsoft.com/download/7/2/7/727f287a-125d-4368-a673-a785907ac5ab/custom-styles-template-013023.css). ## Header |
active-directory | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md | Azure AD receives improvements on an ongoing basis. To stay up to date with the This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md). +## January 2023 ++### Public Preview - Cross-tenant synchronization ++++**Type:** New feature +**Service category:** Provisioning +**Product capability:** Collaboration ++Cross-tenant synchronization allows you to set up a scalable and automated solution for users to access applications across tenants in your organization. It builds upon the Azure AD B2B functionality and automates creating, updating, and deleting B2B users. For more information, see: [What is cross-tenant synchronization? (preview)](../multi-tenant-organizations/cross-tenant-synchronization-overview.md). +++++### Public Preview - Devices Blade Self-Help Capability for Pending Devices ++++**Type:** New feature +**Service category:** Device Access Management +**Product capability:** End User Experiences ++In the **All Devices** blade under the registered column, you can now select any pending device, which opens a context pane to help troubleshoot why the device may be pending. You can also offer feedback on whether the summarized information is helpful. For more information, see: [Pending devices in Azure Active Directory](/troubleshoot/azure/active-directory/pending-devices). +++++### General Availability - Apple Watch companion app removed from Authenticator for iOS ++++**Type:** Deprecated +**Service category:** Identity Protection +**Product capability:** Identity Security & Protection ++In the January 2023 release of Authenticator for iOS, there will be no companion app for watchOS due to it being incompatible with Authenticator security features. 
This means you won't be able to install or use Authenticator on Apple Watch. This change only impacts Apple Watch, so you'll still be able to use Authenticator on your other devices. For more information, see: [Common questions about the Microsoft Authenticator app](https://support.microsoft.com/account-billing/common-questions-about-the-microsoft-authenticator-app-12d283d1-bcef-4875-9ae5-ac360e2945dd). +++++### General Availability - New Federated Apps available in Azure AD Application gallery - January 2023 ++++**Type:** New feature +**Service category:** Enterprise Apps +**Product capability:** 3rd Party Integration ++In January 2023 we've added the following 10 new applications in our App gallery with Federation support: ++[MINT TMS](../saas-apps/mint-tms-tutorial.md), [Exterro Legal GRC Software Platform](../saas-apps/exterro-legal-grc-software-platform-tutorial.md), [SIX.ONE Identity Access Manager](https://portal.six.one/), [Lusha](../saas-apps/lusha-tutorial.md), [Descartes](../saas-apps/descartes-tutorial.md), [Travel Management System](https://tms.billetkontoret.dk/), [Pinpoint (SAML)](../saas-apps/pinpoint-tutorial.md), [my.sdworx.com](../saas-apps/mysdworxcom-tutorial.md), [itopia Labs](https://labs.itopia.com/), [Better Stack](https://betteruptime.com/users/sign-up). ++You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial. ++For listing your application in the Azure AD app gallery, please read the details here https://aka.ms/AzureADAppRequest +++++### Public Preview - New provisioning connectors in the Azure AD Application Gallery - January 2023 ++++**Type:** New feature +**Service category:** App Provisioning +**Product capability:** 3rd Party Integration ++We've added the following new applications in our App gallery with Provisioning support. 
You can now automate creating, updating, and deleting of user accounts for these newly integrated apps: ++- [SurveyMonkey Enterprise](../saas-apps/surveymonkey-enterprise-provisioning-tutorial.md) +++For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md). +++++### Public Preview - Azure AD cloud sync new user experience +++**Type:** Changed feature +**Service category:** Azure AD Connect Cloud Sync +**Product capability:** Identity Governance ++Try out the new guided experience for syncing objects from AD to Azure AD using Azure AD Cloud Sync in the Azure portal. With this new experience, Hybrid Identity Administrators can easily determine which sync engine to use for their scenarios and learn more about the various options they have with our sync solutions. With a rich set of tutorials and videos, customers will be able to learn everything about Azure AD cloud sync in a single place. ++This experience also walks administrators through the different steps involved in setting up a cloud sync configuration, and provides an intuitive experience to help them easily manage it. Admins can also get insights into their sync configuration by using the "Insights" option, which is integrated with Azure Monitor and Workbooks. 
++For more information, see: ++- [Create a new configuration for Azure AD Connect cloud sync](../cloud-sync/how-to-configure.md) +- [Attribute mapping in Azure AD Connect cloud sync](../cloud-sync/how-to-attribute-mapping.md) +- [Azure AD cloud sync insights workbook](/azure/active-directory/cloud-sync/how-to-cloud-sync-workbook) ++++### Public Preview - Support for Directory Extensions using Azure AD cloud sync ++++**Type:** New feature +**Service category:** Provisioning +**Product capability:** AAD Connect Cloud Sync ++Hybrid IT Admins can now sync both Active Directory and Azure AD Directory Extensions using Azure AD Cloud Sync. This new capability adds the ability to dynamically discover the schema for both Active Directory and Azure AD, allowing customers to simply map the needed attributes using Cloud Sync's attribute mapping experience. ++For more details on how to enable this feature, see: [Cloud Sync directory extensions and custom attribute mapping](/azure/active-directory/cloud-sync/custom-attribute-mapping) ++++ ## December 2022 ### Public Preview - Windows 10+ Troubleshooter for Diagnostic Logs |
active-directory | Using Multi Stage Reviews | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/using-multi-stage-reviews.md | Title: Using multi-stage reviews to meet your attestation and certification needs - Azure AD -description: Learn how to use multi-stage reviews to design more efficient reviews in Azure Active Directory. + Title: Using multi-stage reviews to meet your attestation and certification needs - Microsoft Entra +description: Learn how to use multi-stage reviews to design more efficient reviews with Microsoft Entra. -# Using multi-stage reviews to meet your attestation and certification needs in Azure AD +# Using multi-stage reviews to meet your attestation and certification needs with Microsoft Entra -Azure AD Access Reviews support up to three review stages, in which multiple types of reviewers engage in determining who still needs access to company resources. These reviews could be for membership in groups or teams, access to applications, assignments to privileged roles, or access package assignments. When review administrators configure the review for automatic application of decisions, at the end of the review period, access is revoked for denied users. +Microsoft Entra Access Reviews support up to three review stages, in which multiple types of reviewers engage in determining who still needs access to company resources. These reviews could be for membership in groups or teams, access to applications, assignments to privileged roles, or access package assignments. When review administrators configure the review for automatic application of decisions, at the end of the review period, access is revoked for denied users. ## Use cases for multi-stage reviews |
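The three-stage capability described above is configured as stage settings on a review definition. As a hedged sketch only: the field names below (`stageId`, `durationInDays`, `reviewers` with `query`/`queryType`) are assumptions modeled on the Graph beta `accessReviewScheduleDefinition` schema, and the reviewer queries are hypothetical examples; verify both against the current API reference.

```python
# Sketch: build the stage list for a three-stage access review, one reviewer
# population per stage. Field shapes are assumptions; queries are examples.
def build_stages(reviewer_queries, days_per_stage=5):
    """Return one stage entry per reviewer query, run in order."""
    return [
        {
            "stageId": str(index + 1),
            "durationInDays": days_per_stage,
            "reviewers": [{"query": query, "queryType": "MicrosoftGraph"}],
        }
        for index, query in enumerate(reviewer_queries)
    ]

stages = build_stages([
    "/groups/{hypothetical-group-id}/owners",  # stage 1: resource owners
    "./manager",                               # stage 2: each user's manager
    "/users/{hypothetical-reviewer-id}",       # stage 3: designated reviewer
])
```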
active-directory | How To Connect Install Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md | The following table shows the minimum requirements for the Azure AD Connect sync | Number of objects in Active Directory | CPU | Memory | Hard drive size | | | | | |-| Fewer than 10,000 |1.6 GHz |4 GB |70 GB | -| 10,000–50,000 |1.6 GHz |4 GB |70 GB | +| Fewer than 10,000 |1.6 GHz |6 GB |70 GB | +| 10,000–50,000 |1.6 GHz |6 GB |70 GB | | 50,000–100,000 |1.6 GHz |16 GB |100 GB |-| For 100,000 or more objects, the full version of SQL Server is required. For performance reasons, installing locally is preferred. | | | | +| For 100,000 or more objects, the full version of SQL Server is required. For performance reasons, installing locally is preferred. The following values are valid only for Azure AD Connect installation. If SQL Server will be installed on the same server, further memory, drive, and CPU is required. | | | | | 100,000–300,000 |1.6 GHz |32 GB |300 GB | | 300,000–600,000 |1.6 GHz |32 GB |450 GB | | More than 600,000 |1.6 GHz |32 GB |500 GB | |
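The updated sizing table reads naturally as a threshold lookup. The thresholds below come straight from the table (CPU is 1.6 GHz in every tier); the helper function itself is only illustrative.

```python
# Sketch: minimum Azure AD Connect sizing per the table above.
# Tuples are (max objects, memory GB, disk GB, full SQL Server required).
SIZING = [
    (50_000, 6, 70, False),         # fewer than 10,000 and 10,000-50,000
    (100_000, 16, 100, False),      # 50,000-100,000
    (300_000, 32, 300, True),       # 100,000-300,000 (full SQL Server needed)
    (600_000, 32, 450, True),       # 300,000-600,000
    (float("inf"), 32, 500, True),  # more than 600,000
]

def minimum_requirements(object_count):
    """Return the first sizing tier that covers the given object count."""
    for max_objects, memory_gb, disk_gb, full_sql in SIZING:
        if object_count <= max_objects:
            return {"memory_gb": memory_gb, "disk_gb": disk_gb, "full_sql": full_sql}
```

Note the table's own caveat: these values cover only the Azure AD Connect installation; co-locating SQL Server on the same host needs more resources.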
active-directory | Cross Tenant Synchronization Configure Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure-graph.md | These steps describe how to use Microsoft Graph Explorer (recommended), but you Content-Type: application/json {- "tenantId": "3d0f5dec-5d3d-455c-8016-e2af1ae4d31a", + "tenantId": "3d0f5dec-5d3d-455c-8016-e2af1ae4d31a" } ``` These steps describe how to use Microsoft Graph Explorer (recommended), but you } ``` -1. Use the [Update crossTenantIdentitySyncPolicyPartner](/graph/api/crosstenantidentitysyncpolicypartner-update?view=graph-rest-beta&preserve-view=true) API to enable user synchronization in the target tenant. +1. Use the [Create identitySynchronization](/graph/api/crosstenantaccesspolicyconfigurationpartner-put-identitysynchronization?view=graph-rest-beta&preserve-view=true) API to enable user synchronization in the target tenant. **Request** ```http- PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/3d0f5dec-5d3d-455c-8016-e2af1ae4d31a/identitySynchronization + PUT https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/3d0f5dec-5d3d-455c-8016-e2af1ae4d31a/identitySynchronization Content-type: application/json { |
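The corrected request above (a PUT, not a PATCH) can be assembled programmatically. A hedged sketch: it builds the request without sending it, the tenant ID is the example value from the article, and the body shape (`userSyncInbound.isSyncAllowed`) is an assumption based on the beta `crossTenantIdentitySyncPolicyPartner` resource to verify against the current reference.

```python
# Sketch: build, but do not send, the PUT that enables user synchronization
# for a partner tenant via the Graph beta cross-tenant access policy API.
import json

def build_identity_sync_request(partner_tenant_id):
    """Return (method, url, body) for enabling inbound user sync (assumed shape)."""
    url = (
        "https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy"
        f"/partners/{partner_tenant_id}/identitySynchronization"
    )
    body = {"userSyncInbound": {"isSyncAllowed": True}}
    return "PUT", url, json.dumps(body)

method, url, body = build_identity_sync_request("3d0f5dec-5d3d-455c-8016-e2af1ae4d31a")
```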
active-directory | Administrative Units | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/administrative-units.md | The following sections describe current support for administrative unit scenario | Permissions | Microsoft Graph/PowerShell | Azure portal | Microsoft 365 admin center | | | :: | :: | :: | | Administrative unit-scoped creation and deletion of groups | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |-| Administrative unit-scoped management of group properties and membership | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | +| Administrative unit-scoped management of group properties and membership for Microsoft 365 groups | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | +| Administrative unit-scoped management of group properties and membership for all other groups | :heavy_check_mark: | :heavy_check_mark: | :x: | | Administrative unit-scoped management of group licensing | :heavy_check_mark: | :heavy_check_mark: | :x: | ### Device management |
active-directory | Custom Consent Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-consent-permissions.md | To delegate the creation, update and deletion of [app consent policies](../manag > | - | -- | > | microsoft.directory/servicePrincipals/managePermissionGrantsForSelf.{id} | Grants the ability to consent to apps on behalf of self (user consent), subject to app consent policy `{id}`. | > | microsoft.directory/servicePrincipals/managePermissionGrantsForAll.{id} | Grants the permission to consent to apps on behalf of all (tenant-wide admin consent), subject to app consent policy `{id}`. |-> | microsoft.directory/permissionGrantPolicies/standard/read | Grants the ability to read app consent policies. | -> | microsoft.directory/permissionGrantPolicies/basic/update | Grants the ability to update basic properties on existing app consent policies. | -> | microsoft.directory/permissionGrantPolicies/create | Grants the ability to create app consent policies. | -> | microsoft.directory/permissionGrantPolicies/delete | Grants the ability to delete app consent policies. | +> | microsoft.directory/permissionGrantPolicies/standard/read | Read standard properties of permission grant policies | +> | microsoft.directory/permissionGrantPolicies/basic/update | Update basic properties of permission grant policies | +> | microsoft.directory/permissionGrantPolicies/create | Create permission grant policies | +> | microsoft.directory/permissionGrantPolicies/delete | Delete permission grant policies | ## Next steps |
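Permission strings like the four `permissionGrantPolicies` actions above are granted by placing them in a custom role definition. A minimal sketch of a body shaped for `POST /roleManagement/directory/roleDefinitions`: the display name is hypothetical, and only the action strings are taken from the table.

```python
# Sketch: a custom Azure AD role granting the app consent policy actions
# listed above. Role name and description are hypothetical placeholders.
import json

ALLOWED_ACTIONS = [
    "microsoft.directory/permissionGrantPolicies/standard/read",
    "microsoft.directory/permissionGrantPolicies/basic/update",
    "microsoft.directory/permissionGrantPolicies/create",
    "microsoft.directory/permissionGrantPolicies/delete",
]

role_definition = {
    "displayName": "App Consent Policy Manager",  # hypothetical name
    "description": "Create, read, update, and delete app consent policies.",
    "isEnabled": True,
    "rolePermissions": [{"allowedResourceActions": ALLOWED_ACTIONS}],
}

payload = json.dumps(role_definition)
```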
active-directory | Custom Device Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-device-permissions.md | The following permission is available to update tenant-wide device registration > [!div class="mx-tableFixed"] > | Permission | Description | > | - | -- |-> | microsoft.directory/devices/createdFrom/read | Read createdfrom properties of devices | +> | microsoft.directory/devices/createdFrom/read | Read created from Internet of Things (IoT) device template links | > | microsoft.directory/devices/registeredOwners/read | Read registered owners of devices | > | microsoft.directory/devices/registeredUsers/read | Read registered users of devices | > | microsoft.directory/devices/standard/read | Read basic properties on devices | > | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |-> | microsoft.directory/bitlockerKeys/metadata/read | Read bitlocker metadata on devices | +> | microsoft.directory/bitlockerKeys/metadata/read | Read bitlocker key metadata on devices | > | microsoft.directory/deviceRegistrationPolicy/standard/read | Read standard properties on device registration policies | #### Update |
active-directory | Custom Enterprise App Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-enterprise-app-permissions.md | To delegate create, read, update, and delete (CRUD) permissions for updating the > | microsoft.directory/applicationPolicies/owners/update | Update the owner property of application policies | > | microsoft.directory/applicationPolicies/policyAppliedTo/read | Read application policies applied to objects list | > | microsoft.directory/applicationPolicies/standard/read | Read standard properties of application policies |-> | microsoft.directory/servicePrincipals/allProperties/allTasks | Create and delete servicePrincipals, and read and update all properties in Azure Active Directory | +> | microsoft.directory/servicePrincipals/allProperties/allTasks | Create and delete service principals, and read and update all properties | > | microsoft.directory/servicePrincipals/allProperties/read | Read all properties (including privileged properties) on servicePrincipals | > | microsoft.directory/servicePrincipals/allProperties/update | Update all properties (including privileged properties) on servicePrincipals | > | microsoft.directory/servicePrincipals/appRoleAssignedTo/read | Read service principal role assignments | To delegate create, read, update, and delete (CRUD) permissions for updating the > | microsoft.directory/connectorGroups/allProperties/update | Update all properties of application proxy connector groups | > | microsoft.directory/connectors/create | Create application proxy connectors | > | microsoft.directory/connectors/allProperties/read | Read all properties of application proxy connectors |-> | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Start, restart, and pause application provisioning syncronization jobs | -> | microsoft.directory/servicePrincipals/synchronization/standard/read | Read provisioning settings associated with the application object | -> | 
microsoft.directory/servicePrincipals/synchronizationSchema/manage | Create and manage application provisioning syncronization jobs and schema | +> | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Start, restart, and pause application provisioning synchronization jobs | +> | microsoft.directory/servicePrincipals/synchronization/standard/read | Read provisioning settings associated with your service principal | +> | microsoft.directory/servicePrincipals/synchronizationSchema/manage | Create and manage application provisioning synchronization jobs and schema | > | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | ## Next steps |
active-directory | Configure Cmmc Level 2 Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-access-control.md | The following table provides a list of practice statement and objectives, and Az | AC.L2-3.1.9<br><br>**Practice statement:** Provide privacy and security notices consistent with applicable CUI rules.<br><br>**Objectives:**<br>Determine if:<br>[a.] privacy and security notices required by CUI-specified rules are identified, consistent, and associated with the specific CUI category; and<br>[b.] privacy and security notices are displayed. | With Azure AD, you can deliver notification or banner messages for all apps that require and record acknowledgment before granting access. You can granularly target these terms of use policies to specific users (Member or Guest). You can also customize them per application via conditional access policies.<br><br>**Conditional access** <br>[What is conditional access in Azure AD?](../conditional-access/overview.md)<br><br>**Terms of use**<br>[Azure Active Directory terms of use](../conditional-access/terms-of-use.md)<br>[View report of who has accepted and declined](../conditional-access/terms-of-use.md) | | AC.L2-3.1.10<br><br>**Practice statement:** Use session lock with pattern-hiding displays to prevent access and viewing of data after a period of inactivity.<br><br>**Objectives:**<br>Determine if:<br>[a.] the period of inactivity after which the system initiates a session lock is defined;<br>[b.] access to the system and viewing of data is prevented by initiating a session lock after the defined period of inactivity; and<br>[c.] previously visible information is concealed via a pattern-hiding display after the defined period of inactivity. | Implement device lock by using a conditional access policy to restrict access to compliant or hybrid Azure AD joined devices. 
Configure policy settings on the device to enforce device lock at the OS level with MDM solutions such as Intune. Endpoint Manager or group policy objects can also be considered in hybrid deployments. For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[User sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md)<br><br>Configure devices for maximum minutes of inactivity until the screen locks ([Android](/mem/intune/configuration/device-restrictions-android), [iOS](/mem/intune/configuration/device-restrictions-ios), [Windows 10](/mem/intune/configuration/device-restrictions-windows-10)).| | AC.L2-3.1.11<br><br>**Practice statement:** Terminate (automatically) a user session after a defined condition.<br><br>**Objectives:**<br>Determine if:<br>[a.] conditions requiring a user session to terminate are defined; and<br>[b.] a user session is automatically terminated after any of the defined conditions occur. | Enable Continuous Access Evaluation (CAE) for all supported applications. For applications that don't support CAE, or for conditions not applicable to CAE, implement policies in Microsoft Defender for Cloud Apps to automatically terminate sessions when conditions occur. Additionally, configure Azure Active Directory Identity Protection to evaluate user and sign-in risk. 
Use conditional access with Identity protection to allow users to automatically remediate risk.<br>[Continuous access evaluation in Azure AD](../conditional-access/concept-continuous-access-evaluation.md)<br>[Control cloud app usage by creating policies](/defender-cloud-apps/control-cloud-apps-with-policies)<br>[What is Azure Active Directory Identity Protection?](../identity-protection/overview-identity-protection.md)-|AC.L2-3.1.12<br><br>**Practice statement:** Monitor and control remote access sessions.<br><br>**Objectives:**<br>Determine if:<br>[a.] remote access sessions are permitted;<br>[b.] the types of permitted remote access are identified;<br>[c.] remote access sessions are controlled; and<br>[d.] remote access sessions are monitored. | In today's world, users access cloud-based applications almost exclusively remotely from unknown or untrusted networks. It's critical to adopt zero trust principles to secure this pattern of access. To meet these control requirements in a modern cloud world, we must verify each access request explicitly, implement least privilege, and assume breach.<br><br>Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps (MDCA).
Configure MDCA to control and monitor all sessions.<br>[Zero Trust Deployment Guide for Microsoft Azure Active Directory](/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/)<br>[Location condition in Azure Active Directory Conditional Access](../conditional-access/location-condition.md)<br>[Deploy Cloud App Security Conditional Access App Control for Azure AD apps](/cloud-app-security/proxy-deployment-aad.md)<br>[What is Microsoft Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security.md)<br>[Monitor alerts raised in Microsoft Defender for Cloud Apps](/cloud-app-security/monitor-alerts.md) | +|AC.L2-3.1.12<br><br>**Practice statement:** Monitor and control remote access sessions.<br><br>**Objectives:**<br>Determine if:<br>[a.] remote access sessions are permitted;<br>[b.] the types of permitted remote access are identified;<br>[c.] remote access sessions are controlled; and<br>[d.] remote access sessions are monitored. | In today's world, users access cloud-based applications almost exclusively remotely from unknown or untrusted networks. It's critical to adopt zero trust principles to secure this pattern of access. To meet these control requirements in a modern cloud world, we must verify each access request explicitly, implement least privilege, and assume breach.<br><br>Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps.
Configure Defender for Cloud Apps to control and monitor all sessions.<br>[Zero Trust Deployment Guide for Microsoft Azure Active Directory](/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/)<br>[Location condition in Azure Active Directory Conditional Access](../conditional-access/location-condition.md)<br>[Deploy Cloud App Security Conditional Access App Control for Azure AD apps](/cloud-app-security/proxy-deployment-aad.md)<br>[What is Microsoft Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security.md)<br>[Monitor alerts raised in Microsoft Defender for Cloud Apps](/cloud-app-security/monitor-alerts.md) | | AC.L2-3.1.13<br><br>**Practice statement:** Employ cryptographic mechanisms to protect the confidentiality of remote access sessions.<br><br>**Objectives:**<br>Determine if:<br>[a.] cryptographic mechanisms to protect the confidentiality of remote access sessions are identified; and<br>[b.] cryptographic mechanisms to protect the confidentiality of remote access sessions are implemented. | All Azure AD customer-facing web services are secured with the Transport Layer Security (TLS) protocol and are implemented using FIPS-validated cryptography.<br>[Azure Active Directory Data Security Considerations (microsoft.com)](https://azure.microsoft.com/resources/azure-active-directory-data-security-considerations/) |-| AC.L2-3.1.14<br><br>**Practice statement:** Route remote access via managed access control points.<br><br>**Objectives:**<br>Determine if:<br>[a.] managed access control points are identified and implemented; and<br>[b.] remote access is routed through managed network access control points. | Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps (MDCA). Configure MDCA to control and monitor all sessions. 
Secure devices used by privileged accounts as part of the privileged access story.<br>[Location condition in Azure Active Directory Conditional Access](../conditional-access/location-condition.md)<br>[Session controls in Conditional Access policy](../conditional-access/concept-conditional-access-session.md)<br>[Securing privileged access overview](/security/compass/overview.md) | +| AC.L2-3.1.14<br><br>**Practice statement:** Route remote access via managed access control points.<br><br>**Objectives:**<br>Determine if:<br>[a.] managed access control points are identified and implemented; and<br>[b.] remote access is routed through managed network access control points. | Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps. Configure Defender for Cloud Apps to control and monitor all sessions. Secure devices used by privileged accounts as part of the privileged access story.<br>[Location condition in Azure Active Directory Conditional Access](../conditional-access/location-condition.md)<br>[Session controls in Conditional Access policy](../conditional-access/concept-conditional-access-session.md)<br>[Securing privileged access overview](/security/compass/overview.md) | | AC.L2-3.1.15<br><br>**Practice statement:** Authorize remote execution of privileged commands and remote access to security-relevant information.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged commands authorized for remote execution are identified;<br>[b.] security-relevant information authorized to be accessed remotely is identified;<br>[c.] the execution of the identified privileged commands via remote access is authorized; and<br>[d.] access to the identified security-relevant information via remote access is authorized. | Conditional Access is the Zero Trust control plane to target policies for access to your apps when combined with authentication context. 
You can apply different policies in those apps. Secure devices used by privileged accounts as part of the privileged access story. Configure conditional access policies to require the use of these secured devices by privileged users when performing privileged commands.<br>[Cloud apps, actions, and authentication context in Conditional Access policy](../conditional-access/concept-conditional-access-cloud-apps.md)<br>[Securing privileged access overview](/security/compass/overview.md)<br>[Filter for devices as a condition in Conditional Access policy](../conditional-access/concept-condition-filters-for-devices.md) | | AC.L2-3.1.18<br><br>**Practice statement:** Control connection of mobile devices.<br><br>**Objectives:**<br>Determine if:<br>[a.] mobile devices that process, store, or transmit CUI are identified;<br>[b.] mobile device connections are authorized; and<br>[c.] mobile device connections are monitored and logged. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to enforce mobile device configuration and connection profile. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management.md) | | AC.L2-3.1.19<br><br>**Practice statement:** Encrypt CUI on mobile devices and mobile computing platforms.<br><br>**Objectives:**<br>Determine if:<br>[a.] mobile devices and mobile computing platforms that process, store, or transmit CUI are identified; and<br>[b.]
encryption is employed to protect CUI on identified mobile devices and mobile computing platforms. | **Managed Device**<br>Configure conditional access policies to enforce a compliant or hybrid Azure AD joined device, and to ensure managed devices are configured appropriately via a device management solution to encrypt CUI.<br><br>**Unmanaged Device**<br>Configure conditional access policies to require app protection policies.<br>[Grant controls in Conditional Access policy - Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require app protection policy](../conditional-access/concept-conditional-access-grant.md) | The following table provides a list of practice statement and objectives, and Az * [Configure Azure Active Directory for CMMC compliance](configure-azure-active-directory-for-cmmc-compliance.md) * [Configure CMMC Level 1 controls](configure-cmmc-level-1-controls.md) * [Configure CMMC Level 2 Identification and Authentication (IA) controls](configure-cmmc-level-2-identification-and-authentication.md)-* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md) +* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md) |
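Several of the access-control rows above are implemented as Conditional Access grant controls (for example, requiring a compliant or hybrid Azure AD joined device for AC.L2-3.1.10). As a sketch, the JSON body such a policy takes when created via Microsoft Graph (`POST /identity/conditionalAccess/policies`) might look like the following; the display name, report-only state, and all-users/all-apps scope are illustrative assumptions, not values from the source articles:

```python
import json

# Hedged sketch: build the payload for a Conditional Access policy that
# requires a compliant OR hybrid Azure AD joined ("domainJoinedDevice")
# device. The function and its defaults are hypothetical helpers.
def build_device_lock_policy(display_name="Require compliant or hybrid joined device"):
    return {
        "displayName": display_name,
        # Report-only lets you pilot the policy before enforcing it.
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "users": {"includeUsers": ["All"]},
            "applications": {"includeApplications": ["All"]},
        },
        "grantControls": {
            # "OR" means satisfying either built-in control grants access.
            "operator": "OR",
            "builtInControls": ["compliantDevice", "domainJoinedDevice"],
        },
    }

policy = build_device_lock_policy()
print(json.dumps(policy, indent=2))
```

You would send this body with an authenticated Graph client; scoping `includeApplications` to specific app IDs narrows the policy to CUI-handling apps.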
active-directory | Configure Cmmc Level 2 Identification And Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-identification-and-authentication.md | The following table provides a list of practice statement and objectives, and Az | IA.L2-3.5.3<br><br>**Practice statement:** Use multifactor authentication for local and network access to privileged accounts and for network access to non-privileged accounts. <br><br>**Objectives:**<br>Determine if:<br>[a.] privileged accounts are identified;<br>[b.] multifactor authentication is implemented for local access to privileged accounts;<br>[c.] multifactor authentication is implemented for network access to privileged accounts; and<br>[d.] multifactor authentication is implemented for network access to non-privileged accounts. | The following items are definitions for the terms used for this control area:<li>**Local Access** - Access to an organizational information system by a user (or process acting on behalf of a user) communicating through a direct connection without the use of a network.<li>**Network Access** - Access to an information system by a user (or a process acting on behalf of a user) communicating through a network (for example, local area network, wide area network, Internet).<li>**Privileged User** - A user that's authorized (and therefore, trusted) to perform security-relevant functions that ordinary users aren't authorized to perform.<br><br>Breaking down the previous requirement means:<li>All users require MFA for network/remote access.<li>Only privileged users require MFA for local access. If regular user accounts have administrative rights only on their computers, they're not a "privileged account" and don't require MFA for local access.<br><br> You're responsible for configuring Conditional Access to require multifactor authentication.
Enable Azure AD Authentication methods that meet AAL2 and higher.<br>[Grant controls in Conditional Access policy - Azure Active Directory](../conditional-access/concept-conditional-access-grant.md)<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](./nist-overview.md)<br>[Authentication methods and features - Azure Active Directory](../authentication/concept-authentication-methods.md) | | IA.L2-3.5.4<br><br>**Practice statement:** Employ replay-resistant authentication mechanisms for network access to privileged and non-privileged accounts.<br><br>**Objectives:**<br>Determine if:<br>[a.] replay-resistant authentication mechanisms are implemented for network account access to privileged and non-privileged accounts. | All Azure AD Authentication methods at AAL2 and above are replay resistant.<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](./nist-overview.md) | | IA.L2-3.5.5<br><br>**Practice statement:** Prevent reuse of identifiers for a defined period.<br><br>**Objectives:**<br>Determine if:<br>[a.] a period within which identifiers can't be reused is defined; and<br>[b.] reuse of identifiers is prevented within the defined period. | All user, group, and device object globally unique identifiers (GUIDs) are guaranteed unique and non-reusable for the lifetime of the Azure AD tenant.<br>[user resource type - Microsoft Graph v1.0](/graph/api/resources/user?view=graph-rest-1.0&preserve-view=true)<br>[group resource type - Microsoft Graph v1.0](/graph/api/resources/group?view=graph-rest-1.0&preserve-view=true)<br>[device resource type - Microsoft Graph v1.0](/graph/api/resources/device?view=graph-rest-1.0&preserve-view=true) |-| IA.L2-3.5.6<br><br>**Practice statement:** Disable identifiers after a defined period of inactivity.<br><br>**Objectives:**<br>Determine if:<br>[a.] a period of inactivity after which an identifier is disabled is defined; and<br>[b.]
identifiers are disabled after the defined period of inactivity. | Implement account management automation with Microsoft Graph and Azure AD PowerShell. Use Microsoft Graph to monitor sign-in activity and Azure AD PowerShell to take action on accounts within the required time frame.<br><br>**Determine inactivity**<br>[Manage inactive user accounts in Azure AD](../reports-monitoring/howto-manage-inactive-user-accounts.md)<br>[Manage stale devices in Azure AD](../devices/manage-stale-devices.md)<br><br>**Remove or disable accounts**<br>[Working with users in Microsoft Graph](/graph/api/resources/users.md)<br>[Get a user](/graph/api/user-get?tabs=http)<br>[Update user](/graph/api/user-update?tabs=http)<br>[Delete a user](/graph/api/user-delete?tabs=http)<br><br>**Work with devices in Microsoft Graph**<br>[Get device](/graph/api/device-get?tabs=http)<br>[Update device](/graph/api/device-update?tabs=http)<br>[Delete device](/graph/api/device-delete?tabs=http)<br><br>**[Use Azure AD PowerShell](/powershell/module/azuread/)**<br>[Get-AzureADUser](/powershell/module/azuread/get-azureaduser.md)<br>[Set-AzureADUser](/powershell/module/azuread/set-azureaduser.md)<br>[Get-AzureADDevice](/powershell/module/azuread/get-azureaddevice.md)<br>[Set-AzureADDevice](/powershell/module/azuread/set-azureaddevice.md) | +| IA.L2-3.5.6<br><br>**Practice statement:** Disable identifiers after a defined period of inactivity.<br><br>**Objectives:**<br>Determine if:<br>[a.] a period of inactivity after which an identifier is disabled is defined; and<br>[b.] identifiers are disabled after the defined period of inactivity. | Implement account management automation with Microsoft Graph and Azure AD PowerShell. 
Use Microsoft Graph to monitor sign-in activity and Azure AD PowerShell to take action on accounts within the required time frame.<br><br>**Determine inactivity**<br>[Manage inactive user accounts in Azure AD](../reports-monitoring/howto-manage-inactive-user-accounts.md)<br>[Manage stale devices in Azure AD](../devices/manage-stale-devices.md)<br><br>**Remove or disable accounts**<br>[Working with users in Microsoft Graph](/graph/api/resources/users.md)<br>[Get a user](/graph/api/user-get?tabs=http)<br>[Update user](/graph/api/user-update?tabs=http)<br>[Delete a user](/graph/api/user-delete?tabs=http)<br><br>**Work with devices in Microsoft Graph**<br>[Get device](/graph/api/device-get?tabs=http)<br>[Update device](/graph/api/device-update?tabs=http)<br>[Delete device](/graph/api/device-delete?tabs=http)<br><br>**[Use Azure AD PowerShell](/powershell/module/azuread/)**<br>[Get-AzureADUser](/powershell/module/azuread/get-azureaduser.md)<br>[Set-AzureADUser](/powershell/module/azuread/set-azureaduser)<br>[Get-AzureADDevice](/powershell/module/azuread/get-azureaddevice.md)<br>[Set-AzureADDevice](/powershell/module/azuread/set-azureaddevice.md) | | IA.L2-3.5.7<br><br>**Practice statement:**<br><br>**Objectives:** Enforce a minimum password complexity and change of characters when new passwords are created.<br>Determine if:<br>[a.] password complexity requirements are defined;<br>[b.] password change of character requirements are defined;<br>[c.] minimum password complexity requirements as defined are enforced when new passwords are created; and<br>[d.] minimum password change of character requirements as defined are enforced when new passwords are created.<br><br>IA.L2-3.5.8<br><br>**Practice statement:** Prohibit password reuse for a specified number of generations.<br><br>**Objectives:**<br>Determine if:<br>[a.] the number of generations during which a password cannot be reused is specified; and<br>[b.] 
reuse of passwords is prohibited during the specified number of generations. | We **strongly encourage** passwordless strategies. This control is only applicable to password authenticators, so removing passwords as an available authenticator renders this control not applicable.<br><br>Per NIST SP 800-63 B Section 5.1.1: Maintain a list of commonly used, expected, or compromised passwords.<br><br>With Azure AD password protection, default global banned password lists are automatically applied to all users in an Azure AD tenant. To support your business and security needs, you can define entries in a custom banned password list. When users change or reset their passwords, these banned password lists are checked to enforce the use of strong passwords.<br>For customers that require strict password character change, password reuse, and complexity requirements, use hybrid accounts configured with Password-Hash-Sync. This action ensures the passwords synchronized to Azure AD inherit the restrictions configured in Active Directory password policies. Further protect on-premises passwords by configuring on-premises Azure AD Password Protection for Active Directory Domain Services.<br>[NIST Special Publication 800-63 B](https://pages.nist.gov/800-63-3/sp800-63b.html)<br>[NIST Special Publication 800-53 Revision 5 (IA-5 - Control enhancement (1)](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf)<br>[Eliminate bad passwords using Azure AD password protection](../authentication/concept-password-ban-bad.md)<br>[What is password hash synchronization with Azure AD?](../hybrid/whatis-phs.md) |
| An Azure AD user's initial password is a temporary, single-use password that must be changed to a permanent password immediately after its first successful use. Microsoft strongly encourages the adoption of passwordless authentication methods. Users can bootstrap passwordless authentication methods using a Temporary Access Pass (TAP). A TAP is a time- and use-limited passcode issued by an admin that satisfies strong authentication requirements. Use of passwordless authentication along with a time- and use-limited TAP completely eliminates the use of passwords (and their reuse).<br>[Add or delete users - Azure Active Directory](../fundamentals/add-users-azure-active-directory.md)<br>[Configure a Temporary Access Pass in Azure AD to register Passwordless authentication methods](../authentication/howto-authentication-temporary-access-pass.md)<br>[Passwordless authentication](/security/business/solutions/passwordless-authentication?ef_id=369464fc2ba818d0bd6507de2cde3d58:G:s&OCID=AIDcmmdamuj0pc_SEM_369464fc2ba818d0bd6507de2cde3d58:G:s&msclkid=369464fc2ba818d0bd6507de2cde3d58) | | IA.L2-3.5.10<br><br>**Practice statement:** Store and transmit only cryptographically protected passwords.<br><br>**Objectives:**<br>Determine if:<br>[a.] passwords are cryptographically protected in storage; and<br>[b.] passwords are cryptographically protected in transit. | **Secret Encryption at Rest**:<br>In addition to disk level encryption, when at rest, secrets stored in the directory are encrypted using the Distributed Key Manager (DKM). The encryption keys are stored in Azure AD core store and in turn are encrypted with a scale unit key. The key is stored in a container that is protected with directory ACLs, for the highest privileged users and specific services. The symmetric key is typically rotated every six months.
Access to the environment is further protected with operational controls and physical security.<br><br>**Encryption in Transit**:<br>To assure data security, Directory Data in Azure AD is signed and encrypted while in transit between data centers within a scale unit. The data is encrypted and unencrypted by the Azure AD core store tier, which resides inside secured server hosting areas of the associated Microsoft data centers.<br><br>Customer-facing web services are secured with the Transport Layer Security (TLS) protocol.<br>For more information, [download](https://azure.microsoft.com/resources/azure-active-directory-data-security-considerations/) *Data Protection Considerations - Data Security*. On page 15, there are more details.<br>[Demystifying Password Hash Sync (microsoft.com)](https://www.microsoft.com/security/blog/2019/05/30/demystifying-password-hash-sync/)<br>[Azure Active Directory Data Security Considerations](https://aka.ms/aaddatawhitepaper) | The following table provides a list of practice statement and objectives, and Az * [Configure Azure Active Directory for CMMC compliance](configure-azure-active-directory-for-cmmc-compliance.md) * [Configure CMMC Level 1 controls](configure-cmmc-level-1-controls.md) * [Configure CMMC Level 2 Access Control (AC) controls](configure-cmmc-level-2-access-control.md)-* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md) +* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md) |
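Row IA.L2-3.5.6 above describes using Microsoft Graph sign-in activity to find and disable identifiers idle past a defined period. A minimal sketch of that inactivity check, assuming user records shaped like the Graph `signInActivity` payload; the 90-day window and the sample users are illustrative assumptions, not CMMC-mandated values:

```python
from datetime import datetime, timedelta, timezone

# Hedged sketch: flag accounts whose lastSignInDateTime is older than a
# defined inactivity period. Disabling/removal would then be done via
# Graph (Update user) or Azure AD PowerShell (Set-AzureADUser).
def find_inactive_users(users, days, now=None):
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    inactive = []
    for user in users:
        last = user.get("signInActivity", {}).get("lastSignInDateTime")
        # Users who never signed in have no timestamp; treat them as inactive.
        if last is None or datetime.fromisoformat(last.replace("Z", "+00:00")) < cutoff:
            inactive.append(user["userPrincipalName"])
    return inactive

users = [
    {"userPrincipalName": "alice@contoso.com",
     "signInActivity": {"lastSignInDateTime": "2023-01-02T10:00:00Z"}},
    {"userPrincipalName": "bob@contoso.com",
     "signInActivity": {"lastSignInDateTime": "2022-06-01T08:30:00Z"}},
]
now = datetime(2023, 3, 1, tzinfo=timezone.utc)
print(find_inactive_users(users, days=90, now=now))  # → ['bob@contoso.com']
```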
active-directory | How To Use Quickstart Verifiedemployee | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-verifiedemployee.md | If you already have a test user, you can skip this section. If you want to creat 1. Find the new user, select **view profile** and select **Edit**. Update the following attributes, then select Save: - Job Title - Email (in the Contact Info section. Doesn't have to be an existing email address) - - Photo (select JPG/PNG file with low, thumbnail-like, resolution) + - Photo (select JPG file with low, thumbnail-like, resolution. Maximum size is 2MB.) 1. Open a new, private, browser window and navigate to a page like [https://myapps.microsoft.com/](https://myapps.microsoft.com/) and sign in with your new user. The user name would be something like meganb@yourtenant.onmicrosoft.com. You'll be prompted to change your password. ## Set up the user for Microsoft Authenticator All of the claims in the Verified employee credential come from attributes in th | `jobTitle` | `jobTitle` | The user's job title. This attribute doesn't have a value by default in the user's profile. If the user's profile has no value specified, there's no `jobTitle` claim in the issued VC. | | `preferredLanguage` | `preferredLanguage` | Should follow [ISO 639-1](https://en.wikipedia.org/wiki/ISO_639-1) and contain a value like `en-us`. There's no default value specified. If there's no value, no claim is included in the issued VC. | | `mail` | `mail` | The user's email address. The `mail` value isn't the same as the UPN. It's also an attribute that doesn't have a value by default. -| `photo` | `photo` | The uploaded photo for the user. The image type (JPEG, PNG, etc.) depends on the uploaded image type. When presenting the photo claim to a verifier, the photo claim is in the UrlEncode(Base64Encode(photo)) format. To use the photo, the verifier application has to Base64Decode(UrlDecode(photo)).
+| `photo` | `photo` | The uploaded photo for the user. The image type should be JPEG and the maximum size is 2MB. When presenting the photo claim to a verifier, the photo claim is in the UrlEncode(Base64Encode(photo)) format. To use the photo, the verifier application has to Base64Decode(UrlDecode(photo)). See full Azure AD user profile [properties reference](/graph/api/resources/user). The configuration file depends on the sample in use. - **python** - [config.json](https://github.com/Azure-Samples/active-directory-verifiable-credentials-python/blob/main/1-python-api-idtokenhint/config.json) - **Java** - values are set as environment variables in [run.cmd](https://github.com/Azure-Samples/active-directory-verifiable-credentials-jav/docker-run.sh when using docker. +## Remarks ++>[!NOTE] +> This schema is fixed; adding or removing claims in the schema isn't supported. The attestation flow for directory-based claims is also fixed; changing it into a custom credential with the ID token hint attestation flow, for example, isn't supported. + ## Next steps Learn [how to customize your verifiable credentials](credential-design.md). |
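The `photo` claim row above specifies the UrlEncode(Base64Encode(photo)) format the issuer emits and the Base64Decode(UrlDecode(photo)) step a verifier performs. A small round-trip sketch; the byte string stands in for real JPEG data:

```python
import base64
from urllib.parse import quote, unquote

# Issuer side: percent-encode the Base64 form of the photo bytes.
def encode_photo_claim(photo_bytes: bytes) -> str:
    return quote(base64.b64encode(photo_bytes).decode("ascii"))

# Verifier side: reverse the two steps to recover the image bytes.
def decode_photo_claim(claim: str) -> bytes:
    return base64.b64decode(unquote(claim))

photo = b"\xff\xd8\xff\xe0fake-jpeg-bytes"  # JPEG magic prefix + filler, not a real image
claim = encode_photo_claim(photo)
assert decode_photo_claim(claim) == photo
```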
aks | Azure Blob Csi | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-blob-csi.md | To have a storage volume persist for your workload, you can use a StatefulSet. T [azure-csi-blob-storage-provision]: azure-csi-blob-storage-provision.md [azure-disk-csi-driver]: azure-disk-csi.md [azure-files-csi-driver]: azure-files-csi.md-[install-azure-cli]: /cli/azure/install_azure_cli +[install-azure-cli]: /cli/azure/install-azure-cli |
aks | Concepts Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-network.md | For more information on core Kubernetes and AKS concepts, see the following arti [nginx-ingress]: ingress-basic.md [ip-preservation]: https://techcommunity.microsoft.com/t5/fasttrack-for-azure/how-client-source-ip-preservation-works-for-loadbalancer/ba-p/3033722#:~:text=Enable%20Client%20source%20IP%20preservation%201%20Edit%20loadbalancer,is%20the%20same%20as%20the%20source%20IP%20%28srjumpbox%29. [nsg-traffic]: ../virtual-network/network-security-group-how-it-works.md-[azure-cni-aks]: /configure-azure-cni.md -[kubenet-aks]: /configure-kubenet.md +[azure-cni-aks]: configure-azure-cni.md +[kubenet-aks]: configure-kubenet.md |
aks | Gpu Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md | For information on using Azure Kubernetes Service with Azure Machine Learning, s [azureml-deploy]: ../machine-learning/how-to-deploy-managed-online-endpoints.md [azureml-triton]: ../machine-learning/how-to-deploy-with-triton.md [aks-container-insights]: monitor-aks.md#container-insights-[advanced-scheduler-aks]: /aks/operator-best-practices-advanced-scheduler.md +[advanced-scheduler-aks]: operator-best-practices-advanced-scheduler.md [az-provider-register]: /cli/azure/provider#az-provider-register [az-feature-register]: /cli/azure/feature#az-feature-register-[az-feature-show]: /cli/azure/feature#az-feature-show +[az-feature-show]: /cli/azure/feature#az-feature-show |
aks | Internal Lb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md | internal-app LoadBalancer 10.0.248.59 10.240.0.7 80:30555/TCP 2m ## Specify an IP address -If you want to use a specific IP address with the internal load balancer, add the *loadBalancerIP* property to the load balancer YAML manifest. In this scenario, the specified IP address must reside in the same subnet as the AKS cluster, but it can't already be assigned to a resource. For example, you shouldn't use an IP address in the range designated for the Kubernetes subnet within the AKS cluster. --> [!NOTE] -> If you initially deploy the service without specifying an IP address and later you update its configuration to use a dynamically assigned IP address using the *loadBalancerIP* property, the IP address still shows as dynamically assigned. +When you specify an IP address for the load balancer, the specified IP address must reside in the same subnet as the AKS cluster, but it can't already be assigned to a resource. For example, you shouldn't use an IP address in the range designated for the Kubernetes subnet within the AKS cluster. For more information on subnets, see [Add a node pool with a unique subnet][unique-subnet]. -```yaml -apiVersion: v1 -kind: Service -metadata: - name: internal-app - annotations: - service.beta.kubernetes.io/azure-load-balancer-internal: "true" -spec: - type: LoadBalancer - loadBalancerIP: 10.240.0.25 - ports: - - port: 80 - selector: - app: internal-app -``` +If you want to use a specific IP address with the load balancer, there are two ways: ++> [!IMPORTANT] +> Adding the *LoadBalancerIP* property to the load balancer YAML manifest is being deprecated following [upstream Kubernetes](https://github.com/kubernetes/kubernetes/pull/107235). While current usage remains the same and existing services are expected to work without modification, we **highly recommend setting service annotations** instead.
++* **Set service annotations**: Use `service.beta.kubernetes.io/azure-load-balancer-ipv4` for an IPv4 address and `service.beta.kubernetes.io/azure-load-balancer-ipv6` for an IPv6 address. + + ```yaml + apiVersion: v1 + kind: Service + metadata: + name: internal-app + annotations: + service.beta.kubernetes.io/azure-load-balancer-ipv4: 10.240.0.25 + service.beta.kubernetes.io/azure-load-balancer-internal: "true" + spec: + type: LoadBalancer + ports: + - port: 80 + selector: + app: internal-app + ``` ++* **Add the *LoadBalancerIP* property to the load balancer YAML manifest**: Add the *Service.Spec.LoadBalancerIP* property to the load balancer YAML manifest. This field is being deprecated following [upstream Kubernetes](https://github.com/kubernetes/kubernetes/pull/107235), and it can't support dual-stack. Current usage remains the same and existing services are expected to work without modification. ++ ```yaml + apiVersion: v1 + kind: Service + metadata: + name: internal-app + annotations: + service.beta.kubernetes.io/azure-load-balancer-internal: "true" + spec: + type: LoadBalancer + loadBalancerIP: 10.240.0.25 + ports: + - port: 80 + selector: + app: internal-app + ``` When you view the service details, the IP address in the *EXTERNAL-IP* column should reflect your specified IP address. |
aks | Limit Egress Traffic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md | The following FQDN / application rules are optional but recommended for AKS clus |--||-| | **`security.ubuntu.com`, `azure.archive.ubuntu.com`, `changelogs.ubuntu.com`** | **`HTTP:80`** | This address lets the Linux cluster nodes download the required security patches and updates. | -If you choose to block/not allow these FQDNs, the nodes will only receive OS updates when you do a [node image upgrade](node-image-upgrade.md) or [cluster upgrade](upgrade-cluster.md). +If you choose to block/not allow these FQDNs, the nodes will only receive OS updates when you do a [node image upgrade](node-image-upgrade.md) or [cluster upgrade](upgrade-cluster.md). Keep in mind that Node Image Upgrades also come with updated packages including security fixes. ## GPU enabled AKS clusters The following FQDN / application rules are required for using Windows Server bas | **`onegetcdn.azureedge.net, go.microsoft.com`** | **`HTTPS:443`** | To install windows-related binaries | | **`*.mp.microsoft.com, www.msftconnecttest.com, ctldl.windowsupdate.com`** | **`HTTP:80`** | To install windows-related binaries | +If you choose to block/not allow these FQDNs, the nodes will only receive OS updates when you do a [node image upgrade](node-image-upgrade.md) or [cluster upgrade](upgrade-cluster.md). Keep in mind that Node Image Upgrades also come with updated packages including security fixes. ++ ## AKS addons and integrations ### Microsoft Defender for Containers |
aks | Load Balancer Standard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md | spec: app: public-app ``` +### Specify the load balancer IP address ++If you want to use a specific IP address with the load balancer, there are two ways: ++> [!IMPORTANT] +> Adding the *LoadBalancerIP* property to the load balancer YAML manifest is being deprecated following [upstream Kubernetes](https://github.com/kubernetes/kubernetes/pull/107235). While current usage remains the same and existing services are expected to work without modification, we **highly recommend setting service annotations** instead. ++* **Set service annotations**: Use `service.beta.kubernetes.io/azure-load-balancer-ipv4` for an IPv4 address and `service.beta.kubernetes.io/azure-load-balancer-ipv6` for an IPv6 address. +* **Add the *LoadBalancerIP* property to the load balancer YAML manifest**: Add the *Service.Spec.LoadBalancerIP* property to the load balancer YAML manifest. This field is being deprecated following [upstream Kubernetes](https://github.com/kubernetes/kubernetes/pull/107235), and it can't support dual-stack. Current usage remains the same and existing services are expected to work without modification. ++### Deploy the service manifest + Deploy the public service manifest using [`kubectl apply`][kubectl-apply] and specify the name of your YAML manifest. ```azurecli-interactive |
aks | Operator Best Practices Advanced Scheduler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-advanced-scheduler.md | This article focused on advanced Kubernetes scheduler features. For more informa [aks-best-practices-identity]: operator-best-practices-identity.md [use-multiple-node-pools]: use-multiple-node-pools.md [taint-node-pool]: use-multiple-node-pools.md#specify-a-taint-label-or-tag-for-a-node-pool-[use-gpus-aks]: /aks/gpu-cluster.md +[use-gpus-aks]: gpu-cluster.md |
api-management | Api Management Configuration Repository Git | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-configuration-repository-git.md | These files can be created, deleted, edited, and managed on your local file syst > * [Subscriptions](/rest/api/apimanagement/current-ga/subscription) > * Named values > * Developer portal entities other than styles and templates+> * Policy Fragments > ### Root api-management folder |
app-service | Configure Authentication Provider Aad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-aad.md | Title: Configure Azure AD authentication description: Learn how to configure Azure Active Directory authentication as an identity provider for your App Service or Azure Functions app. ms.assetid: 6ec6a46c-bce4-47aa-b8a3-e133baef22eb Previously updated : 10/26/2021 Last updated : 01/31/2023 # Configure your App Service or Azure Functions app to use Azure AD login +Select another authentication provider to jump to it. + [!INCLUDE [app-service-mobile-selector-authentication](../../includes/app-service-mobile-selector-authentication.md)] This article shows you how to configure authentication for Azure App Service or Azure Functions so that your app signs in users with the [Microsoft identity platform](../active-directory/develop/v2-overview.md) (Azure AD) as the authentication provider. The App Service Authentication feature can automatically create an app registrat ## <a name="express"> </a> Option 1: Create a new app registration automatically -This option is designed to make enabling authentication simple and requires just a few clicks. +Use this option unless you need to create an app registration separately. It makes enabling authentication simple and requires just a few clicks. You can customize the app registration in Azure AD once it's created. 1. Sign in to the [Azure portal] and navigate to your app. 1. Select **Authentication** in the menu on the left. Click **Add identity provider**. This option is designed to make enabling authentication simple and requires just These options determine how your application responds to unauthenticated requests, and the default selections will redirect all requests to log in with this new provider. 
You can customize this behavior now or adjust these settings later from the main **Authentication** screen by choosing **Edit** next to **Authentication settings**. To learn more about these options, see [Authentication flow](overview-authentication-authorization.md#authentication-flow). -1. (Optional) Click **Next: Permissions** and add any scopes needed by the application. These will be added to the app registration, but you can also change them later. +1. (Optional) Click **Next: Permissions** and add any Microsoft Graph permissions needed by the application. These will be added to the app registration, but you can also change them later. 1. Click **Add**. You're now ready to use the Microsoft identity platform for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration. For an example of configuring Azure AD login for a web app that accesses Azure S ## <a name="advanced"> </a>Option 2: Use an existing registration created separately -You can also manually register your application for the Microsoft identity platform, customizing the registration and configuring App Service Authentication with the registration details. This is useful, for example, if you want to use an app registration from a different Azure AD tenant than the one your application is in. +You can configure App Service authentication to use an existing app registration. The most common situations for using an existing app registration are: ++- Your account doesn't have permissions to create app registrations in your Azure AD tenant. +- You want to use an app registration from a different Azure AD tenant than the one your app is in. +- The option to create a new registration is not available for government clouds.
-### <a name="register"> </a>Create an app registration in Azure AD for your App Service app +#### <a name="register"> </a>Step 1: Create an app registration in Azure AD for your App Service app -First, you will create your app registration. As you do so, collect the following information which you will need later when you configure the authentication in the App Service app: +During creation of the app registration, collect the following information which you will need later when you configure the authentication in the App Service app: - Client ID - Tenant ID First, you will create your app registration. As you do so, collect the followin To register the app, perform the following steps: 1. Sign in to the [Azure portal], search for and select **App Services**, and then select your app. Note your app's **URL**. You'll use it to configure your Azure Active Directory app registration.-1. From the portal menu, select **Azure Active Directory**, then go to the **App registrations** tab and select **New registration**. +1. From the portal menu, select **Azure Active Directory**. +1. From the left navigation, select **App registrations** > **New registration**. 1. In the **Register an application** page, enter a **Name** for your app registration.-1. In **Redirect URI**, select **Web** and type `<app-url>/.auth/login/aad/callback`. For example, `https://contoso.azurewebsites.net/.auth/login/aad/callback`. +1. In **Supported account types**, select the account type that can access this application. +1. In the **Redirect URIs** section, select **Web** for platform and type `<app-url>/.auth/login/aad/callback`. For example, `https://contoso.azurewebsites.net/.auth/login/aad/callback`. 1. Select **Register**. 1. After the app registration is created, copy the **Application (client) ID** and the **Directory (tenant) ID** for later.-1. Select **Authentication**. 
Under **Implicit grant and hybrid flows**, enable **ID tokens** to allow OpenID Connect user sign-ins from App Service. Select **Save**. -1. (Optional) Select **Branding**. In **Home page URL**, enter the URL of your App Service app and select **Save**. -1. Select **Expose an API**, and click **Set** next to "Application ID URI". This value uniquely identifies the application when it is used as a resource, allowing tokens to be requested that grant access. It is used as a prefix for scopes you create. +1. Under **Implicit grant and hybrid flows**, enable **ID tokens** to allow OpenID Connect user sign-ins from App Service. Select **Save**. +1. (Optional) From the left navigation, select **Branding & properties**. In **Home page URL**, enter the URL of your App Service app and select **Save**. +1. From the left navigation, select **Expose an API** > **Set** > **Save**. This value uniquely identifies the application when it is used as a resource, allowing tokens to be requested that grant access. It is used as a prefix for scopes you create. For a single-tenant app, you can use the default value, which is in the form `api://<application-client-id>`. You can also specify a more readable URI like `https://contoso.com/api` based on one of the verified domains for your tenant. For a multi-tenant app, you must provide a custom URI. To learn more about accepted formats for App ID URIs, see the [app registrations best practices reference](../active-directory/develop/security-best-practices-for-app-registration.md#application-id-uri). - The value is automatically saved. - 1. Select **Add a scope**.- 1. In **Add a scope**, the **Application ID URI** is the value you set in a previous step. Select **Save and continue**. 1. In **Scope name**, enter *user_impersonation*.+ 1. In **Who can consent**, select **Admins and users** if you want to allow users to consent to this scope. 1. 
In the text boxes, enter the consent scope name and description you want users to see on the consent page. For example, enter *Access <application-name>*. 1. Select **Add scope**.-1. (Optional) To create a client secret, select **Certificates & secrets** > **Client secrets** > **New client secret**. Enter a description and expiration and select **Add**. Copy the client secret value shown in the page. It won't be shown again. +1. (Optional) To create a client secret: + 1. From the left navigation, select **Certificates & secrets** > **Client secrets** > **New client secret**. + 1. Enter a description and expiration and select **Add**. + 1. In the **Value** field, copy the client secret value. It won't be shown again once you navigate away from this page. 1. (Optional) To add multiple **Reply URLs**, select **Authentication**. -### <a name="secrets"> </a>Enable Azure Active Directory in your App Service app +#### <a name="secrets"> </a>Step 2: Enable Azure Active Directory in your App Service app 1. Sign in to the [Azure portal] and navigate to your app.-1. Select **Authentication** in the menu on the left. Click **Add identity provider**. -1. Select **Microsoft** in the identity provider dropdown. -1. For **App registration type**, you can choose to **Pick an existing app registration in this directory** which will automatically gather the necessary app information. If your registration is from another tenant or you do not have permission to view the registration object, choose **Provide the details of an existing app registration**. For this option, you will need to fill in the following configuration details: -- |Field|Description| - |-|-| - |Application (client) ID| Use the **Application (client) ID** of the app registration. | - |Client Secret| Use the client secret you generated in the app registration. With a client secret, hybrid flow is used and the App Service will return access and refresh tokens. 
When the client secret is not set, implicit flow is used and only an ID token is returned. These tokens are sent by the provider and stored in the EasyAuth token store.| - |Issuer Url| Use `<authentication-endpoint>/<tenant-id>/v2.0`, and replace *\<authentication-endpoint>* with the [authentication endpoint for your cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints) (e.g., "https://login.microsoftonline.com" for global Azure), also replacing *\<tenant-id>* with the **Directory (tenant) ID** in which the app registration was created. This value is used to redirect users to the correct Azure AD tenant, as well as to download the appropriate metadata to determine the appropriate token signing keys and token issuer claim value for example. For applications that use Azure AD v1, omit `/v2.0` in the URL.| - |Allowed Token Audiences| The configured **Application (client) ID** is *always* implicitly considered to be an allowed audience. If this is a cloud or server app and you want to accept authentication tokens from a client App Service app (the authentication token can be retrieved in the [X-MS-TOKEN-AAD-ID-TOKEN header](configure-authentication-oauth-tokens.md#retrieve-tokens-in-app-code)), add the **Application (client) ID** of the client app here. | -- The client secret will be stored as a slot-sticky [application setting](./configure-common.md#configure-app-settings) named `MICROSOFT_PROVIDER_AUTHENTICATION_SECRET`. You can update that setting later to use [Key Vault references](./app-service-key-vault-references.md) if you wish to manage the secret in Azure Key Vault. -+1. From the left navigation, select **Authentication** > **Add identity provider** > **Microsoft**. +1. For **App registration type**, choose one of the following: + - **Pick an existing app registration in this directory**: Choose an app registration from the current tenant and automatically gather the necessary app information. 
+ - **Provide the details of an existing app registration**: Specify details for an app registration from another tenant or if your account does not have permission in the current tenant to query the registrations. For this option, you will need to fill in the following configuration details: ++ |Field|Description| + |-|-| + |Application (client) ID| Use the **Application (client) ID** of the app registration. | + |Client Secret| Use the client secret you generated in the app registration. With a client secret, hybrid flow is used and the App Service will return access and refresh tokens. When the client secret is not set, implicit flow is used and only an ID token is returned. These tokens are sent by the provider and stored in the EasyAuth token store.| + |Issuer Url| Use `<authentication-endpoint>/<tenant-id>/v2.0`, and replace *\<authentication-endpoint>* with the [authentication endpoint for your cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints) (e.g., "https://login.microsoftonline.com" for global Azure), also replacing *\<tenant-id>* with the **Directory (tenant) ID** in which the app registration was created. This value is used to redirect users to the correct Azure AD tenant, as well as to download the appropriate metadata to determine the appropriate token signing keys and token issuer claim value for example. For applications that use Azure AD v1, omit `/v2.0` in the URL.| + |Allowed Token Audiences| The configured **Application (client) ID** is *always* implicitly considered to be an allowed audience. If this is a cloud or server app and you want to accept authentication tokens from a client App Service app (the authentication token can be retrieved in the [X-MS-TOKEN-AAD-ID-TOKEN](configure-authentication-oauth-tokens.md#retrieve-tokens-in-app-code)) header, add the **Application (client) ID** of the client app here. 
| + + The client secret will be stored as a slot-sticky [application setting](./configure-common.md#configure-app-settings) named `MICROSOFT_PROVIDER_AUTHENTICATION_SECRET`. You can update that setting later to use [Key Vault references](./app-service-key-vault-references.md) if you wish to manage the secret in Azure Key Vault. + 1. If this is the first identity provider configured for the application, you will also be prompted with an **App Service authentication settings** section. Otherwise, you may move on to the next step. These options determine how your application responds to unauthenticated requests, and the default selections will redirect all requests to log in with this new provider. You can customize this behavior now or adjust these settings later from the main **Authentication** screen by choosing **Edit** next to **Authentication settings**. To learn more about these options, see [Authentication flow](overview-authentication-authorization.md#authentication-flow). To register the app, perform the following steps: You're now ready to use the Microsoft identity platform for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration. -## Additional validations (optional) +## Add customized authorization policy -The steps defined above allow you to authenticate incoming requests for your Azure AD tenant. This allows anyone within the tenant to access the application, which is fine for many applications. However, some applications need to restrict access further by making authorization decisions. Your application code is often the best place to handle custom authorization logic. However, for common scenarios, the platform provides built-in checks that you can use to limit access. +The created app registration authenticates incoming requests for your Azure AD tenant.
By default, it also lets anyone within the tenant access the application, which is fine for many applications. However, some applications need to restrict access further by making authorization decisions. Your application code is often the best place to handle custom authorization logic. However, for common scenarios, the Microsoft identity platform provides built-in checks that you can use to limit access. This section shows how to enable built-in checks using the [App Service authentication V2 API](./configure-authentication-api-version.md). Currently, the only way to configure these built-in checks is via [Azure Resource Manager templates](/azure/templates/microsoft.web/sites/config-authsettingsv2) or the [REST API](/rest/api/appservice/web-apps/update-auth-settings-v2). -Within the API object, the Azure Active Directory identity provider configuration has a `valdation` section that can include a `defaultAuthorizationPolicy` object as in the following structure: +Within the API object, the Azure Active Directory identity provider configuration has a `validation` section that can include a `defaultAuthorizationPolicy` object as in the following structure: ```json { Requests that fail these built-in checks are given an HTTP `403 Forbidden` respo ## Configure client apps to access your App Service -In the prior section, you registered your App Service or Azure Function to authenticate users. This section explains how to register native client or daemon apps so that they can request access to APIs exposed by your App Service on behalf of users or themselves. Completing the steps in this section is not required if you only wish to authenticate users. +In the prior sections, you registered your App Service or Azure Function to authenticate users. This section explains how to register native clients or daemon apps in Azure AD so that they can request access to APIs exposed by your App Service on behalf of users or themselves, such as in an N-tier architecture.
Completing the steps in this section is not required if you only wish to authenticate users. ### Native client application You can register native clients to request access to your App Service app's APIs on behalf of a signed in user. -1. In the [Azure portal], select **Active Directory** > **App registrations** > **New registration**. +1. From the portal menu, select **Azure Active Directory**. +1. From the left navigation, select **App registrations** > **New registration**. 1. In the **Register an application** page, enter a **Name** for your app registration. 1. In **Redirect URI**, select **Public client (mobile & desktop)** and type the URL `<app-url>/.auth/login/aad/callback`. For example, `https://contoso.azurewebsites.net/.auth/login/aad/callback`.+1. Select **Register**. +1. After the app registration is created, copy the value of **Application (client) ID**. > [!NOTE] > For a Microsoft Store application, use the [package SID](/previous-versions/azure/app-service-mobile/app-service-mobile-dotnet-how-to-use-client-library#package-sid) as the URI instead.-1. Select **Create**. -1. After the app registration is created, copy the value of **Application (client) ID**. -1. Select **API permissions** > **Add a permission** > **My APIs**. +1. From the left navigation, select **API permissions** > **Add a permission** > **My APIs**. 1. Select the app registration you created earlier for your App Service app. If you don't see the app registration, make sure that you've added the **user_impersonation** scope in [Create an app registration in Azure AD for your App Service app](#register). 1. Under **Delegated permissions**, select **user_impersonation**, and then select **Add permissions**. You have now configured a native client application that can request access to your
This scenario is useful for non-interactive daemon applications that perform tasks without a logged in user. It uses the standard OAuth 2.0 [client credentials](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) grant. +In an N-tier architecture, your client application can acquire a token to call an App Service or Function app on behalf of the client app itself (not on behalf of a user). This scenario is useful for non-interactive daemon applications that perform tasks without a logged in user. It uses the standard OAuth 2.0 [client credentials](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) grant. -1. In the [Azure portal], select **Active Directory** > **App registrations** > **New registration**. -1. In the **Register an application** page, enter a **Name** for your daemon app registration. +1. From the portal menu, select **Azure Active Directory**. +1. From the left navigation, select **App registrations** > **New registration**. +1. In the **Register an application** page, enter a **Name** for your app registration. 1. For a daemon application, you don't need a Redirect URI so you can keep that empty.-1. Select **Create**. +1. Select **Register**. 1. After the app registration is created, copy the value of **Application (client) ID**.-1. Select **Certificates & secrets** > **New client secret** > **Add**. Copy the client secret value shown in the page. It won't be shown again. +1. From the left navigation, select **Certificates & secrets** > **Client secrets** > **New client secret**. +1. Enter a description and expiration and select **Add**. +1. In the **Value** field, copy the client secret value. It won't be shown again once you navigate away from this page. 
-You can now [request an access token using the client ID and client secret](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) by setting the `resource` parameter to the **Application ID URI** of the target app. The resulting access token can then be presented to the target app using the standard [OAuth 2.0 Authorization header](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#use-a-token), and App Service Authentication / Authorization will validate and use the token as usual to now indicate that the caller (an application in this case, not a user) is authenticated. +You can now [request an access token using the client ID and client secret](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) by setting the `resource` parameter to the **Application ID URI** of the target app. The resulting access token can then be presented to the target app using the standard [OAuth 2.0 Authorization header](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#use-a-token), and App Service authentication will validate and use the token as usual to now indicate that the caller (an application in this case, not a user) is authenticated. At present, this allows _any_ client application in your Azure AD tenant to request an access token and authenticate to the target app. If you also want to enforce _authorization_ to allow only certain client applications, you must perform some additional configuration. At present, this allows _any_ client application in your Azure AD tenant to requ 1. Under **Application permissions**, select the App Role you created earlier, and then select **Add permissions**. 1. Make sure to click **Grant admin consent** to authorize the client application to request the permission. 1. 
Similar to the previous scenario (before any roles were added), you can now [request an access token](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) for the same target `resource`, and the access token will include a `roles` claim containing the App Roles that were authorized for the client application.-1. Within the target App Service or Function app code, you can now validate that the expected roles are present in the token (this is not performed by App Service Authentication / Authorization). For more information, see [Access user claims](configure-authentication-user-identities.md#access-user-claims-in-app-code). +1. Within the target App Service or Function app code, you can now validate that the expected roles are present in the token (this is not performed by App Service authentication). For more information, see [Access user claims](configure-authentication-user-identities.md#access-user-claims-in-app-code). You have now configured a daemon client application that can access your App Service app using its own identity. -> [!NOTE] -> The access tokens provided to your app via EasyAuth do not have scopes for other APIs, such as Graph, even if your application has permissions to access those APIs. To use these APIs, you will need to use Azure Resource Manager to configure the token returned so it can be used to authenticate to other services. For more information, see [Tutorial: Access Microsoft Graph from a secured .NET app as the user](./scenario-secure-app-access-microsoft-graph-as-user.md?tabs=azure-resource-explorer) . - ## Best practices Regardless of the configuration you use to set up authentication, the following best practices will keep your tenant and applications more secure: +- Configure each App Service app with its own app registration in Azure AD. - Give each App Service app its own permissions and consent.-- Configure each App Service app with its own registration. 
- Avoid permission sharing between environments by using separate app registrations for separate deployment slots. When testing new code, this practice can help prevent issues from affecting the production app. ## <a name="related-content"> </a>Next steps |
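The daemon (client credentials) flow in the App Service entry above reduces to a single POST against the tenant's token endpoint, with `resource` set to the target app's Application ID URI. As a rough orientation only, here is a minimal Python sketch that assembles (but does not send) that request; the helper name and all GUID/secret values are placeholders, not values from the article:

```python
import urllib.parse

def build_token_request(tenant_id, client_id, client_secret, app_id_uri):
    """Assemble an OAuth 2.0 client credentials token request (Azure AD v1
    endpoint form), with `resource` set to the target Application ID URI."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": app_id_uri,
    })
    return url, body

# All values below are hypothetical placeholders for illustration only.
url, body = build_token_request(
    tenant_id="00000000-0000-0000-0000-000000000000",
    client_id="11111111-1111-1111-1111-111111111111",
    client_secret="<client-secret>",
    app_id_uri="api://11111111-1111-1111-1111-111111111111",
)
```

The returned `body` would be POSTed as `application/x-www-form-urlencoded`, and the access token in the response presented to the target app in an `Authorization: Bearer` header.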
app-service | Configure Connect To Azure Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md | To validate that the Azure Storage is mounted successfully for the app: - If you [initiate a storage failover](../storage/common/storage-initiate-account-failover.md) and the storage account is mounted to the app, the mount will fail to connect until you either restart the app or remove and add the Azure Storage mount. -- When using Azure Storage [private endpoints](../storage/common/storage-private-endpoints.md) with the app, you need to [enable the **Route All** setting](configure-vnet-integration-routing.md).- - When VNET integration is used, ensure app setting, `WEBSITE_CONTENTOVERVNET` is set to `1` and the following ports are open: - Azure Files: 80 and 445 To validate that the Azure Storage is mounted successfully for the app: - It's not recommended to use storage mounts for local databases (such as SQLite) or for any other applications and components that rely on file handles and locks. - If you [initiate a storage failover](../storage/common/storage-initiate-account-failover.md) and the storage account is mounted to the app, the mount will fail to connect until you either restart the app or remove and add the Azure Storage mount. - -- When using Azure Storage [private endpoints](../storage/common/storage-private-endpoints.md) with the app, you need to [enable the **Route All** setting](configure-vnet-integration-routing.md). - > [!NOTE] - > In App Service environment V3, the **Route All** setting is disabled by default and must be explicitly enabled. ::: zone-end ::: zone pivot="container-linux" To validate that the Azure Storage is mounted successfully for the app: - It's not recommended to use storage mounts for local databases (such as SQLite) or for any other applications and components that rely on file handles and locks. 
-- When using Azure Storage [private endpoints](../storage/common/storage-private-endpoints.md) with the app, you need to [enable the **Route All** setting](configure-vnet-integration-routing.md).- - If you [initiate a storage failover](../storage/common/storage-initiate-account-failover.md) and the storage account is mounted to the app, the mount will fail to connect until you either restart the app or remove and add the Azure Storage mount. ::: zone-end |
app-service | Troubleshoot Intermittent Outbound Connection Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-intermittent-outbound-connection-errors.md | Although PHP does not support connection pooling, you can try using persistent d * [PHP Connection Management](https://www.php.net/manual/en/pdo.connections.php) +#### Python ++The following popular database drivers and modules support connection pooling; each links to examples of how to implement it. ++* [MySQL](https://dev.mysql.com/doc/connector-python/en/connector-python-connection-pooling.html) +* [MariaDB](https://mariadb.com/docs/ent/connect/programming-languages/python/connection-pools/) +* [PostgreSQL](https://www.psycopg.org/docs/pool.html) +* [Pyodbc](https://github.com/mkleehammer/pyodbc/wiki/The-pyodbc-Module#pooling) +* [SQLAlchemy](https://docs.sqlalchemy.org/en/20/core/pooling.html) ++HTTP Connection Pooling ++ * Keep-alive and HTTP connection pooling are enabled by default in the [Requests](https://requests.readthedocs.io/en/latest/user/advanced/#keep-alive) module. + * [Urllib3](https://urllib3.readthedocs.io/en/stable/reference/urllib3.connectionpool.html) + ### Modify the application to reuse connections * For additional pointers and examples on managing connections in Azure functions, review [Manage connections in Azure Functions](../azure-functions/manage-connections.md). |
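The Python modules listed in the entry above each ship their own pool implementations; as orientation, here is a minimal stdlib-only sketch of the pattern they all share. The `ConnectionPool` class is illustrative (not from any of those libraries), and in-memory SQLite merely stands in for a real network client such as psycopg2 or mysql-connector-python:

```python
import queue
import sqlite3

class ConnectionPool:
    """Minimal illustration of connection pooling: connections are created
    once up front, handed out on acquire(), and returned for reuse."""

    def __init__(self, factory, size=5):
        self._idle = queue.Queue(maxsize=size)
        for _ in range(size):
            self._idle.put(factory())

    def acquire(self, timeout=30):
        # Block until a pooled connection is free instead of opening a new
        # one, which is what keeps outbound connection counts bounded.
        return self._idle.get(timeout=timeout)

    def release(self, conn):
        self._idle.put(conn)

# Demo: acquire a pooled connection, run a query, and return it for reuse.
pool = ConnectionPool(lambda: sqlite3.connect(":memory:"), size=2)
conn = pool.acquire()
value = conn.execute("SELECT 1").fetchone()[0]
pool.release(conn)
```

Reusing pooled connections this way avoids the per-request connect/teardown churn that exhausts SNAT ports and causes the intermittent outbound errors the article describes.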
applied-ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md | Form Recognizer service is updated on an ongoing basis. Bookmark this page to st ## January 2023 +> [!TIP] +> All January 2023 updates are available with [REST API version **2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument). + * **[Prebuilt receipt model](concept-receipt.md#supported-languages-and-locales-v30): additional language support**: The **prebuilt receipt model** now has added support for the following languages: Form Recognizer service is updated on an ongoing basis. Bookmark this page to st The **prebuilt ID document model** now has added support for the following document types: - * Passport, driver's license, and residence permit ID expansion. - * US military ID - * India ID - * Australia ID - * Canada ID - * United Kingdom ID + * Passport, driver's license, and residence permit ID expansion + * US military ID cards and documents + * India ID cards and documents + * Australia ID cards and documents + * Canada ID cards and documents + * United Kingdom ID cards and documents ## December 2022 |
azure-arc | Version Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md | This article identifies the component versions with each release of Azure Arc-en |Container images tag |`v1.15.0_2023-01-10`| |CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v7<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1 through v2<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`sqlmanagedinstancereprovisionreplicatask.tasks.sql.arcdata.microsoft.com`: v1beta1<br/>`telemetrycollectors.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3 *used to be otelcollectors*<br/>`telemetryrouters.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3, v1beta4<br/>`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`: v1beta1, v1beta2<br/>| |Azure Resource Manager (ARM) API version|2022-06-15-preview|-|`arcdata` Azure CLI extension version|1.4.9 ([Download](https://aka.ms/az-cli-arcdata-ext))| -|Arc-enabled Kubernetes helm chart extension version|1.14.0| +|`arcdata` Azure CLI extension version|1.4.10 ([Download](https://aka.ms/az-cli-arcdata-ext))| +|Arc-enabled Kubernetes helm chart extension version|1.15.0| |Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|*No Changes*<br/>1.7.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.7.0 ([Download](https://aka.ms/ads-azcli-ext))| ## December 13, 2022 This release introduces general availability for Azure Arc-enabled SQL Managed I |`arcdata` Azure CLI extension version | 1.0 | |Arc enabled Kubernetes helm chart 
extension version | 1.0.16701001, release train: stable | |Arc Data extension for Azure Data Studio | 0.9.5 |+ |
azure-arc | What Is Azure Arc Enabled Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/what-is-azure-arc-enabled-postgresql.md | Last updated 11/03/2021 - # What is Azure Arc-enabled PostgreSQL server [!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)] -## What is Azure Arc vs Azure Arc-enabled data services vs Azure Arc-enabled PostgreSQL server? --**Azure Arc** is one of the pillars of the Azure Hybrid family: Azure Arc, Azure Stack, and Azure IoT. Azure Arc helps customers manage the complexity of their hybrid deployments by simplifying the customer experience. -With Azure Stack, Microsoft or its partners provide the hardware and the software (an appliance). With Azure Arc, Microsoft provides the software only. The customer or its partners provide the supporting infrastructure and operate the solution. Azure Arc is supported on Azure Stack. -Azure Arc makes it possible for you to run Azure services on infrastructures that reside outside of Azure data centers and allows you to integrate with other Azure managed services if you wish. --**Azure Arc-enabled data services** is a part of Azure Arc. It is a suite of products and services that allows customers to manage their data. It allows customers to: --- Run Azure data services on any physical infrastructure-- Optimize your operations by using the same cloud technology everywhere-- Optimize your application developments by using the same underlying technology no matter where your application or database is hosted (in Azure PaaS or in Azure Arc)-- Use cloud technologies in your own data center and yet meet regulatory requirements (data residency & customer control). 
In other words, "If you cannot come to the cloud, the cloud is coming to you."--Some of the values that Azure Arc-enabled data services provide to you include: -- Always current-- Elastic scale-- Self-service provisioning-- Unified management-- Cloud billing-- Support for connected (to Azure) and occasionally connected (to Azure) scenarios. (direct vs. indirect connectivity modes) **Azure Arc-enabled PostgreSQL server** is one of the database engines available as part of Azure Arc-enabled data services. - ## Compare Postgres solutions provided by Microsoft in Azure Microsoft offers Postgres database services in Azure in two ways:-- As a managed service in Azure PaaS (Platform As A Service)+- As a managed service in **[Azure PaaS](https://portal.azure.com/#create/Microsoft.PostgreSQLServer)** (Platform As A Service) - As a semi-managed service with Azure Arc as it is operated by customers or their partners/vendors -### In Azure PaaS -**In [Azure PaaS](https://portal.azure.com/#create/Microsoft.PostgreSQLServer)**, Microsoft offers several deployment options for PostgreSQL as a managed service: -- :::column::: - Azure Database for PostgreSQL Single server and Azure Database for PostgreSQL Flexible server. These services are Microsoft managed single-node/single instance Postgres form factor. Azure Database for PostgreSQL Flexible server is the most recent evolution of this service. - :::column-end::: - :::column::: - :::image type="content" source="media/postgres-hyperscale/azure-database-for-postgresql-bigger.png" alt-text="Azure Database for PostgreSQL"::: - :::column-end::: - :::column::: - Azure Database for PostgreSQL server. This service is the Microsoft managed multi-nodes/multi-instances Postgres form factor. It is powered by the Citus extension to Postgres that transforms the single node Postgres into a distributed database system. 
As you scale it out, it distributes the data and the queries that potentially allows your workload to reach unprecedented levels of scale and performance. The application sees a single Postgres instance also known as a server group. However, under the hood, this server group is constituted of several Postgres instances that work together. When you scale it out, you increase the number of Postgres instances within the server group that potentially improves the performance and scalability of your workload. You decide, depending on your needs and the characteristics of the workload, how many Postgres instances you add to the server group. - :::column-end::: - :::column::: - :::image type="content" source="media/postgres-hyperscale/postgresql-hyperscale.png" alt-text="Azure Database for PostgreSQL server"::: - :::column-end::: --+### Features -### With Azure Arc -- :::column::: - **With Azure Arc**, Microsoft offers **a single** Postgres product/service: **Azure Arc-enabled PostgreSQL server**. With Azure Arc, we simplified the product definition and the customer experience for PostgreSQL compared to Azure PaaS by providing **one Postgres product** that is capable of: - - deploying single-node/single-instance Postgres like Azure Database for PostgreSQL Single/Flexible server, - - deploying multi-nodes/multi-instances Postgres like Azure Database for PostgreSQL server, - - great flexibility by allowing customers to morph their Postgres deployments from one-node to multi-nodes of Postgres and vice versa if they desire so. They are able to do so with no data migration and with a simple experience. 
- :::column-end::: - :::column::: - :::image type="content" source="media/postgres-hyperscale/postgresql-hyperscale-arc.png" alt-text="Azure Arc-enabled PostgreSQL server"::: - :::column-end::: --Like its sibling in Azure PaaS, in its multi-nodes/instances form, Postgres is powered by the Citus extension that transforms the single node Postgres into a distributed database system. As you scale it out, it distributes the data and the queries which potentially allow your workload to reach unprecedented levels of scale and performances. The application sees a single Postgres instance also known as a server group. However, under the hood, this server group is constituted of several Postgres instances that work together. When you scale it out you increase the number of Postgres instances within the server group which potentially improves the performance and scalability of your workload. You decide, depending on your needs and the characteristics of the workload, how many Postgres instances you add to the server group. If you desire so, you may reduce the number of Postgres instances in the server group down to 1. ---With the Direct connectivity mode offered by Azure Arc-enabled data services you may deploy Azure Arc-enabled PostgreSQL server from the Azure portal. If you use the indirect connect mode, you will deploy Azure Arc-enabled PostgreSQL server from the infrastructure that hosts it. 
--**With Azure Arc-enabled PostgreSQL server, you can:** - Manage Postgres simply- - Provision/de-provision Postgres instances with one command - - At any scale: scale up/down -- Simplify monitoring, failover, backup, patching/upgrade, access control & more-- Build Postgres apps at unprecedented scale & performance- - Scale out compute horizontally across multiple Postgres instances - - Distribute data and queries - - Run the Citus extension - - Transform standard PostgreSQL into a distributed database system -- Deploy Postgres on any infrastructure- - On-premises, multi-cloud (AWS, GCP, Azure), edge +- Simplify monitoring, backup, patching/upgrade, access control & more +- Deploy Postgres on any [Kubernetes](https://kubernetes.io/) infrastructure + - On-premises + - Cloud providers like AWS, GCP, and Azure + - Edge deployments (including lightweight Kubernetes [K3S](https://k3s.io/)) - Integrate with Azure (optional)+ - Direct connectivity mode - Deploy Azure Arc-enabled PostgreSQL server from the Azure portal + - Indirect connectivity mode - Deploy Azure Arc-enabled PostgreSQL server from the infrastructure that hosts it - Pay for what you use (per usage billing)-- Get support from Microsoft on Postgres--**Additional considerations:** -- Azure Arc-enabled PostgreSQL server is not a new database engine or is not a specific version of an existing database engine. It is the same database engine that runs in Azure PaaS. Remember, with Azure Arc, if you cannot come to the Microsoft cloud; the Microsoft cloud is coming to you. The innovation with Azure Arc resides in how Microsoft offers this database engine and in the experiences Microsoft provides around this database engine. +- Get support from Microsoft on PostgreSQL -- Azure Arc-enabled PostgreSQL server is not a data replication solution either. Your business data stays in your Arc deployment. It is not replicated to the Azure cloud. 
Unless you chose to set up a feature of the database engine, like data replication/read replicas. In that case, your data may be replicated outside of your Postgres deployment: not because of Azure Arc but because you chose to set up a data replication feature.+## Architecture -- You do not need to use specific a driver or provider for your workload to run against Azure Arc-enabled PostgreSQL server. Any "Postgres application" should be able to run against Azure Arc-enabled PostgreSQL server.+Azure Arc-enabled PostgreSQL server is the community version of the [PostgreSQL 14](https://www.postgresql.org/) server with a curated set of available extensions. Most PostgreSQL application workloads should be capable of running against Azure Arc-enabled PostgreSQL server using standard drivers. -- The scale-out and scale-in operations are not automatic. They are controlled by the users. Users may script these operations and automate the execution of those scripts. Not all workloads can benefit from scaling out. Read further details on this topic as suggested in the "Next steps" section.--## Roles and responsibilities: Azure managed services (Platform as a service (PaaS)) _vs._ Azure Arc-enabled data services ## Next steps-- **Try it out.** Get started quickly with [Azure Arc Jumpstart](https://github.com/microsoft/azure_arc#azure-arc-enabled-data-services) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM. --- **Deploy it, create your own.** Follow these steps to create on your own Kubernetes cluster: - 1. [Install the client tools](install-client-tools.md) - 2. [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) - 3. 
[Create an Azure Database for PostgreSQL server on Azure Arc](create-postgresql-server.md) -- **Learn**- - [Azure Arc](https://aka.ms/azurearc) - - Azure Arc-enabled data services [here](https://azure.microsoft.com/services/azure-arc/hybrid-data-services) and [here](overview.md) - - [Connectivity modes and requirements](connectivity.md) +### Try it out +Get started quickly with [Azure Arc Jumpstart](https://github.com/microsoft/azure_arc#azure-arc-enabled-data-services) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE), or in an Azure VM. +### Deploy -- **Read the concepts and How-to guides of Azure Database for PostgreSQL server to distribute your data across multiple PostgreSQL server nodes and to potentially benefit from better performances**:- * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md) - * [Determine application type](../../postgresql/hyperscale/howto-app-type.md) - * [Choose a distribution column](../../postgresql/hyperscale/howto-choose-distribution-column.md) - * [Table colocation](../../postgresql/hyperscale/concepts-colocation.md) - * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md) - * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)* - * [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)* +Follow these steps to create one on your own Kubernetes cluster: +- [Install the client tools](install-client-tools.md) +- [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) +- [Create an Azure Arc-enabled PostgreSQL server on Azure Arc](create-postgresql-server.md) +### Learn +- [Azure Arc](https://aka.ms/azurearc) +- [Azure Arc-enabled Data Services overview](overview.md) +- [Azure Arc Hybrid Data Services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services) +- [Connectivity 
modes](connectivity.md) |
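The scale-out model described in the diff above (the Citus extension distributing a table's rows across the Postgres instances of a server group) can be illustrated with a toy sketch. The hash routing below is a simplified stand-in for illustration only, not Citus's actual sharding algorithm:

```python
import hashlib


def node_for(value, num_nodes):
    """Pick a worker node for a row by hashing its distribution column."""
    digest = hashlib.sha256(str(value).encode()).hexdigest()
    return int(digest, 16) % num_nodes


# A toy table distributed by its "tenant_id" column.
rows = [{"tenant_id": t, "amount": t * 10} for t in range(8)]

# Distribute rows across a 3-node "server group".
shards = {n: [] for n in range(3)}
for row in rows:
    shards[node_for(row["tenant_id"], 3)].append(row)

# Every row lands on exactly one node, and the same distribution-column
# value always routes to the same node (so joins on that column stay local).
assert sum(len(s) for s in shards.values()) == len(rows)
assert node_for(4, 3) == node_for(4, 3)
```

Scaling out adds nodes and rebalances shards among them; the application still sees a single Postgres endpoint while the server group fans queries out underneath.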
azure-arc | Conceptual Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-extensions.md | Title: "Cluster extensions - Azure Arc-enabled Kubernetes" Previously updated : 07/12/2022 Last updated : 01/23/2023 description: "This article provides a conceptual overview of the Azure Arc-enabled Kubernetes cluster extensions capability." description: "This article provides a conceptual overview of the Azure Arc-enabl [Helm charts](https://helm.sh/) help you manage Kubernetes applications by providing the building blocks needed to define, install, and upgrade even the most complex Kubernetes applications. The cluster extension feature builds on top of the packaging components of Helm by providing an Azure Resource Manager-driven experience for installation and lifecycle management of different Azure capabilities on top of your Kubernetes cluster. -A cluster operator or admin can use the cluster extensions feature to: +A cluster operator or admin can [use the cluster extensions feature](extensions.md) to: -- Install and manage key management, data, and application offerings on your Kubernetes cluster. List of available extensions can be found [here](extensions.md#currently-available-extensions)+- Install and manage key management, data, and application offerings on your Kubernetes cluster. - Use Azure Policy to automate at-scale deployment of cluster extensions across all clusters in your environment. - Subscribe to release trains (for example, preview or stable) for each extension. - Set up auto-upgrade for extensions or pin to a specific version and manually upgrade versions. - Update extension properties or delete extension instances. -An extension can be [cluster-scoped or scoped to a namespace](extensions.md#extension-scope). Each extension type (such as Azure Monitor for containers, Microsoft Defender for Cloud, Azure App services) defines the scope at which they operate on the cluster. 
+For a list of all currently supported extensions, see [Available extensions for Azure Arc-enabled Kubernetes clusters](extensions-release.md). ## Architecture -[  ](./media/conceptual-extensions.png#lightbox) +[](./media/conceptual-extensions.png#lightbox) The cluster extension instance is created as an extension Azure Resource Manager resource (`Microsoft.KubernetesConfiguration/extensions`) on top of the Azure Arc-enabled Kubernetes resource (represented by `Microsoft.Kubernetes/connectedClusters`) in Azure Resource Manager. This representation in Azure Resource Manager allows you to author a policy that checks for all the Azure Arc-enabled Kubernetes resources with or without a specific cluster extension. Once you've determined which clusters are missing the cluster extensions with desired property values, you can remediate these non-compliant resources using Azure Policy. -The `config-agent` running in your cluster tracks new and updated extension resources on the Azure Arc-enabled Kubernetes resource. The `extensions-manager` agent running in your cluster reads the extension type that needs to be installed and pulls the associated Helm chart from Azure Container Registry or Microsoft Container Registry and installs it on the cluster. +The `config-agent` running in your cluster tracks new and updated extension resources on the Azure Arc-enabled Kubernetes resource. The `extensions-manager` agent running in your cluster reads the extension type that needs to be installed and pulls the associated Helm chart from Azure Container Registry or Microsoft Container Registry and installs it on the cluster. -Both the `config-agent` and `extensions-manager` components running in the cluster handle extension instance updates, version updates and extension instance deletion. These agents use the system-assigned managed identity of the cluster to securely communicate with Azure services. 
+Both the `config-agent` and `extensions-manager` components running in the cluster handle extension instance updates, version updates and extension instance deletion. These agents use the system-assigned managed identity of the cluster to securely communicate with Azure services. > [!NOTE] > `config-agent` checks for new or updated extension instances on top of Azure Arc-enabled Kubernetes cluster. The agents require connectivity for the desired state of the extension to be pulled down to the cluster. If agents are unable to connect to Azure, propagation of the desired state to the cluster is delayed. > > Protected configuration settings for an extension instance are stored for up to 48 hours in the Azure Arc-enabled Kubernetes services. As a result, if the cluster remains disconnected during the 48 hours after the extension resource was created on Azure, the extension changes from a `Pending` state to `Failed` state. To prevent this, we recommend bringing clusters online regularly. +## Extension scope ++Each extension type defines the scope at which it operates on the cluster. Extension installations on Arc-enabled Kubernetes clusters are either *cluster-scoped* or *namespace-scoped*. ++A cluster-scoped extension is installed in the `release-namespace` specified during extension creation. Typically, only one instance of the cluster-scoped extension and its components, such as pods, operators, and Custom Resource Definitions (CRDs), are installed in the release namespace on the cluster. ++A namespace-scoped extension can be installed in a given namespace provided using the `--namespace` property. Since the extension can be deployed at a namespace scope, multiple instances of the namespace-scoped extension and its components can run on the cluster. Each extension instance has permissions on the namespace where it is deployed. 
 ++All of the [currently available extensions](extensions-release.md) are cluster-scoped, except for [Azure API Management on Azure Arc](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md). + ## Next steps - Use our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). |
azure-arc | Custom Locations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/custom-locations.md | To resolve this issue, modify your network policy to allow pod-to-pod internal c - Securely connect to the cluster using [Cluster Connect](cluster-connect.md). - Continue with [Azure App Service on Azure Arc](../../app-service/overview-arc-integration.md) for end-to-end instructions on installing extensions, creating custom locations, and creating the App Service Kubernetes environment. - Create an Event Grid topic and an event subscription for [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md).-- Learn more about currently available [Azure Arc-enabled Kubernetes extensions](extensions.md#currently-available-extensions).+- Learn more about currently available [Azure Arc-enabled Kubernetes extensions](extensions-release.md). |
azure-arc | Extensions Release | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md | + + Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Last updated : 01/23/2023++description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes." +++# Available extensions for Azure Arc-enabled Kubernetes clusters ++[Cluster extensions for Azure Arc-enabled Kubernetes](conceptual-extensions.md) provide an Azure Resource Manager-driven experience for installation and lifecycle management of different Azure capabilities on top of your cluster. These extensions can be [deployed to your clusters](extensions.md) to enable different scenarios and improve cluster management. ++The following extensions are currently available for use with Arc-enabled Kubernetes clusters. All of these extensions are [cluster-scoped](conceptual-extensions.md#extension-scope), except for Azure API Management on Azure Arc, which is namespace-scoped. ++> [!NOTE] +> Installing Azure Arc extensions on [Azure Kubernetes Service (AKS) hybrid clusters provisioned from Azure](extensions.md#aks-hybrid-clusters-provisioned-from-azure-preview) is currently in preview, with support for the Azure Arc-enabled Open Service Mesh, Azure Key Vault Secrets Provider, Flux (GitOps) and Microsoft Defender for Cloud extensions. ++## Azure Monitor Container Insights ++Azure Monitor Container Insights provides visibility into the performance of workloads deployed on the Kubernetes cluster. Use this extension to collect memory and CPU utilization metrics from controllers, nodes, and containers. ++For more information, see [Azure Monitor Container Insights for Azure Arc-enabled Kubernetes clusters](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json&bc=/azure/azure-arc/kubernetes/breadcrumb/toc.json). 
++## Azure Policy ++Azure Policy extends [Gatekeeper](https://github.com/open-policy-agent/gatekeeper), an admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/) (OPA), to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. ++For more information, see [Understand Azure Policy for Kubernetes clusters](../../governance/policy/concepts/policy-for-kubernetes.md?toc=/azure/azure-arc/kubernetes/toc.json&bc=/azure/azure-arc/kubernetes/breadcrumb/toc.json). ++## Azure Key Vault Secrets Provider ++The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of Azure Key Vault as a secrets store with a Kubernetes cluster via a CSI volume. For Azure Arc-enabled Kubernetes clusters, you can install the Azure Key Vault Secrets Provider extension to fetch secrets. ++For more information, see [Use the Azure Key Vault Secrets Provider extension to fetch secrets into Azure Arc-enabled Kubernetes clusters](tutorial-akv-secrets-provider.md). ++## Microsoft Defender for Containers ++Microsoft Defender for Containers is the cloud-native solution that is used to secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications. It gathers information related to security like audit log data from the Kubernetes cluster, and provides recommendations and threat alerts based on gathered data. ++For more information, see [Enable Microsoft Defender for Containers](../../defender-for-cloud/defender-for-kubernetes-azure-arc.md?toc=/azure/azure-arc/kubernetes/toc.json&bc=/azure/azure-arc/kubernetes/breadcrumb/toc.json). ++> [!IMPORTANT] +> Defender for Containers support for Arc-enabled Kubernetes clusters is currently in public preview. 
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++## Azure Arc-enabled Open Service Mesh ++[Open Service Mesh (OSM)](https://docs.openservicemesh.io/) is a lightweight, extensible, Cloud Native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments. ++For more information, see [Azure Arc-enabled Open Service Mesh](tutorial-arc-enabled-open-service-mesh.md). ++## Azure Arc-enabled Data Services ++Makes it possible for you to run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. This extension enables the *custom locations* feature, providing a way to configure Azure Arc-enabled Kubernetes clusters as target locations for deploying instances of Azure offerings. ++For more information, see [Azure Arc-enabled Data Services](../dat#create-custom-location). ++## Azure App Service on Azure Arc ++Allows you to provision an App Service Kubernetes environment on top of Azure Arc-enabled Kubernetes clusters. ++For more information, see [App Service, Functions, and Logic Apps on Azure Arc (Preview)](../../app-service/overview-arc-integration.md). ++> [!IMPORTANT] +> App Service on Azure Arc is currently in public preview. Review the [public preview limitations for App Service Kubernetes environments](../../app-service/overview-arc-integration.md#public-preview-limitations) before deploying this extension. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. 
++## Azure Event Grid on Kubernetes ++Event Grid is an event broker used to integrate workloads that use event-driven architectures. This extension lets you create and manage Event Grid resources such as topics and event subscriptions on top of Azure Arc-enabled Kubernetes clusters. ++For more information, see [Event Grid on Kubernetes with Azure Arc (Preview)](../../event-grid/kubernetes/overview.md). ++> [!IMPORTANT] +> Event Grid on Kubernetes with Azure Arc is currently in public preview. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++## Azure API Management on Azure Arc ++With the integration between Azure API Management and Azure Arc on Kubernetes, you can deploy the API Management gateway component as an extension in an Azure Arc-enabled Kubernetes cluster. This extension is [namespace-scoped](conceptual-extensions.md#extension-scope), not cluster-scoped. ++For more information, see [Deploy an Azure API Management gateway on Azure Arc (preview)](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md). ++> [!IMPORTANT] +> API Management self-hosted gateway on Azure Arc is currently in public preview. During preview, the API Management gateway extension is available in the following regions: West Europe, East US. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++## Azure Arc-enabled Machine Learning ++The AzureML extension lets you deploy and run Azure Machine Learning on Azure Arc-enabled Kubernetes clusters. 
++For more information, see [Introduction to Kubernetes compute target in AzureML](../../machine-learning/how-to-attach-kubernetes-anywhere.md) and [Deploy AzureML extension on AKS or Arc Kubernetes cluster](../../machine-learning/how-to-deploy-kubernetes-extension.md). ++## Flux (GitOps) ++[GitOps on Azure Arc-enabled Kubernetes](conceptual-gitops-flux2.md) uses [Flux v2](https://fluxcd.io/docs/), a popular open-source tool set, to help manage cluster configuration and application deployment. GitOps is enabled in the cluster as a `Microsoft.KubernetesConfiguration/extensions/microsoft.flux` cluster extension resource. ++For more information, see [Tutorial: Deploy applications using GitOps with Flux v2](tutorial-use-gitops-flux2.md). ++The currently supported versions of the `microsoft.flux` extension are described below. The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension. 
++### 1.6.3 (December 2022) ++Flux version: [Release v0.37.0](https://github.com/fluxcd/flux2/releases/tag/v0.37.0) ++- source-controller: v0.32.1 +- kustomize-controller: v0.31.0 +- helm-controller: v0.27.0 +- notification-controller: v0.29.0 +- image-automation-controller: v0.27.0 +- image-reflector-controller: v0.23.0 ++Changes made for this version: ++- Upgrades Flux to [v0.37.0](https://github.com/fluxcd/flux2/releases/tag/v0.37.0) +- Adds exception for [aad-pod-identity in flux extension](troubleshooting.md#flux-v2installing-the-microsoftflux-extension-in-a-cluster-with-azure-ad-pod-identity-enabled) +- Enables reconciler for flux extension ++### 1.6.1 (October 2022) ++Flux version: [Release v0.35.0](https://github.com/fluxcd/flux2/releases/tag/v0.35.0) ++- source-controller: v0.30.1 +- kustomize-controller: v0.29.0 +- helm-controller: v0.25.0 +- notification-controller: v0.27.0 +- image-automation-controller: v0.26.0 +- image-reflector-controller: v0.22.0 ++Changes made for this version: ++- Upgrades Flux to [v0.35.0](https://github.com/fluxcd/flux2/releases/tag/v0.35.0) +- Implements fix for a security issue where some Flux controllers could be vulnerable to a denial of service attack. Users that have permissions to change Flux's objects, either through a Flux source or directly within a cluster, could provide invalid data to fields `spec.Interval` or `spec.Timeout` (and structured variations of these fields), causing the entire object type to stop being processed. This issue had two root causes: [Kubernetes type `metav1.Duration` not being fully compatible with the Go type `time.Duration`](https://github.com/kubernetes/apimachinery/issues/131), or a lack of validation within Flux to restrict allowed values. 
+- Adds support for [installing the `microsoft.flux` extension in a cluster with kubelet identity enabled](troubleshooting.md#flux-v2installing-the-microsoftflux-extension-in-a-cluster-with-kubelet-identity-enabled) +- Fixes bug where [deleting the extension may fail on AKS with Windows node pool](https://github.com/Azure/AKS/issues/3191) +- Adds support for sasToken for Azure blob storage at account level as well as container level ++### 1.6.0 (September 2022) ++Flux version: [Release v0.33.0](https://github.com/fluxcd/flux2/releases/tag/v0.33.0) ++- source-controller: v0.28.0 +- kustomize-controller: v0.27.1 +- helm-controller: v0.23.1 +- notification-controller: v0.25.2 +- image-automation-controller: v0.24.2 +- image-reflector-controller: v0.20.1 ++Changes made for this version: ++- Upgrades Flux to [v0.33.0](https://github.com/fluxcd/flux2/releases/tag/v0.33.0) +- Fixes Helm-related [security issue](https://github.com/fluxcd/flux2/security/advisories/GHSA-p2g7-xwvr-rrw3) ++## Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes ++[Dapr](https://dapr.io/) is a portable, event-driven runtime that simplifies building resilient, stateless, and stateful applications that run on the cloud and edge and embrace the diversity of languages and developer frameworks. The Dapr extension eliminates the overhead of downloading Dapr tooling and manually installing and managing the runtime on your clusters. ++For more information, see [Dapr extension for AKS and Arc-enabled Kubernetes](../../aks/dapr.md). ++## Next steps ++- Read more about [cluster extensions for Azure Arc-enabled Kubernetes](conceptual-extensions.md). +- Learn how to [deploy extensions to an Arc-enabled Kubernetes cluster](extensions.md). |
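The support policy stated above for the `microsoft.flux` extension (the latest release plus the two prior releases, i.e. N-2) can be expressed as a small helper. This sketch and its `supported_versions` function are illustrative only, not tooling that ships with the extension:

```python
def supported_versions(released, window=3):
    """Return the newest `window` releases under an N-2 style policy.

    Versions are plain "major.minor.patch" strings, compared numerically.
    """
    ordered = sorted(released, key=lambda v: tuple(map(int, v.split("."))))
    return ordered[-window:]


# Release history drawn from the notes above, plus hypothetical older versions.
releases = ["1.6.0", "1.6.1", "1.6.3", "1.5.9", "1.4.2"]
print(supported_versions(releases))  # ['1.6.0', '1.6.1', '1.6.3']
```

With the releases listed in the notes above, the three supported versions come out to 1.6.0, 1.6.1, and 1.6.3, matching the N-2 window the document describes.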
azure-arc | Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions.md | Title: "Azure Arc-enabled Kubernetes cluster extensions" Previously updated : 10/12/2022 Last updated : 01/23/2023 description: "Deploy and manage lifecycle of extensions on Azure Arc-enabled Kubernetes clusters." The Kubernetes extensions feature enables the following on Azure Arc-enabled Kub In this article, you learn: > [!div class="checklist"] -> * Which Azure Arc-enabled Kubernetes cluster extensions are currently available. > * How to create extension instances. > * Required and optional parameters. > * How to view, list, update, and delete extension instances. -A conceptual overview of this feature is available in [Cluster extensions - Azure Arc-enabled Kubernetes](conceptual-extensions.md). +Before you begin, read the [conceptual overview of Arc-enabled Kubernetes cluster extensions](conceptual-extensions.md) and review the [list of currently available extensions](extensions-release.md). ## Prerequisites A conceptual overview of this feature is available in [Cluster extensions - Azur * If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md). * [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version. -## Currently available extensions --The following extensions are currently available. --| Extension | Description | -| | -- | -| [Azure Monitor for containers](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json&bc=/azure/azure-arc/kubernetes/breadcrumb/toc.json) | Provides visibility into the performance of workloads deployed on the Kubernetes cluster. Collects memory and CPU utilization metrics from controllers, nodes, and containers. 
| -| [Azure Policy](../../governance/policy/concepts/policy-for-kubernetes.md?toc=/azure/azure-arc/kubernetes/toc.json&bc=/azure/azure-arc/kubernetes/breadcrumb/toc.json) | Azure Policy extends [Gatekeeper](https://github.com/open-policy-agent/gatekeeper), an admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/) (OPA), to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. | -| [Azure Key Vault Secrets Provider](tutorial-akv-secrets-provider.md) | The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of Azure Key Vault as a secrets store with a Kubernetes cluster via a CSI volume. | -| [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-kubernetes-azure-arc.md?toc=/azure/azure-arc/kubernetes/toc.json&bc=/azure/azure-arc/kubernetes/breadcrumb/toc.json) | Gathers information related to security like audit log data from the Kubernetes cluster. Provides recommendations and threat alerts based on gathered data. | -| [Azure Arc-enabled Open Service Mesh](tutorial-arc-enabled-open-service-mesh.md) | Deploys Open Service Mesh on the cluster and enables capabilities like mTLS security, fine grained access control, traffic shifting, monitoring with Azure Monitor or with open source add-ons of Prometheus and Grafana, tracing with Jaeger, integration with external certification management solution. | -| [Azure Arc-enabled Data Services](../../azure-arc/kubernetes/custom-locations.md#create-custom-location) | Makes it possible for you to run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. | -| [Azure App Service on Azure Arc](../../app-service/overview-arc-integration.md) | Allows you to provision an App Service Kubernetes environment on top of Azure Arc-enabled Kubernetes clusters. 
| -| [Azure Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md) | Create and manage event grid resources such as topics and event subscriptions on top of Azure Arc-enabled Kubernetes clusters. | -| [Azure API Management on Azure Arc](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md) | Deploy and manage API Management gateway on Azure Arc-enabled Kubernetes clusters. | -| [Azure Arc-enabled Machine Learning](../../machine-learning/how-to-attach-kubernetes-anywhere.md) | Deploy and run Azure Machine Learning on Azure Arc-enabled Kubernetes clusters. | -| [Flux (GitOps)](./conceptual-gitops-flux2.md) | Use GitOps with Flux to manage cluster configuration and application deployment. | -| [Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes](../../aks/dapr.md)| Eliminates the overhead of downloading Dapr tooling and manually installing and managing the runtime on your clusters. | - > [!NOTE] > Installing Azure Arc extensions on [AKS hybrid clusters provisioned from Azure](#aks-hybrid-clusters-provisioned-from-azure-preview) is currently in preview, with support for the Azure Arc-enabled Open Service Mesh, Azure Key Vault Secrets Provider, Flux (GitOps) and Microsoft Defender for Cloud extensions. -### Extension scope --Extension installations on the Arc-enabled Kubernetes cluster are either *cluster-scoped* or *namespace-scoped*. --A cluster-scoped extension will be installed in the `release-namespace` specified during extension creation. Typically, only one instance of the cluster-scoped extension and its components, such as pods, operators, and Custom Resource Definitions (CRDs), are installed in the release namespace on the cluster. --A namespace-scoped extension can be installed in a given namespace provided using the `--namespace` property. Since the extension can be deployed at a namespace scope, multiple instances of the namespace-scoped extension and its components can run on the cluster.
Each extension instance has permissions in the namespace where it is deployed. All the above extensions are cluster-scoped except Event Grid on Kubernetes. --All of the extensions listed above are cluster-scoped, except for [Azure API Management on Azure Arc](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md) . - ## Usage of cluster extensions ### Create extensions instance az k8s-extension create --name azuremonitor-containers --extension-type Microso | Parameter name | Description | |-|| | `--name` | Name of the extension instance |-| `--extension-type` | The type of extension you want to install on the cluster. For example: Microsoft.AzureMonitor.Containers, microsoft.azuredefender.kubernetes | +| `--extension-type` | The type of extension you want to install on the cluster. For example: Microsoft.AzureMonitor.Containers, microsoft.azuredefender.kubernetes | | `--scope` | Scope of installation for the extension - `cluster` or `namespace` | | `--cluster-name` | Name of the Azure Arc-enabled Kubernetes resource on which the extension instance has to be created | | `--resource-group` | The resource group containing the Azure Arc-enabled Kubernetes resource | Delete an extension instance on a cluster with `k8s-extension delete`, passing i
> [!NOTE] az extension update --name k8s-extension ## Next steps -Learn more about the cluster extensions currently available for Azure Arc-enabled Kubernetes: --* [Azure Monitor](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json) -* [Microsoft Defender for Cloud](../../security-center/defender-for-kubernetes-azure-arc.md?toc=/azure/azure-arc/kubernetes/toc.json) -* [Azure Arc-enabled Open Service Mesh](tutorial-arc-enabled-open-service-mesh.md) -* [Azure App Service on Azure Arc](../../app-service/overview-arc-integration.md) -* [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md) -* [Azure API Management on Azure Arc](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md) +* Learn more about [how extensions work with Arc-enabled Kubernetes clusters](conceptual-extensions.md). +* Review the [cluster extensions currently available for Azure Arc-enabled Kubernetes](extensions-release.md). |
azure-arc | Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md | Title: "Troubleshoot common Azure Arc-enabled Kubernetes issues" Previously updated : 11/04/2022 Last updated : 01/23/2023 description: "Learn how to resolve common issues with Azure Arc-enabled Kubernetes clusters and GitOps." The extension status also returns as "Failed". The extension-agent pod is trying to get its token from IMDS on the cluster in order to talk to the extension service in Azure, but the token request is intercepted by the [pod identity](../../aks/use-azure-ad-pod-identity.md). -The workaround is to create an `AzurePodIdentityException` that will tell Azure AD Pod Identity to ignore the token requests from flux-extension pods. +You can fix this issue by upgrading to the latest version of the `microsoft.flux` extension. For version 1.6.1 or earlier, the workaround is to create an `AzurePodIdentityException` that will tell Azure AD Pod Identity to ignore the token requests from flux-extension pods. ```console apiVersion: aadpodidentity.k8s.io/v1 spec: ### Flux v2 - Installing the `microsoft.flux` extension in a cluster with Kubelet Identity enabled -When working with Azure Kubernetes clusters, one of the authentication options to use is kubelet identity. In order to let Flux use this, add a parameter --config useKubeletIdentity=true at the time of Flux extension installation. +When working with Azure Kubernetes clusters, one of the authentication options is *kubelet identity* using a user-assigned managed identity. Using kubelet identity can reduce operational overhead and increase security when connecting to Azure resources such as Azure Container Registry. ++To let Flux use kubelet identity, add the parameter `--config useKubeletIdentity=true` when installing the Flux extension. This option is supported starting with version 1.6.1 of the extension. 
```console az k8s-extension create --resource-group <resource-group> --cluster-name <cluster-name> --cluster-type managedClusters --name flux --extension-type microsoft.flux --config useKubeletIdentity=true az k8s-extension create --resource-group <resource-group> --cluster-name <cluste ### Flux v2 - `microsoft.flux` extension installation CPU and memory limits -The controllers installed in your Kubernetes cluster with the Microsoft.Flux extension require the following CPU and memory resource limits to properly schedule on Kubernetes cluster nodes. +The controllers installed in your Kubernetes cluster with the Microsoft Flux extension require the following CPU and memory resource limits to properly schedule on Kubernetes cluster nodes. | Container Name | CPU limit | Memory limit | | -- | -- | -- | The controllers installed in your Kubernetes cluster with the Microsoft.Flux ext | fluent-bit | 20m | 150Mi | | helm-controller | 1000m | 1Gi | | source-controller | 1000m | 1Gi |-| kustomize-controller | 1000m | 1Gi | +| kustomize-controller | 1000m | 1Gi | | notification-controller | 1000m | 1Gi | | image-automation-controller | 1000m | 1Gi | | image-reflector-controller | 1000m | 1Gi | If you have enabled a custom or built-in Azure Gatekeeper Policy, such as `Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits`, that limits the resources for containers on Kubernetes clusters, you will need to either ensure that the resource limits on the policy are greater than the limits shown above, or ensure that the `flux-system` namespace is part of the `excludedNamespaces` parameter in the policy assignment. - ## Monitoring Azure Monitor for Containers requires its DaemonSet to run in privileged mode. 
To successfully set up a Canonical Charmed Kubernetes cluster for monitoring, run the following command: osm-controller-b5bd66db-wglzl 0/1 Evicted 0 61m osm-controller-b5bd66db-wvl9w 1/1 Running 0 31m ``` -Even though one controller was _evicted_ at some point, there's another which is `READY 1/1` and `Running` with `0` restarts. If the column `READY` is anything other than `1/1`, the service mesh would be in a broken state. Column `READY` with `0/1` indicates the control plane container is crashing. Use the following command to inspect controller logs: +Even though one controller was *Evicted* at some point, there's another which is `READY 1/1` and `Running` with `0` restarts. If the column `READY` is anything other than `1/1`, the service mesh would be in a broken state. Column `READY` with `0/1` indicates the control plane container is crashing. Use the following command to inspect controller logs: ```bash kubectl logs -n arc-osm-system -l app=osm-controller kubectl get endpoints -n arc-osm-system osm-injector If the OSM Injector is healthy, you'll see output similar to the following: -``` +```output NAME ENDPOINTS AGE osm-injector 10.240.1.172:9090 75m ``` kubectl get MutatingWebhookConfiguration arc-osm-webhook-osm -o json | jq '.webh ``` A well-configured **Mutating** webhook configuration will have output similar to the following:-``` ++```output { "name": "osm-injector", "namespace": "arc-osm-system", kubectl get namespace bookbuyer -o json | jq '.metadata.annotations' The following annotation must be present: -``` +```json { "openservicemesh.io/sidecar-injection": "enabled" } ``` View the labels of the namespace `bookbuyer`:+ ```bash kubectl get namespace bookbuyer -o json | jq '.metadata.labels' ``` The following label must be present: -``` +```json { "openservicemesh.io/monitored-by": "osm" } |
azure-arc | Network Requirements Consolidated | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/network-requirements-consolidated.md | Title: Azure Arc network requirements description: A consolidated list of network requirements for Azure Arc features and Azure Arc-enabled services. Lists endpoints, ports, and protocols. Previously updated : 03/01/2022 Last updated : 01/30/2023 |
azure-arc | Network Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/network-requirements.md | Title: Azure Arc resource bridge (preview) network requirements description: Learn about network requirements for Azure Arc resource bridge (preview) including URLs that must be allowlisted. Previously updated : 12/06/2022 Last updated : 01/30/2023 # Azure Arc resource bridge (preview) network requirements This article describes the networking requirements for deploying Azure Arc resource bridge (preview) in your enterprise. +## Configuration requirements ++### Static Configuration ++Static configuration is recommended for Arc resource bridge because the resource bridge needs three static IPs in the same subnet for the control plane, appliance VM, and reserved appliance VM (for upgrade). The control plane corresponds to the `controlplaneendpoint` parameter, the appliance VM IP to `k8snodeippoolstart`, and the reserved appliance VM IP to `k8snodeippoolend` in the `createconfig` command that creates the bridge configuration files. If using DHCP, reserve those IP addresses, ensuring they are outside the DHCP range. ++### IP Address Prefix ++The subnet of the IP addresses for Arc resource bridge must lie in the IP address prefix that is passed in the `ipaddressprefix` parameter of the `createconfig` command. The IP address prefix is the IP prefix that is exposed by the network to which Arc resource bridge is connected. It is entered as the subnet's IP address range for the virtual network and subnet mask (IP Mask) in CIDR notation, for example `192.168.7.1/24`. Consult your system or network administrator to obtain the IP address prefix in CIDR notation. An IP Subnet CIDR calculator may be used to obtain this value. ++### Gateway ++The gateway address provided in the `createconfig` command must be in the same subnet specified in the IP address prefix. 
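The static-IP and gateway constraints above can be sketched with a small helper. This is an illustrative check, not part of the Arc tooling; the dictionary keys simply mirror the `createconfig` parameter names described in this section.

```python
import ipaddress

def validate_bridge_config(ip_prefix, control_plane, pool_start, pool_end, gateway):
    """Check that the three static IPs and the gateway all fall inside
    the CIDR passed as `ipaddressprefix` to `createconfig`."""
    subnet = ipaddress.ip_network(ip_prefix, strict=False)
    addresses = {
        "controlplaneendpoint": control_plane,
        "k8snodeippoolstart": pool_start,
        "k8snodeippoolend": pool_end,
        "gateway": gateway,
    }
    # Return the names of any parameters whose address is outside the subnet.
    return [name for name, addr in addresses.items()
            if ipaddress.ip_address(addr) not in subnet]

# Using the example prefix from the text, 192.168.7.1/24:
print(validate_bridge_config("192.168.7.1/24",
                             "192.168.7.10", "192.168.7.11",
                             "192.168.7.12", "192.168.7.1"))  # []
```

An empty result means the three static IPs and the gateway are consistent with the prefix; any name returned points at the offending `createconfig` parameter.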
++### DNS Server ++DNS Server must have internal and external endpoint resolution. The appliance VM and control plane need to resolve the management machine and vice versa. All three must be able to reach the required URLs for deployment. ++## General network requirements + [!INCLUDE [network-requirement-principles](../includes/network-requirement-principles.md)] [!INCLUDE [network-requirements](includes/network-requirements.md)] ## Additional network requirements -In addition, resource bridge (preview) requires [Arc-enabled Kubernetes endpoints](../network-requirements-consolidated.md#azure-arc-enabled-kubernetes-endpoints). +In addition, resource bridge (preview) requires connectivity to the [Arc-enabled Kubernetes endpoints](../network-requirements-consolidated.md?tabs=azure-cloud). ++> [!NOTE] +> The URLs listed here are required for Arc resource bridge only. Other Arc products (such as Arc-enabled VMware vSphere) may have additional required URLs. For details, see [Azure Arc network requirements](../network-requirements-consolidated.md). ++## SSL proxy configuration ++Azure Arc resource bridge must be configured for proxy so that it can connect to the Azure services. This configuration is handled automatically. However, proxy configuration of the management machine isn't configured by the Azure Arc resource bridge. ++There are only two certificates that should be relevant when deploying the Arc resource bridge behind an SSL proxy: the SSL certificate for your SSL proxy (so that the host and guest trust your proxy FQDN and can establish an SSL connection to it), and the SSL certificate of the Microsoft download servers. This certificate must be trusted by your proxy server itself, as the proxy is the one establishing the final connection and needs to trust the endpoint. Non-Windows machines may not trust this second certificate by default, so you may need to ensure that it's trusted. 
++## Exclusion list for no proxy ++The following table contains the list of addresses that must be excluded by using the `-noProxy` parameter in the `createconfig` command. ++| **IP Address** | **Reason for exclusion** | +| -- | | +| localhost, 127.0.0.1 | Localhost traffic | +| .svc | Internal Kubernetes service traffic (.svc) where _.svc_ represents a wildcard name. This is similar to saying \*.svc, but none is used in this schema. | +| 10.0.0.0/8 | Private network address space | +| 172.16.0.0/12 | Private network address space - Kubernetes Service CIDR | +| 192.168.0.0/16 | Private network address space - Kubernetes Pod CIDR | +| .contoso.com | You may want to exempt your enterprise namespace (.contoso.com) from being directed through the proxy. To exclude all addresses in a domain, you must add the domain to the `noProxy` list. Use a leading period rather than a wildcard (\*) character. In the sample, the entry `.contoso.com` excludes the addresses `prefix1.contoso.com`, `prefix2.contoso.com`, and so on. | ++The default value for `noProxy` is `localhost,127.0.0.1,.svc,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16`. While these default values will work for many networks, you may need to add more subnet ranges and/or names to the exemption list. For example, you may want to exempt your enterprise namespace (.contoso.com) from being directed through the proxy. You can achieve that by specifying the values in the `noProxy` list. ## Next steps |
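The matching behavior described in the exclusion table above can be sketched with a small helper. This is a hypothetical illustration, not Azure or proxy-client code: leading-period entries match domain suffixes, CIDR entries match IP addresses, and everything else must match exactly.

```python
import ipaddress

# Default noProxy value from the text, plus the sample enterprise domain.
NO_PROXY = "localhost,127.0.0.1,.svc,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.contoso.com"

def bypasses_proxy(host, no_proxy=NO_PROXY):
    """Return True if `host` matches an entry in the noProxy list."""
    for entry in no_proxy.split(","):
        entry = entry.strip()
        if entry.startswith("."):           # domain suffix, e.g. .contoso.com
            if host.endswith(entry):
                return True
        elif "/" in entry:                  # CIDR range, e.g. 10.0.0.0/8
            try:
                if ipaddress.ip_address(host) in ipaddress.ip_network(entry):
                    return True
            except ValueError:              # host isn't an IP address
                continue
        elif host == entry:                 # exact match, e.g. localhost
            return True
    return False

print(bypasses_proxy("prefix1.contoso.com"))  # True: matched by .contoso.com
print(bypasses_proxy("10.1.2.3"))             # True: matched by 10.0.0.0/8
print(bypasses_proxy("example.org"))          # False: goes through the proxy
```

Note how the leading-period entry `.contoso.com` picks up `prefix1.contoso.com` without any wildcard character, exactly as the table describes.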
azure-arc | Vmware Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/vmware-faq.md | The easiest way to think of this is as follows: - Azure Arc-enabled servers interact on the guest operating system level, with no awareness of the underlying infrastructure fabric and the virtualization platform that it's running on. Since Arc-enabled servers also support bare-metal machines, there may, in fact, not even be a host hypervisor in some cases. -- Azure Arc-enabled VMware vSphere is a superset of Arc-enabled servers that extends management capabilities beyond the guest operating system to the VM itself. This provides lifecycle management and CRUD (Create, Read, Update, and Delete) operations on a VMware vSphere VM. These lifecycle management capabilities are exposed in the Azure portal and look and feel just like a regular Azure VM. See [What is Azure Arc-enabled VMware vSphere](/azure/azure-arc/vmware-vsphere/overview.md) to learn more.+- Azure Arc-enabled VMware vSphere is a superset of Arc-enabled servers that extends management capabilities beyond the guest operating system to the VM itself. This provides lifecycle management and CRUD (Create, Read, Update, and Delete) operations on a VMware vSphere VM. These lifecycle management capabilities are exposed in the Azure portal and look and feel just like a regular Azure VM. See [What is Azure Arc-enabled VMware vSphere](../vmware-vsphere/overview.md) to learn more. > [!NOTE] > Azure Arc-enabled VMware vSphere also provides guest operating system management; in fact, it uses the same components as Azure Arc-enabled servers. However, during Public Preview, not all [Azure services supported by Azure Arc-enabled servers](./manage-vm-extensions.md) are available for Arc-enabled VMware vSphere - currently, Azure Monitor, Update Management, and Microsoft Defender for Cloud are not supported. 
In addition, Arc-enabled VMware vSphere is [supported by Azure VMware Solution (AVS)](../../azure-vmware/deploy-arc-for-azure-vmware-solution.md). |
azure-cache-for-redis | Cache How To Import Export Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-import-export-data.md | description: Learn how to import and export data to and from blob storage with y Previously updated : 09/01/2022 Last updated : 01/31/2023 Use import to bring Redis compatible RDB files from any Redis server running in > [!NOTE] > Before beginning the import operation, ensure that your Redis Database (RDB) file or files are uploaded into page or block blobs in Azure storage, in the same region and subscription as your Azure Cache for Redis instance. For more information, see [Get started with Azure Blob storage](../storage/blobs/storage-quickstart-blobs-dotnet.md). If you exported your RDB file using the [Azure Cache for Redis Export](#export) feature, your RDB file is already stored in a page blob and is ready for importing.-> ++> [!IMPORTANT] +> Currently, importing from Redis Enterprise tier to Premium tier is not supported. > 1. To import one or more exported cache blobs, [browse to your cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the Azure portal and select **Import data** from the **Resource menu**. In the working pane, you see **Choose Blob(s)** where you can find RDB files. |
azure-cache-for-redis | Cache How To Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-upgrade.md | -> [!IMPORTANT] -> We are improving the upgrade experience and have temporarily disabled the Redis version upgrade. We recommend that you upgrade your caches starting in February 2023. - Azure Cache for Redis supports upgrading the version of your Azure Cache for Redis from Redis 4 to Redis 6. Upgrading is similar to regular monthly maintenance. Upgrading follows the same pattern as maintenance: First, the Redis version on the replica node is updated, followed by an update to the primary node. Your client application should treat the upgrade operation exactly like a planned maintenance event. As a precautionary step, we recommend exporting the data from your existing Redis 4 cache and testing your client application with a Redis 6 cache in a lower environment before upgrading. Before you upgrade, check the Redis version of a cache by selecting **Properties ## Upgrade using the Azure portal -> [!IMPORTANT] -> We are improving the upgrade experience and have temporarily disabled the Redis version upgrade. We recommend that you upgrade your caches starting in February 2023. +1. In the Azure portal, select the Azure Cache for Redis instance that you want to upgrade from Redis 4 to Redis 6. ++1. On the left side of the screen, select **Advanced settings**. ++1. If your cache instance is eligible to be upgraded, you should see the following blue banner. If you want to proceed, select the text in the banner. ++ :::image type="content" source="media/cache-how-to-upgrade/blue-banner-upgrade-cache.png" alt-text="Screenshot informing you that you can upgrade your cache to Redis 6 with additional features. Upgrading your cache instance cannot be reversed."::: ++1. A dialog box displays a popup notifying you that upgrading is permanent and might cause a brief connection blip. 
Select **Yes** if you would like to upgrade your cache instance. ++ :::image type="content" source="media/cache-how-to-upgrade/dialog-version-upgrade.png" alt-text="Screenshot showing a dialog with more information about upgrading your cache with Yes selected."::: ++1. To check on the status of the upgrade, navigate to **Overview**. ++ :::image type="content" source="media/cache-how-to-upgrade/upgrade-status.png" alt-text="Screenshot showing Overview in the Resource menu. Status shows cache is being upgraded."::: ## Upgrade using Azure CLI -> [!IMPORTANT] -> We are improving the upgrade experience and have temporarily disabled the Redis version upgrade. We recommend that you upgrade your caches starting in February 2023. +To upgrade a cache from 4 to 6 using the Azure CLI, use the following command: ++```azurecli-interactive +az redis update --name cacheName --resource-group resourceGroupName --set redisVersion=6 +``` ## Upgrade using PowerShell -> [!IMPORTANT] -> We are improving the upgrade experience and have temporarily disabled the Redis version upgrade. We recommend that you upgrade your caches starting in February 2023. +To upgrade a cache from 4 to 6 using PowerShell, use the following command: ++```powershell-interactive +Set-AzRedisCache -Name "CacheName" -ResourceGroupName "ResourceGroupName" -RedisVersion "6" +``` ## Next steps |
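Before and after running the upgrade commands above, a client application can confirm which version a cache reports by inspecting the `redis_version` field of a Redis `INFO` reply. The helper below is a hypothetical sketch of that check; the sample reply text is illustrative, not output from a real cache.

```python
def redis_major_version(info_text):
    """Extract the major version from the `redis_version` field of an INFO reply."""
    for line in info_text.splitlines():
        if line.startswith("redis_version:"):
            version = line.split(":", 1)[1].strip()
            return int(version.split(".")[0])
    raise ValueError("redis_version field not found in INFO output")

# Illustrative fragment of an INFO reply from an upgraded cache.
sample_info = "# Server\r\nredis_version:6.0.14\r\n"
print(redis_major_version(sample_info))  # 6
```

A result of `6` confirms the instance is running Redis 6; running the same check against a pre-upgrade cache would return `4`.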
azure-functions | Create First Function Cli Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-node.md | Before you begin, you must have the following: + The Azure [Az PowerShell module](/powershell/azure/install-az-ps) version 5.9.0 or later. -+ [Node.js](https://nodejs.org/) version 16 or 18 (preview). ++ [Node.js](https://nodejs.org/) version 18 or 16. ### Prerequisite check Each binding requires a direction, a type, and a unique name. The HTTP trigger h # [Azure CLI](#tab/azure-cli) ```azurecli- az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 4 --name <APP_NAME> --storage-account <STORAGE_NAME> + az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location <REGION> --runtime node --runtime-version 18 --functions-version 4 --name <APP_NAME> --storage-account <STORAGE_NAME> ``` - The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. If you're using Node.js 16, also change `--runtime-version` to `16`. + The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. It is recommended that you use the latest version of Node.js, which is currently 18. You can specify the version by setting `--runtime-version` to `18`. 
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell- New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime node -RuntimeVersion 14 -FunctionsVersion 4 -Location <REGION> + New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime node -RuntimeVersion 18 -FunctionsVersion 4 -Location <REGION> ``` - The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. If you're using Node.js 16, change `-RuntimeVersion` to `16`. + The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. It is recommended that you use the latest version of Node.js, which is currently 18. You can specify the version by setting `-RuntimeVersion` to `18`. 
azure-functions | Create First Function Cli Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-typescript.md | Before you begin, you must have the following: + The Azure [Az PowerShell module](/powershell/azure/install-az-ps) version 5.9.0 or later. -+ [Node.js](https://nodejs.org/) version 14 or 16 (preview). ++ [Node.js](https://nodejs.org/) version 18 or 16. ### Prerequisite check Each binding requires a direction, a type, and a unique name. The HTTP trigger h # [Azure CLI](#tab/azure-cli) ```azurecli- az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 4 --name <APP_NAME> --storage-account <STORAGE_NAME> + az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location <REGION> --runtime node --runtime-version 18 --functions-version 4 --name <APP_NAME> --storage-account <STORAGE_NAME> ``` - The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. If you're using Node.js 16, also change `--runtime-version` to `16`. + The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. It is recommended that you use the latest version of Node.js, which is currently 18. You can specify the version by setting `--runtime-version` to `18`. 
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell- New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime node -RuntimeVersion 14 -FunctionsVersion 4 -Location '<REGION>' + New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime node -RuntimeVersion 18 -FunctionsVersion 4 -Location '<REGION>' ``` - The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. If you're using Node.js 16, change `-RuntimeVersion` to `16`. -+ The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. It is recommended that you use the latest version of Node.js, which is currently 18. You can specify the version by setting `-RuntimeVersion` to `18`. In the previous example, replace `<STORAGE_NAME>` with the name of the account you used in the previous step, and replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app. |
azure-functions | Create First Function Vs Code Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-typescript.md | Before you get started, make sure you have the following requirements in place: + An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). -+ [Node.js 16.x](https://nodejs.org/en/download/releases/) or [Node.js 18.x](https://nodejs.org/en/download/releases/) (preview). Use the `node --version` command to check your version. ++ [Node.js 18.x](https://nodejs.org/en/download/releases/) or [Node.js 16.x](https://nodejs.org/en/download/releases/). Use the `node --version` command to check your version. + [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms). |
azure-functions | Durable Functions Isolated Create First Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-isolated-create-first-csharp.md | + + Title: "Create your first C# durable function running in the isolated worker" +description: Create and publish a C# Azure Durable Function running in the isolated worker using Visual Studio or Visual Studio Code. ++ Last updated : 01/31/2023++zone_pivot_groups: code-editors-set-one +ms.devlang: csharp ++++# Create your first Durable Function in C# ++Durable Functions is an extension of [Azure Functions](../functions-overview.md) that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you. ++Like Azure Functions, Durable Functions supports two process models for .NET class library functions: +++To learn more about the two processes, refer to [Differences between in-process and isolated worker process .NET Azure Functions](../dotnet-isolated-in-process-differences.md). +++In this article, you learn how to use Visual Studio Code to locally create and test a "hello world" durable function. This function orchestrates and chains together calls to other functions. You can then publish the function code to Azure. These tools are available as part of the Visual Studio Code [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions). +++## Prerequisites ++To complete this tutorial: ++* Install [Visual Studio Code](https://code.visualstudio.com/download). ++* Install the following Visual Studio Code extensions: + * [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) + * [C#](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) ++* Make sure that you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md). 
++* Durable Functions require an Azure storage account. You need an Azure subscription. ++* Make sure that you have version 3.1 or a later version of the [.NET Core SDK](https://dotnet.microsoft.com/download) installed. +++## <a name="create-an-azure-functions-project"></a>Create your local project ++In this section, you use Visual Studio Code to create a local Azure Functions project. ++1. In Visual Studio Code, press <kbd>F1</kbd> (or <kbd>Ctrl/Cmd+Shift+P</kbd>) to open the command palette. In the command palette, search for and select `Azure Functions: Create New Project...`. ++ :::image type="content" source="media/durable-functions-create-first-csharp/functions-vscode-create-project.png" alt-text="Screenshot of create a function project window."::: ++1. Choose an empty folder location for your project and choose **Select**. ++1. Follow the prompts and provide the following information: ++ | Prompt | Value | Description | + | | -- | -- | + | Select a language for your function app project | C# | Create a local C# Functions project. | + | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. | + | Select a .NET runtime | .NET 7.0 isolated | Creates a function project that supports .NET 7 running in isolated worker process and the Azure Functions Runtime 4.0. For more information, see [How to target Azure Functions runtime version](../functions-versions.md). | + | Select a template for your project's first function | Skip for now | | + | Select how you would like to open your project | Open in current window | Reopens Visual Studio Code in the folder you selected. | ++Visual Studio Code installs the Azure Functions Core Tools if needed. It also creates a function app project in a folder. 
This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files. ++## Add NuGet package references ++Add the following to your app project: ++```xml +<ItemGroup> + <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.10.0" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.DurableTask" Version="1.0.0" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Http" Version="3.0.13" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.7.0" OutputItemType="Analyzer" /> +</ItemGroup> +``` +## Add functions to the app ++The most basic Durable Functions app contains the following three functions. Add them to a new class in the app: ++```csharp +using Microsoft.Azure.Functions.Worker.Http; +using Microsoft.Azure.Functions.Worker; +using Microsoft.DurableTask; +using Microsoft.DurableTask.Client; +using Microsoft.Extensions.Logging; ++static class HelloSequence +{ + [Function(nameof(HelloCities))] + public static async Task<string> HelloCities([OrchestrationTrigger] TaskOrchestrationContext context) + { + string result = ""; + result += await context.CallActivityAsync<string>(nameof(SayHello), "Tokyo") + " "; + result += await context.CallActivityAsync<string>(nameof(SayHello), "London") + " "; + result += await context.CallActivityAsync<string>(nameof(SayHello), "Seattle"); + return result; + } ++ [Function(nameof(SayHello))] + public static string SayHello([ActivityTrigger] string cityName, FunctionContext executionContext) + { + ILogger logger = executionContext.GetLogger(nameof(SayHello)); + logger.LogInformation("Saying hello to {name}", cityName); + return $"Hello, {cityName}!"; + } + + [Function(nameof(StartHelloCities))] + public static async Task<HttpResponseData> StartHelloCities( + [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req, + [DurableClient] 
DurableTaskClient client, + FunctionContext executionContext) + { + ILogger logger = executionContext.GetLogger(nameof(StartHelloCities)); ++ string instanceId = await client.ScheduleNewOrchestrationInstanceAsync(nameof(HelloCities)); + logger.LogInformation("Created new orchestration with instance ID = {instanceId}", instanceId); ++ return client.CreateCheckStatusResponse(req, instanceId); + } +} +``` +| Method | Description | +| -- | -- | +| **`HelloCities`** | Manages the durable orchestration. In this case, the orchestration chains three activity function calls and concatenates their results. When the three calls are complete, it returns the combined result. | +| **`SayHello`** | Returns a greeting for the given city. It's the function that contains the business logic that is being orchestrated. | +| **`StartHelloCities`** | An [HTTP-triggered function](../functions-bindings-http-webhook.md) that starts an instance of the orchestration and returns a check status response. | ++## Configure storage ++Your app needs a storage account for runtime information. To use [Azurite](https://learn.microsoft.com/azure/storage/common/storage-use-azurite?tabs=visual-studio-code), which is an emulator for Azure Storage, set `AzureWebJobsStorage` in _local.settings.json_ to `UseDevelopmentStorage=true`: ++```json +{ + "IsEncrypted": false, + "Values": { + "AzureWebJobsStorage": "UseDevelopmentStorage=true", + "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated" + } +} +``` +You can install the Azurite extension in Visual Studio Code and start it by running `Azurite: Start` in the command palette. ++There are other storage options you can use for your Durable Functions app. See [Durable Functions storage providers](https://learn.microsoft.com/azure/azure-functions/durable/durable-functions-storage-providers) to learn more about different storage options and what benefits they provide. +++## Test the function locally ++Azure Functions Core Tools lets you run an Azure Functions project locally. 
You're prompted to install these tools the first time you start a function from Visual Studio Code. ++1. To test your function, set a breakpoint in the `SayHello` activity function code and press <kbd>F5</kbd> to start the function app project. Output from Core Tools is displayed in the **Terminal** panel. ++ > [!NOTE] + > For more information on debugging, see [Durable Functions Diagnostics](durable-functions-diagnostics.md#debugging). ++1. In the **Terminal** panel, copy the URL endpoint of your HTTP-triggered function. ++ :::image type="content" source="media/durable-functions-create-first-csharp/isolated-functions-vscode-debugging.png" alt-text="Screenshot of Azure local output window."::: ++1. Use a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), and then send an HTTP POST request to the URL endpoint. ++ The response is the HTTP function's initial result, letting you know that the durable orchestration has started successfully. It isn't yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration. ++1. Copy the URL value for `statusQueryGetUri`, paste it into the browser's address bar, and execute the request. Alternatively, you can continue to use Postman to issue the GET request. ++ The request queries the orchestration instance for its status. You should eventually get a response showing that the instance has completed, along with the outputs or results of the durable function. It looks like: ++ ```json + { + "name":"HelloCities", + "instanceId":"7f99f9474a6641438e5c7169b7ecb3f2", + "runtimeStatus":"Completed", + "input":null, + "customStatus":null, + "output":"Hello, Tokyo! Hello, London! Hello, Seattle!", + "createdTime":"2023-01-31T18:48:49Z", + "lastUpdatedTime":"2023-01-31T18:48:56Z" + } + ``` ++1. To stop debugging, press <kbd>Shift + F5</kbd> in Visual Studio Code. 
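Client code typically polls `statusQueryGetUri` until the orchestration reaches a terminal state. As a minimal, language-agnostic sketch (shown in Python for brevity; the terminal-state set below is an assumption based on the `runtimeStatus` values this article shows, not an exhaustive list), interpreting the sample payload above might look like:

```python
import json

# Sample statusQueryGetUri payload copied from the step above.
payload = """{
  "name": "HelloCities",
  "instanceId": "7f99f9474a6641438e5c7169b7ecb3f2",
  "runtimeStatus": "Completed",
  "input": null,
  "customStatus": null,
  "output": "Hello, Tokyo! Hello, London! Hello, Seattle!",
  "createdTime": "2023-01-31T18:48:49Z",
  "lastUpdatedTime": "2023-01-31T18:48:56Z"
}"""

# Assumed terminal states a poller would stop on; consult the Durable
# Functions HTTP API docs for the complete runtimeStatus list.
TERMINAL = {"Completed", "Failed", "Terminated"}

status = json.loads(payload)
if status["runtimeStatus"] in TERMINAL:
    # Only read the output once the instance is done.
    print(status["output"])
```

A real client would issue the GET request in a loop with a delay, breaking out when the status lands in the terminal set.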
++After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure. ++++## Test your function in Azure ++1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function must be in the following format: ++ `https://<functionappname>.azurewebsites.net/api/HelloOrchestration_HttpStart` ++1. Paste this new URL for the HTTP request into your browser's address bar. You should get the same status response as before, now from the published app. ++## Next steps ++You have used Visual Studio Code to create and publish a C# durable function app. ++> [!div class="nextstepaction"] +> [Learn about common durable function patterns](durable-functions-overview.md#application-patterns) ++++In this article, you learn how to use Visual Studio 2022 to locally create and test a "hello world" durable function that runs in the isolated worker process. This function orchestrates and chains together calls to other functions. You then publish the function code to Azure. These tools are available as part of the Azure development workload in Visual Studio 2022. +++## Prerequisites ++To complete this tutorial: ++* Install [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). Make sure that the **Azure development** workload is also installed. Visual Studio 2019 also supports Durable Functions development, but the UI and steps differ. ++* Verify that you have the [Azurite Emulator](../../storage/common/storage-use-azurite.md) installed and running. +++## Create a function app project ++The Azure Functions template creates a project that can be published to a function app in Azure. A function app lets you group functions as a logical unit for easier management, deployment, scaling, and sharing of resources. ++1. In Visual Studio, select **New** > **Project** from the **File** menu. ++2. 
In the **Create a new project** dialog, search for `functions`, choose the **Azure Functions** template, and then select **Next**. ++ :::image type="content" source="./media/durable-functions-create-first-csharp/functions-isolated-vs-new-project.png" alt-text="Screenshot of new project dialog in Visual Studio."::: ++3. Enter a **Project name** for your project, and select **OK**. The project name must be valid as a C# namespace, so don't use underscores, hyphens, or nonalphanumeric characters. ++4. Under **Additional information**, use the settings specified in the table that follows the image. ++ :::image type="content" source="./media/durable-functions-create-first-csharp/functions-isolated-vs-new-function.png" alt-text="Screenshot of create a new Azure Functions Application dialog in Visual Studio."::: ++ | Setting | Suggested value | Description | + | | - |-- | + | **Functions worker** | .NET 7 Isolated | Creates a function project that supports .NET 7 running in isolated worker process and the Azure Functions Runtime 4.0. For more information, see [How to target Azure Functions runtime version](../functions-versions.md). | + | **Function** | Empty | Creates an empty function app. | + | **Storage account** | Storage Emulator | A storage account is required for durable function state management. | ++5. Select **Create** to create an empty function project. This project has the basic configuration files needed to run your functions. Make sure the box for _"Use Azurite for runtime storage account (AzureWebJobsStorage)"_ is checked. This uses the Azurite emulator. ++There are other storage options you can use for your Durable Functions app. See [Durable Functions storage providers](https://learn.microsoft.com/azure/azure-functions/durable/durable-functions-storage-providers) to learn more about different storage options and what benefits they provide. 
++## Add NuGet package references ++Add the following to your app project: ++```xml +<ItemGroup> + <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.10.0" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.DurableTask" Version="1.0.0" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Http" Version="3.0.13" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.7.0" OutputItemType="Analyzer" /> +</ItemGroup> +``` +## Add functions to the app ++The most basic Durable Functions app contains the following three functions. Add them to a new class in the app: ++```csharp +using Microsoft.Azure.Functions.Worker.Http; +using Microsoft.Azure.Functions.Worker; +using Microsoft.DurableTask; +using Microsoft.DurableTask.Client; +using Microsoft.Extensions.Logging; ++static class HelloSequence +{ + [Function(nameof(HelloCities))] + public static async Task<string> HelloCities([OrchestrationTrigger] TaskOrchestrationContext context) + { + string result = ""; + result += await context.CallActivityAsync<string>(nameof(SayHello), "Tokyo") + " "; + result += await context.CallActivityAsync<string>(nameof(SayHello), "London") + " "; + result += await context.CallActivityAsync<string>(nameof(SayHello), "Seattle"); + return result; + } ++ [Function(nameof(SayHello))] + public static string SayHello([ActivityTrigger] string cityName, FunctionContext executionContext) + { + ILogger logger = executionContext.GetLogger(nameof(SayHello)); + logger.LogInformation("Saying hello to {name}", cityName); + return $"Hello, {cityName}!"; + } + + [Function(nameof(StartHelloCities))] + public static async Task<HttpResponseData> StartHelloCities( + [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req, + [DurableClient] DurableTaskClient client, + FunctionContext executionContext) + { + ILogger logger = executionContext.GetLogger(nameof(StartHelloCities)); ++ string instanceId = 
await client.ScheduleNewOrchestrationInstanceAsync(nameof(HelloCities)); + logger.LogInformation("Created new orchestration with instance ID = {instanceId}", instanceId); ++ return client.CreateCheckStatusResponse(req, instanceId); + } +} ++``` +| Method | Description | +| -- | -- | +| **`HelloCities`** | Manages the durable orchestration. In this case, the orchestration chains three activity function calls and concatenates their results. When the three calls are complete, it returns the combined result. | +| **`SayHello`** | Returns a greeting for the given city. It's the function that contains the business logic that is being orchestrated. | +| **`StartHelloCities`** | An [HTTP-triggered function](../functions-bindings-http-webhook.md) that starts an instance of the orchestration and returns a check status response. | +++## Test the function locally ++Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. You're prompted to install these tools the first time you start a function from Visual Studio. ++1. To test your function, press <kbd>F5</kbd>. If prompted, accept the request from Visual Studio to download and install Azure Functions Core (CLI) tools. You may also need to enable a firewall exception so that the tools can handle HTTP requests. ++2. Copy the URL of your function from the Azure Functions runtime output. ++ :::image type="content" source="./media/durable-functions-create-first-csharp/isolated-functions-vs-debugging.png" alt-text="Screenshot of Azure local runtime."::: ++3. Paste the URL for the HTTP request into your browser's address bar and execute the request. 
The following shows the response in the browser to the local GET request returned by the function: ++ :::image type="content" source="./media/durable-functions-create-first-csharp/isolated-functions-vs-status.png" alt-text="Screenshot of the browser window with statusQueryGetUri called out."::: ++ The response is the HTTP function's initial result, letting you know that the durable orchestration has started successfully. It isn't yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration. ++4. Copy the URL value for `statusQueryGetUri`, paste it into the browser's address bar, and execute the request. ++ The request queries the orchestration instance for its status. You should eventually get a response like the following, which shows that the instance has completed and includes the outputs or results of the durable function. ++ ```json + { + "name":"HelloCities", + "instanceId":"668814ac6ce84a43a9e6757f81dbc0bc", + "runtimeStatus":"Completed", + "input":null, + "customStatus":null, + "output":"Hello, Tokyo! Hello, London! Hello, Seattle!", + "createdTime":"2023-01-31T16:44:34Z", + "lastUpdatedTime":"2023-01-31T16:44:37Z" + } + ``` ++5. To stop debugging, press <kbd>Shift + F5</kbd>. ++After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure. ++## Publish the project to Azure ++You must have a function app in your Azure subscription before publishing your project. You can create a function app right from Visual Studio. +++## Test your function in Azure ++1. Copy the base URL of the function app from the Publish profile page. Replace the `localhost:port` portion of the URL you used when testing the function locally with the new base URL. ++ The URL that calls your durable function HTTP trigger must be in the following format: ++ `https://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>_HttpStart` ++2. 
Paste this new URL for the HTTP request into your browser's address bar. You should get the same status response as before, now from the published app. ++## Next steps ++You have used Visual Studio to create and publish a C# Durable Functions app. ++> [!div class="nextstepaction"] +> [Learn about common durable function patterns](durable-functions-overview.md#application-patterns) + |
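Stripped of the Durable Functions APIs, the control flow of the `HelloCities` orchestrator above is the classic function-chaining pattern: each activity call awaits the previous result before the next one starts. A plain-Python sketch of that flow (illustrative only, not Durable Functions code):

```python
def say_hello(city):
    # Stand-in for the SayHello activity function.
    return f"Hello, {city}!"

def hello_cities(call_activity):
    # Mirrors the orchestrator: three sequential ("chained") activity
    # calls whose results are concatenated into one string.
    result = ""
    result += call_activity("Tokyo") + " "
    result += call_activity("London") + " "
    result += call_activity("Seattle")
    return result

print(hello_cities(say_hello))  # Hello, Tokyo! Hello, London! Hello, Seattle!
```

In the real orchestrator, each `call_activity` corresponds to an awaited `context.CallActivityAsync<string>(...)`, which is what lets the runtime checkpoint between calls.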
azure-functions | Functions Add Output Binding Storage Queue Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-vs-code.md | Title: Connect Azure Functions to Azure Storage using Visual Studio Code description: Learn how to connect Azure Functions to an Azure Queue Storage by adding an output binding to your Visual Studio Code project. Previously updated : 06/15/2022 Last updated : 01/31/2023 ms.devlang: csharp, java, javascript, powershell, python, typescript Because you're using the storage connection string, your function connects to th ### Connect Storage Explorer to your account -Skip this section if you have already installed Azure Storage Explorer and connected it to your Azure account. --1. Run the [Azure Storage Explorer](https://storageexplorer.com/) tool, select the connect icon on the left, and select **Add an account**. -- :::image type="content" source="./media/functions-add-output-binding-storage-queue-vs-code/storage-explorer-add-account.png" alt-text="Screenshot of how to add an Azure account to Microsoft Azure Storage Explorer."::: --1. In the **Connect** dialog, choose **Add an Azure account**, choose your **Azure environment**, and then select **Sign in...**. -- :::image type="content" source="./media/functions-add-output-binding-storage-queue-vs-code/storage-explorer-connect-azure-account.png" alt-text="Screenshot of the sign-in to your Azure account window."::: --After you successfully sign in to your account, you see all of the Azure subscriptions associated with your account. ### Examine the output queue |
azure-functions | Functions Add Output Binding Storage Queue Vs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-vs.md | Title: Connect functions to Azure Storage using Visual Studio description: "Learn how to add an output binding to connect your C# class library functions to an Azure Storage queue using Visual Studio." Previously updated : 05/30/2021 Last updated : 01/31/2023 ms.devlang: csharp Most bindings require a stored connection string that Functions uses to access t Before you start this article, you must: -+- Complete [part 1 of the Visual Studio quickstart](./functions-create-your-first-function-visual-studio.md). +- Install [Azure Storage Explorer](https://storageexplorer.com/). Storage Explorer is a tool that you'll use to examine queue messages generated by your output binding. Storage Explorer is supported on macOS, Windows, and Linux-based operating systems. - Sign in to your Azure subscription from Visual Studio. ## Download the function app settings After the binding is defined, you can use the `name` of the binding to access it [!INCLUDE [functions-run-function-test-local-vs](../../includes/functions-run-function-test-local-vs.md)] -A new queue named `outqueue` is created in your storage account by the Functions runtime when the output binding is first used. You'll use Cloud Explorer to verify that the queue was created along with the new message. +A new queue named `outqueue` is created in your storage account by the Functions runtime when the output binding is first used. You'll use Storage Explorer to verify that the queue was created along with the new message. -## Examine the output queue +### Connect Storage Explorer to your account -1. In Visual Studio from the **View** menu, select **Cloud Explorer**. -1. In **Cloud Explorer**, expand your Azure subscription and **Storage Accounts**, then expand the storage account used by your function. 
If you can't remember the storage account name, check the `AzureWebJobsStorage` connection string setting in the *local.settings.json* file. +### Examine the output queue -1. Expand the **Queues** node, and then double-click the queue named **outqueue** to view the contents of the queue in Visual Studio. +1. In Storage Explorer, expand the **Queues** node, and then select the queue named **outqueue**. The queue contains the message that the queue output binding created when you ran the HTTP-triggered function. If you invoked the function with the default `name` value of *Azure*, the queue message is *Name passed to the function: Azure*. -  + :::image type="content" source="./media/functions-add-output-binding-storage-queue-vs-code/function-queue-storage-output-view-queue.png" alt-text="Screenshot of the queue message shown in Azure Storage Explorer."::: -1. Run the function again, send another request, and you'll see a new message appear in the queue. +1. Run the function again, send another request, and you see a new message in the queue. Now, it's time to republish the updated function app to Azure. |
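The queue message described above follows a fixed template; a tiny sketch (hypothetical helper name, mirroring only the message text the article states) of what the quickstart's output binding writes:

```python
def queue_message(name):
    # Message format described in the article for the queue output binding.
    return f"Name passed to the function: {name}"

print(queue_message("Azure"))  # Name passed to the function: Azure
```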
azure-functions | Functions App Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md | The maximum number of instances that the app can scale out to. Default is no lim ## WEBSITE\_NODE\_DEFAULT_VERSION _Windows only._-Sets the version of Node.js to use when running your function app on Windows. You should use a tilde (~) to have the runtime use the latest available version of the targeted major version. For example, when set to `~10`, the latest version of Node.js 10 is used. When a major version is targeted with a tilde, you don't have to manually update the minor version. +Sets the version of Node.js to use when running your function app on Windows. You should use a tilde (~) to have the runtime use the latest available version of the targeted major version. For example, when set to `~18`, the latest version of Node.js 18 is used. When a major version is targeted with a tilde, you don't have to manually update the minor version. |Key|Sample value| |||-|WEBSITE\_NODE\_DEFAULT_VERSION|`~10`| +|WEBSITE\_NODE\_DEFAULT_VERSION|`~18`| ## WEBSITE\_OVERRIDE\_STICKY\_DIAGNOSTICS\_SETTINGS |
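The tilde semantics described above can be made concrete with a small sketch (illustrative Python, not the actual App Service resolution logic): `~18` pins the major version and selects the newest available 18.x release.

```python
def resolve_tilde_version(setting, available):
    # "~18" pins the Node.js major version; among the available releases
    # with that major, the newest one wins (so minor/patch float).
    major = setting.lstrip("~")
    candidates = [v for v in available if v.split(".")[0] == major]
    return max(candidates, key=lambda v: tuple(int(p) for p in v.split(".")))

print(resolve_tilde_version("~18", ["16.19.0", "18.12.1", "18.14.0"]))  # 18.14.0
```

This is why a `~18` app picks up new Node.js 18 minor versions without any manual setting change.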
azure-functions | Functions Infrastructure As Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md | resource functionApp 'Microsoft.Web/sites@2022-03-01' = { The function app must have set `"kind": "functionapp,linux"`, and it must have set property `"reserved": true`. Linux apps should also include a `linuxFxVersion` property under siteConfig. If you're just deploying code, the value for this property is determined by your desired runtime stack in the format of runtime|runtimeVersion. For example: `python|3.7`, `node|14` and `dotnet|3.1`. -The [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) settings aren't supported on Linux Consumption plan. +For Linux Consumption plan it is also required to add the two other settings in the site configuration: [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare). For a sample Bicep file/Azure Resource Manager template, see [Azure Function App Hosted on Linux Consumption Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-linux-consumption). |
azure-functions | Functions Reference Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md | The following table shows current supported Node.js versions for each major vers | Functions version | Node version (Windows) | Node Version (Linux) | ||| |-| 4.x (recommended) | `~18`(preview)<br/>`~16`<br/>`~14` | `node|18`(preview)<br/>`node|16`<br/>`node|14` | +| 4.x (recommended) | `~18`<br/>`~16`<br/>`~14` | `node|18`<br/>`node|16`<br/>`node|14` | | 3.x | `~14`<br/>`~12`<br/>`~10` | `node|14`<br/>`node|12`<br/>`node|10` | | 2.x | `~12`<br/>`~10`<br/>`~8` | `node|10`<br/>`node|8` | | 1.x | 6.11.2 (locked by the runtime) | n/a | You can see the current version that the runtime is using by logging `process.ve # [Windows](#tab/windows-setting-the-node-version) -For Windows function apps, target the version in Azure by setting the `WEBSITE_NODE_DEFAULT_VERSION` [app setting](functions-how-to-use-azure-function-app-settings.md#settings) to a supported LTS version, such as `~16`. +For Windows function apps, target the version in Azure by setting the `WEBSITE_NODE_DEFAULT_VERSION` [app setting](functions-how-to-use-azure-function-app-settings.md#settings) to a supported LTS version, such as `~18`. # [Linux](#tab/linux-setting-the-node-version) For Linux function apps, run the following Azure CLI command to update the Node version. ```azurecli-az functionapp config set --linux-fx-version "node|14" --name "<MY_APP_NAME>" --resource-group "<MY_RESOURCE_GROUP_NAME>" +az functionapp config set --linux-fx-version "node|18" --name "<MY_APP_NAME>" --resource-group "<MY_RESOURCE_GROUP_NAME>" ``` |
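The support matrix in the table above can be encoded as a quick pre-deployment check. The mapping below is transcribed from the Windows column of that table (sketch only, not an official API):

```python
# Node.js majors supported per Functions host version (Windows column
# of the table above; the Linux column differs slightly for 2.x).
SUPPORTED_NODE = {
    "4.x": {"14", "16", "18"},
    "3.x": {"10", "12", "14"},
    "2.x": {"8", "10", "12"},
}

def node_supported(functions_version, node_major):
    # True when the table lists this Node.js major for the host version.
    return node_major in SUPPORTED_NODE.get(functions_version, set())

print(node_supported("4.x", "18"))  # True
print(node_supported("3.x", "18"))  # False
```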
azure-functions | Storage Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md | The storage account connection string must be updated when you regenerate storag It's possible for multiple function apps to share the same storage account without any issues. For example, in Visual Studio you can develop multiple apps using the [Azurite storage emulator](functions-develop-local.md#local-storage-emulator). In this case, the emulator acts like a single storage account. The same storage account used by your function app can also be used to store your application data. However, this approach isn't always a good idea in a production environment. -You may need to use separate store accounts to [avoid host ID collisions](#avoiding-host-id-collisions). +You may need to use separate storage accounts to [avoid host ID collisions](#avoiding-host-id-collisions). ### Lifecycle management policy considerations |
azure-government | Azure Services In Fedramp Auditscope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Azure NetApp Files](../../azure-netapp-files/index.yml) | ✅ | ✅ | ✅ | ✅ | | | [Azure Policy](../../governance/policy/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure Policy's guest configuration](../../governance/machine-configuration/overview.md) | ✅ | ✅ | ✅ | ✅ | |+| [Azure Red Hat OpenShift](../../openshift/index.yml) | ✅ | ✅ | | | | | [Azure Resource Manager](../../azure-resource-manager/management/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | ✅ | ✅ | ✅ | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Virtual Network](../../virtual-network/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Virtual Network NAT](../../virtual-network/nat-gateway/index.yml) | ✅ | ✅ | ✅ | ✅ | | | [Virtual WAN](../../virtual-wan/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ |+| [VM Image Builder](../../virtual-machines/image-builder-overview.md) | ✅ | ✅ | | | | | [VPN Gateway](../../vpn-gateway/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Web Application Firewall](../../web-application-firewall/index.yml) | ✅ | ✅ | ✅ | ✅ | | |
azure-monitor | Azure Monitor Agent Extension Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md | We strongly recommend updating to the latest version at all times, or opt in ## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|-| Jan 2022 | <ul><li>Fixed issue related to incorrect *EventLevel* and *Task* values for Log Analytics *Event* table, to match Windows Event Viewer values</li><li>Added missing columns for IIS logs - *TimeGenerated, Time, Date, Computer, SourceSystem, AMA, W3SVC, SiteName*</li><li>Reliability improvements for metrics collection</li><li>Fixed machine restart issues on for Arc-enabled servers related to repeated calls to HIMDS service</li></ul> | 1.12.0.0 | None | +| Jan 2023 | <ul><li>Fixed issue related to incorrect *EventLevel* and *Task* values for Log Analytics *Event* table, to match Windows Event Viewer values</li><li>Added missing columns for IIS logs - *TimeGenerated, Time, Date, Computer, SourceSystem, AMA, W3SVC, SiteName*</li><li>Reliability improvements for metrics collection</li><li>Fixed machine restart issues for Arc-enabled servers related to repeated calls to HIMDS service</li></ul> | 1.12.0.0 | None | | Nov-Dec 2022 | <ul><li>Support for air-gapped clouds added for [Windows MSI installer for clients](./azure-monitor-agent-windows-client.md) </li><li>Reliability improvements for using AMA with Custom Metrics destination</li><li>Performance and internal logging improvements</li></ul> | 1.11.0.0 | None | | Oct 2022 | **Windows** <ul><li>Increased reliability of data uploads</li><li>Data quality improvements</li></ul> **Linux** <ul><li>Support for `http_proxy` and `https_proxy` environment variables for [network proxy configurations](./azure-monitor-agent-data-collection-endpoint.md#proxy-configuration) for the agent</li><li>[Text logs](./data-collection-text-log.md) <ul><li>Network proxy support 
enabled</li><li>Fixed missing `_ResourceId`</li><li>Increased maximum line size support to 1MB</li></ul></li><li>Support ingestion of syslog events whose timestamp is in the future</li><li>Performance improvements</li><li>Fixed `diskio` metrics instance name dimension to use the disk mount path(s) instead of the device name(s)</li><li>Fixed world writable file issue to lockdown write access to certain agent logs and configuration files stored locally on the machine</li></ul> | 1.10.0.0 | 1.24.2 | | Sep 2022 | Reliability improvements | 1.9.0.0 | None | |
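Before opting in to automatic updates, it can help to confirm which agent version each machine currently runs. A minimal PowerShell sketch, assuming the Az module is installed and that the Azure Monitor Agent extension is published under `Microsoft.Azure.Monitor` (an assumption; adjust the filter if your deployment differs):

```powershell
# Sketch: list Azure Monitor Agent extensions on a VM with their installed versions.
# The "Microsoft.Azure.Monitor" publisher filter is an assumption; adjust if needed.
Get-AzVMExtension -ResourceGroupName "<myResourceGroup>" -VMName "<myVmName>" |
    Where-Object { $_.Publisher -eq "Microsoft.Azure.Monitor" } |
    Select-Object Name, TypeHandlerVersion, ProvisioningState
```

Comparing `TypeHandlerVersion` against the table above shows whether a machine has picked up the latest release.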
azure-monitor | Availability Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-alerts.md | Title: Set up availability alerts with Application Insights - Azure Monitor | Microsoft Docs + Title: Set up availability alerts with Application Insights description: Learn how to set up web tests in Application Insights. Get alerts if a website becomes unavailable or responds slowly. Last updated 12/20/2022-[Azure Application Insights](app-insights-overview.md) Availability tests send web requests to your application at regular intervals from points around the world. It can alert you if your application isn't responding, or if it responds too slowly. +[Application Insights](app-insights-overview.md) availability tests send web requests to your application at regular intervals from points around the world. You can receive alerts if your application isn't responding or if it responds too slowly. ## Enable alerts -Alerts are now automatically enabled by default, but in order to fully configure the alert you first have to initially create your availability test. +Alerts are now automatically enabled by default, but to fully configure an alert, you must initially create your availability test. > [!NOTE]-> With the [new unified alerts](../alerts/alerts-overview.md), the alert rule severity and notification preferences with [action groups](../alerts/action-groups.md) **must be** configured in the alerts experience. Without the following steps, you will only receive in-portal notifications. +> With the [new unified alerts](../alerts/alerts-overview.md), the alert rule severity and notification preferences with [action groups](../alerts/action-groups.md) *must be* configured in the alerts experience. Without the following steps, you'll only receive in-portal notifications. -1. After you save the availability test, on the details tab click on the ellipsis by the test you just made. Click on "Open Rules (Alerts) page". +1. 
After you save the availability test, on the **Details** tab, select the ellipsis by the test you made. Select **Open Rules (Alerts) page**. - :::image type="content" source="./media/availability-alerts/edit-alert.png" alt-text="Screenshot of the Availability pane for an Application Insights resource in the Azure portal. The Open Rules (Alerts) page menu option is highlighted." lightbox="./media/availability-alerts/edit-alert.png"::: + :::image type="content" source="./media/availability-alerts/edit-alert.png" alt-text="Screenshot that shows the Availability pane for an Application Insights resource in the Azure portal and the Open Rules (Alerts) page menu option." lightbox="./media/availability-alerts/edit-alert.png"::: -2. Set the desired severity level, rule description and most importantly - the action group that has the notification preferences you would like to use for this alert rule. +1. Set the severity level, rule description, and action group that has the notification preferences you want to use for this alert rule. ### Alert criteria -Automatically enabled Availability alerts will trigger an email when the endpoint you have defined is unavailable, and when it is available again. Availability alerts which are created through this experience are state-based. When the alert criteria are met, a single alert gets generated when the website is detected as unavailable. If the website is still down the next time the alert criteria is evaluated, it will not generate a new alert. +Automatically enabled availability alerts trigger an email when the endpoint you've defined is unavailable and when it's available again. Availability alerts that are created through this experience are state based. When the alert criteria are met, a single alert gets generated when the website is detected as unavailable. If the website is still down the next time the alert criteria are evaluated, it won't generate a new alert.
-For example, if your website is down for an hour and you have set up an e-mail alert with an evaluation frequency of 15 minutes, you will only receive an e-mail when the website goes down, and a subsequent e-mail when it is back up. You will not receive continuous alerts every 15 minutes reminding you that the website is still unavailable. +For example, suppose that your website is down for an hour and you've set up an email alert with an evaluation frequency of 15 minutes. You'll only receive an email when the website goes down and another email when it's back up. You won't receive continuous alerts every 15 minutes to remind you that the website is still unavailable. -If you don't want to receive notifications when your website is down for only a short period of time (e.g. during maintenance) you can change the evaluation frequency to a higher value than the expected downtime, up to 15 minutes. You can also increase the alert location threshold, so it only triggers an alert if the website is down for a certain number of regions. For longer scheduled downtimes, we recommend temporarily deactivating the alert rule or creating a custom rule. This will give you more options to account for the downtime. +You might not want to receive notifications when your website is down for only a short period of time, for example, during maintenance. You can change the evaluation frequency to a higher value than the expected downtime, up to 15 minutes. You can also increase the alert location threshold so that it only triggers an alert if the website is down for a specific number of regions. For longer scheduled downtimes, temporarily deactivate the alert rule or create a custom rule. It gives you more options to account for the downtime. #### Change the alert criteria -To make changes to location threshold, aggregation period, and test frequency, select the condition on the edit page of the alert rule, which will open the **Configure signal logic** window. 
+To make changes to the location threshold, aggregation period, and test frequency, select the condition on the edit page of the alert rule to open the **Configure signal logic** window. ### Create a custom alert rule -If you need advanced capabilities, you can create a custom alert rule from the **Alerts** tab. Click on **Create** and select **Alert rule**. Choose **Metrics** for **Signal type** to show all available signals, and select **Availability**. +If you need advanced capabilities, you can create a custom alert rule on the **Alerts** tab. Select **Create** > **Alert rule**. Choose **Metrics** for **Signal type** to show all available signals and select **Availability**. -A custom alert rule offers higher values for aggregation period (up to 24 hours instead of 6 hours) and test frequency (up to 1 hour instead of 15 minutes). It also adds options to further define the logic by selecting different operators, aggregation types, and threshold values. +A custom alert rule offers higher values for the aggregation period (up to 24 hours instead of 6 hours) and the test frequency (up to 1 hour instead of 15 minutes). It also adds options to further define the logic by selecting different operators, aggregation types, and threshold values. -- **Alert on X out of Y locations reporting failures** The X out of Y locations alert rule is enabled by default in the [new unified alerts experience](../alerts/alerts-overview.md), when you create a new availability test. You can opt out by selecting the "classic" option or choosing to disable the alert rule. Configure the action groups to receive notifications when the alert triggers by following the steps above. Without this step, you will only receive in-portal notifications when the rule triggers.+- **Alert on X out of Y locations reporting failures**: The X out of Y locations alert rule is enabled by default in the [new unified alerts experience](../alerts/alerts-overview.md) when you create a new availability test. 
You can opt out by selecting the "classic" option or by choosing to disable the alert rule. Configure the action groups to receive notifications when the alert triggers by following the preceding steps. Without this step, you'll only receive in-portal notifications when the rule triggers. -- **Alert on availability metrics** Using the [new unified alerts](../alerts/alerts-overview.md), you can alert on segmented aggregate availability and test duration metrics as well:+- **Alert on availability metrics**: By using the [new unified alerts](../alerts/alerts-overview.md), you can alert on segmented aggregate availability and test duration metrics too: - 1. Select an Application Insights resource in the Metrics experience, and select an Availability metric. + 1. Select an Application Insights resource in the **Metrics** experience, and select an **Availability** metric. - 2. Configure alerts option from the menu will take you to the new experience where you can select specific tests or locations to set up alert rule on. You can also configure the action groups for this alert rule here. + 1. The **Configure alerts** option from the menu takes you to the new experience where you can select specific tests or locations on which to set up alert rules. You can also configure the action groups for this alert rule here. -- **Alert on custom analytics queries** Using the [new unified alerts](../alerts/alerts-overview.md), you can alert on [custom log queries](../alerts/alerts-unified-log.md). With custom queries, you can alert on any arbitrary condition that helps you get the most reliable signal of availability issues. This is also applicable if you are sending custom availability results using the TrackAvailability SDK.+- **Alert on custom analytics queries**: By using the [new unified alerts](../alerts/alerts-overview.md), you can alert on [custom log queries](../alerts/alerts-unified-log.md). 
With custom queries, you can alert on any arbitrary condition that helps you get the most reliable signal of availability issues. It's also applicable if you're sending custom availability results by using the TrackAvailability SDK. - The metrics on availability data include any custom availability results you may be submitting by calling our TrackAvailability SDK. You can use the alerting on metrics support to alert on custom availability results. + The metrics on availability data include any custom availability results you might be submitting by calling the TrackAvailability SDK. You can use the alerting on metrics support to alert on custom availability results. ## Automate alerts -To automate this process with Azure Resource Manager templates, refer to the [Create a metric alert with Resource Manager template](../alerts/alerts-metric-create-templates.md#template-for-an-availability-test-along-with-a-metric-alert) documentation. +To automate this process with Azure Resource Manager templates, see [Create a metric alert with an Azure Resource Manager template](../alerts/alerts-metric-create-templates.md#template-for-an-availability-test-along-with-a-metric-alert). ## Troubleshooting -Dedicated [troubleshooting article](troubleshoot-availability.md). +See the dedicated [Troubleshooting article](troubleshoot-availability.md). ## Next steps |
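The **Automate alerts** step in the entry above points to an ARM template that creates an availability test together with a metric alert. Deploying such a template from PowerShell can be sketched as follows; both file paths are placeholders, and the template itself would be authored from the linked documentation:

```powershell
# Sketch: deploy a local ARM template that defines an availability test and its metric alert.
# Both file paths are placeholders; author the template from the linked documentation first.
New-AzResourceGroupDeployment `
    -ResourceGroupName "<myResourceGroup>" `
    -TemplateFile ".\availability-test-with-alert.json" `
    -TemplateParameterFile ".\availability-test-with-alert.parameters.json"
```

Rerunning the deployment with updated parameters modifies the same test and alert rule in place, which makes the template a convenient single source of truth.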
azure-monitor | Azure Vm Vmss Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md | Title: Monitor performance on Azure VMs - Azure Application Insights -description: Application performance monitoring for Azure Virtual Machine and Azure Virtual Machine Scale Sets. +description: Application performance monitoring for Azure virtual machines and virtual machine scale sets. Last updated 01/11/2023 ms.devlang: csharp, java, javascript, python-# Application Insights for Azure VMs and Virtual Machine Scale Sets +# Application Insights for Azure VMs and virtual machine scale sets -Enabling monitoring for your ASP.NET and ASP.NET Core IIS-hosted applications running on [Azure virtual machines](https://azure.microsoft.com/services/virtual-machines/) or [Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) is now easier than ever. Get all the benefits of using Application Insights without modifying your code. +Enabling monitoring for your ASP.NET and ASP.NET Core IIS-hosted applications running on [Azure Virtual Machines](https://azure.microsoft.com/services/virtual-machines/) or [Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) is now easier than ever. Get all the benefits of using Application Insights without modifying your code. -This article walks you through enabling Application Insights monitoring using the Application Insights Agent and provides preliminary guidance for automating the process for large-scale deployments. +This article walks you through enabling Application Insights monitoring by using the Application Insights Agent. It also provides preliminary guidance for automating the process for large-scale deployments. ## Enable Application Insights Auto-instrumentation is easy to enable. Advanced configuration isn't required. 
For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers). > [!NOTE]-> Auto-instrumentation is available for ASP.NET, ASP.NET Core IIS-hosted applications and Java. Use an SDK to instrument Node.js and Python applications hosted on an Azure virtual machines and Virtual Machine Scale Sets. +> Auto-instrumentation is available for ASP.NET, ASP.NET Core IIS-hosted applications, and Java. Use an SDK to instrument Node.js and Python applications hosted on Azure virtual machines and virtual machine scale sets. + ### [.NET Framework](#tab/net) -The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more. +The Application Insights Agent autocollects the same dependency signals out of the box as the SDK. To learn more, see [Dependency autocollection](./auto-collect-dependencies.md#net). ### [.NET Core / .NET](#tab/core) -The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more. +The Application Insights Agent autocollects the same dependency signals out of the box as the SDK. To learn more, see [Dependency autocollection](./auto-collect-dependencies.md#net). ### [Java](#tab/Java) -We recommend [Application Insights Java 3.0 agent](./java-in-process-agent.md) for Java. The most popular libraries, frameworks, logs, and dependencies are [auto-collected](./java-in-process-agent.md#autocollected-requests), with a multitude of [other configurations](./java-standalone-config.md) +We recommend the [Application Insights Java 3.0 agent](./java-in-process-agent.md) for Java. 
The most popular libraries, frameworks, logs, and dependencies are [autocollected](./java-in-process-agent.md#autocollected-requests), along with many [other configurations](./java-standalone-config.md). ### [Node.js](#tab/nodejs) To monitor Python apps, use the [SDK](./opencensus-python.md). -Before installing the Application Insights Agent, you'll need a connection string. [Create a new Application Insights Resource](./create-workspace-resource.md) or copy the connection string from an existing application insights resource. +Before you install the Application Insights Agent extension, you'll need a connection string. [Create a new Application Insights resource](./create-workspace-resource.md) or copy the connection string from an existing Application Insights resource. ++### Enable monitoring for virtual machines -### Enable Monitoring for Virtual Machines +You can use the Azure portal or PowerShell to enable monitoring for VMs. -### Method 1 - Azure portal / GUI -1. Go to Azure portal and navigate to your Application Insights resource and copy your connection string to the clipboard. +#### Azure portal +1. In the Azure portal, go to your Application Insights resource. Copy your connection string to the clipboard. - :::image type="content"source="./media/azure-vm-vmss-apps/connect-string.png" alt-text="Screenshot of the connection string." lightbox="./media/azure-vm-vmss-apps/connect-string.png"::: + :::image type="content"source="./media/azure-vm-vmss-apps/connect-string.png" alt-text="Screenshot that shows the connection string." lightbox="./media/azure-vm-vmss-apps/connect-string.png"::: -2. Navigate to your virtual machine, open the "Extensions + applications" pane under the "Settings" section in the left side navigation menu, and select "+ Add" +1. Go to your virtual machine. Under the **Settings** section in the menu on the left side, select **Extensions + applications** > **Add**. 
- :::image type="content"source="./media/azure-vm-vmss-apps/add-extension.png" alt-text="Screenshot of the extensions pane with an add button." lightbox="media/azure-vm-vmss-apps/add-extension.png"::: + :::image type="content"source="./media/azure-vm-vmss-apps/add-extension.png" alt-text="Screenshot that shows the Extensions + applications pane with the Add button." lightbox="media/azure-vm-vmss-apps/add-extension.png"::: -3. Select the "Application Insights Agent" card, and select "Next" +1. Select **Application Insights Agent** > **Next**. - :::image type="content"source="./media/azure-vm-vmss-apps/select-extension.png" alt-text="Screenshot of the install an extension pane with a next button." lightbox="media/azure-vm-vmss-apps/select-extension.png"::: + :::image type="content"source="./media/azure-vm-vmss-apps/select-extension.png" alt-text="Screenshot that shows the Install an Extension pane with the Next button." lightbox="media/azure-vm-vmss-apps/select-extension.png"::: -4. Paste the connection string you copied at step 1 and select "Review + Create" +1. Paste the connection string you copied in step 1 and select **Review + create**. - :::image type="content"source="./media/azure-vm-vmss-apps/install-extension.png" alt-text="Screenshot of the create pane with a review and create button." lightbox="media/azure-vm-vmss-apps/install-extension.png"::: + :::image type="content"source="./media/azure-vm-vmss-apps/install-extension.png" alt-text="Screenshot that shows the Create tab with the Review + create button." lightbox="media/azure-vm-vmss-apps/install-extension.png"::: -#### Method 2 - PowerShell +#### PowerShell > [!NOTE]-> New to PowerShell? Check out the [Get Started Guide](/powershell/azure/get-started-azureps). +> Are you new to PowerShell? Check out the [Get started guide](/powershell/azure/get-started-azureps). 
-Install or update the Application Insights Agent as an extension for Azure virtual machines +Install or update the Application Insights Agent as an extension for Azure virtual machines: ```powershell # define variables to match your environment before running Set-AzVMExtension -ResourceGroupName $ResourceGroup -VMName $VMName -Location $L ``` > [!NOTE]-> For more complicated at-scale deployments you can use a PowerShell loop to install or update the Application Insights Agent extension across multiple VMs. +> For more complicated at-scale deployments, you can use a PowerShell loop to install or update the Application Insights Agent extension across multiple VMs. ++Query the Application Insights Agent extension status for Azure virtual machines: -Query Application Insights Agent extension status for Azure Virtual Machine ```powershell Get-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>" -Name ApplicationMonitoringWindows -Status ``` -Get list of installed extensions for Azure Virtual Machine +Get a list of installed extensions for Azure virtual machines: + ```powershell Get-AzResource -ResourceId "/subscriptions/<mySubscriptionId>/resourceGroups/<myVmResourceGroup>/providers/Microsoft.Compute/virtualMachines/<myVmName>/extensions" ```-Uninstall Application Insights Agent extension from Azure Virtual Machine ++Uninstall the Application Insights Agent extension from Azure virtual machines: + ```powershell Remove-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>" -Name "ApplicationMonitoring" ``` > [!NOTE]-> Verify installation by selecting **Live Metrics Stream** within the Application Insights resource associated with the connection string you used to deploy the Application Insights Agent extension. If you're sending data from multiple Virtual Machines, select the target Azure virtual machines under **Server Name**. It might take up to a minute for data to begin flowing. 
+> Verify installation by selecting **Live Metrics Stream** within the Application Insights resource associated with the connection string you used to deploy the Application Insights Agent extension. If you're sending data from multiple virtual machines, select the target Azure virtual machines under **Server Name**. It might take up to a minute for data to begin flowing. ++## Enable monitoring for virtual machine scale sets -## Enable Monitoring for Virtual Machine Scale Sets +You can use the Azure portal or PowerShell to enable monitoring for virtual machine scale sets. -### Method 1 - Azure portal / GUI -Follow the prior steps for VMs, but navigate to your Virtual Machine Scale Sets instead of your VM. +#### Azure portal +Follow the prior steps for VMs, but go to your virtual machine scale sets instead of your VM. ++#### PowerShell +Install or update Application Insights Agent as an extension for virtual machine scale sets: -### Method 2 - PowerShell -Install or update the Application Insights Agent as an extension for Azure Virtual Machine Scale Set ```powershell-# set resource group, vmss name, and connection string to reflect your enivornment +# Set resource group, vmss name, and connection string to reflect your environment $ResourceGroup = "<myVmResourceGroup>" $VMSSName = "<myVmName>" $ConnectionString = "<myAppInsightsResourceConnectionString>" Update-AzVmss -ResourceGroupName $vmss.ResourceGroupName -Name $vmss # Note: Depending on your update policy, you might need to run Update-AzVmssInstance for each instance ``` -Get list of installed extensions for Azure Virtual Machine Scale Sets +Get a list of installed extensions for virtual machine scale sets: + ```powershell Get-AzResource -ResourceId "/subscriptions/<mySubscriptionId>/resourceGroups/<myResourceGroup>/providers/Microsoft.Compute/virtualMachineScaleSets/<myVmssName>/extensions" ``` -Uninstall application monitoring extension from Azure Virtual Machine Scale Sets +Uninstall the application 
monitoring extension from virtual machine scale sets: + ```powershell # set resource group and vmss name to reflect your environment $vmss = Get-AzVmss -ResourceGroupName "<myResourceGroup>" -VMScaleSetName "<myVmssName>" Update-AzVmss -ResourceGroupName $vmss.ResourceGroupName -Name $vmss.Name -Virtu ## Troubleshooting -Find troubleshooting tips for the Application Insights Monitoring Agent extension for .NET applications running on Azure virtual machines and Virtual Machine Scale Sets. +Find troubleshooting tips for the Application Insights Monitoring Agent extension for .NET applications running on Azure virtual machines and virtual machine scale sets. ++If you're having trouble deploying the extension, review the execution output that's logged to files found in the following directories: -If you are having trouble deploying the extension, then review execution output which is logged to files found in the following directories: ```Windows C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.ApplicationMonitoringWindows\<version>\ ```-If your extension has deployed successfully but you're unable to see telemetry, it could be one of the following issues covered in [Agent Troubleshooting](https://learn.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/status-monitor-v2-troubleshoot#known-issues). ++If your extension deployed successfully but you're unable to see telemetry, it could be one of the following issues covered in [Agent troubleshooting](/troubleshoot/azure/azure-monitor/app-insights/status-monitor-v2-troubleshoot#known-issues): + - Conflicting DLLs in an app's bin directory - Conflict with IIS shared configuration If your extension has deployed successfully but you're unable to see telemetry, ### 2.8.42 -- Updated ApplicationInsights .NET/.NET Core SDK to 2.18.1 - red field.+Updated Application Insights .NET/.NET Core SDK to 2.18.1 - red field. 
### 2.8.41 -- Added ASP.NET Core auto-instrumentation feature.+Added the ASP.NET Core auto-instrumentation feature. ## Next steps-* Learn how to [deploy an application to an Azure Virtual Machine Scale Set](../../virtual-machine-scale-sets/virtual-machine-scale-sets-deploy-app.md). -* [Set up Availability web tests](monitor-web-app-availability.md) to be alerted if your endpoint is down. +* Learn how to [deploy an application to an Azure virtual machine scale set](../../virtual-machine-scale-sets/virtual-machine-scale-sets-deploy-app.md). +* [Set up availability web tests](monitor-web-app-availability.md) to be alerted if your endpoint is down. |
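The VM extension entry above notes that at-scale deployments can use a PowerShell loop. A minimal sketch under stated assumptions: the VM names are placeholders, and the publisher and type values are assumptions taken from the extension name (`ApplicationMonitoringWindows`) used elsewhere in that article, so verify them against the article's single-VM example before use:

```powershell
# Sketch: install or update the Application Insights Agent extension on several VMs.
# VM names and resource group are placeholders; the publisher/type values are
# assumptions taken from the extension name used elsewhere in the article.
$ResourceGroup = "<myVmResourceGroup>"
$VMNames = "vm-web-01", "vm-web-02", "vm-web-03"

foreach ($VMName in $VMNames) {
    # Reuse each VM's own location for the extension deployment.
    $Location = (Get-AzVM -ResourceGroupName $ResourceGroup -Name $VMName).Location
    Set-AzVMExtension -ResourceGroupName $ResourceGroup -VMName $VMName -Location $Location `
        -Name "ApplicationMonitoringWindows" `
        -Publisher "Microsoft.Azure.Diagnostics" `
        -ExtensionType "ApplicationMonitoringWindows" `
        -TypeHandlerVersion "2.8"
}
```

The same loop shape works for scale-set instances by swapping in the `Update-AzVmss` call shown in the article.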
azure-monitor | Codeless Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md | Title: Monitor your apps without code changes - auto-instrumentation for Azure Monitor Application Insights | Microsoft Docs -description: Overview of auto-instrumentation for Azure Monitor Application Insights - codeless application performance management + Title: Auto-instrumentation for Azure Monitor Application Insights +description: Overview of auto-instrumentation for Azure Monitor Application Insights codeless application performance management. Last updated 01/06/2023 -Auto-instrumentation quickly and easily enables [Application Insights](app-insights-overview.md) to make [telemetry](data-model.md) (metrics, requests and dependencies) available in your [Application Insights resource](create-workspace-resource.md). +Auto-instrumentation quickly and easily enables [Application Insights](app-insights-overview.md) to make [telemetry](data-model.md) like metrics, requests, and dependencies available in your [Application Insights resource](create-workspace-resource.md). > [!div class="checklist"]-> - No code changes required -> - [SDK update](sdk-support-guidance.md) overhead is eliminated -> - Recommended when available +> - No code changes are required. +> - [SDK update](sdk-support-guidance.md) overhead is eliminated. +> - Recommended when available. ## Supported environments, languages, and resource providers -The table below displays the current state of auto-instrumentation availability. +The following table shows the current state of auto-instrumentation availability. -Links are provided to additional information for each supported scenario. +Links are provided to more information for each supported scenario. 
-|Environment/Resource Provider | .NET Framework | .NET Core / .NET | Java | Node.js | Python | +|Environment/Resource provider | .NET Framework | .NET Core / .NET | Java | Node.js | Python | |-|||-|-|--| |Azure App Service on Windows - Publish as Code | [ :white_check_mark: :link: ](azure-web-apps-net.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-net-core.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md) <sup>[1](#OnBD)</sup> | :x: | |Azure App Service on Windows - Publish as Docker | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | :x: | :x: | Links are provided to additional information for each supported scenario. **Footnotes** - <a name="OnBD">1</a>: Application Insights is on by default and enabled automatically.-- <a name="Preview">2</a>: This feature is in public preview. [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)+- <a name="Preview">2</a>: This feature is in public preview. See [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). - <a name="Agent">3</a>: An agent must be deployed and configured. -> [!NOTE] +> [!NOTE] > Auto-instrumentation was known as "codeless attach" before October 2021. 
## Next steps -* [Application Insights Overview](app-insights-overview.md) -* [Application Insights Overview dashboard](overview-dashboard.md) +* [Application Insights overview](app-insights-overview.md) +* [Application Insights overview dashboard](overview-dashboard.md) * [Application map](app-map.md) |
azure-monitor | Export Telemetry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md | Before you set up continuous export, there are some alternatives you might want * The **Export** button at the top of a metrics or search tab lets you transfer tables and charts to an Excel spreadsheet. * [Log Analytics](../logs/log-query-overview.md) provides a powerful query language for telemetry. It can also export results. * If you're looking to [explore your data in Power BI](../logs/log-powerbi.md), you can do that without using continuous export if you've [migrated to a workspace-based resource](convert-classic-resource.md).-* The [Data Access REST API](https://dev.applicationinsights.io/) lets you access your telemetry programmatically. +* The [Data Access REST API](/rest/api/application-insights/) lets you access your telemetry programmatically. * You can also access setup for [continuous export via PowerShell](/powershell/module/az.applicationinsights/new-azapplicationinsightscontinuousexport). After continuous export copies your data to storage, where it can stay as long as you like, it's still available in Application Insights for the usual [retention period](./data-retention-privacy.md). |
azure-monitor | Javascript React Plugin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-react-plugin.md | appInsights.loadAppInsights(); | Name | Default | Description | |||-|-| history | null | React router history. For more information, see the [React router package documentation](https://reactrouter.com/web/api/history). To learn how to access the history object outside of components, see the [React router FAQ](https://github.com/ReactTraining/react-router/blob/master/FAQ.md#how-do-i-access-the-history-object-outside-of-components). | +| history | null | React router history. For more information, see the [React router package documentation](https://reactrouter.com/en/main). To learn how to access the history object outside of components, see the [React router FAQ](https://github.com/ReactTraining/react-router/blob/master/FAQ.md#how-do-i-access-the-history-object-outside-of-components). | ### React components usage tracking |
azure-monitor | Autoscale Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-diagnostics.md | + + Title: Autoscale diagnostics +description: Configure diagnostics in autoscale. +++++ Last updated : 06/22/2022+++# Customer intent: As a devops admin, I want to collect and analyze autoscale metrics and logs. ++# Diagnostic settings in Autoscale ++Autoscale has two log categories and a set of metrics that can be enabled via the **Diagnostics settings** tab on the autoscale setting page. ++++The two categories are: +* [Autoscale Evaluations](https://learn.microsoft.com/azure/azure-monitor/reference/tables/autoscaleevaluationslog) containing log data relating to rule evaluation. +* [Autoscale Scale Actions](https://learn.microsoft.com/azure/azure-monitor/reference/tables/autoscalescaleactionslog) containing log data relating to each scale event. ++Information about Autoscale Metrics can be found in the [Supported metrics](../essentials/metrics-supported.md#microsoftinsightsautoscalesettings) reference. ++Both the logs and metrics can be sent to various destinations, including: +* Log Analytics workspaces +* Storage accounts +* Event hubs +* Partner solutions ++For more information on diagnostics, see [Diagnostic settings in Azure Monitor](../essentials/diagnostic-settings.md?tabs=portal). ++## Run history ++View the history of your autoscale activity in the run history tab. The run history tab includes a chart of resource instance counts over time and the resource activity log entries for autoscale. +++## Resource log schemas ++The following are the general formats for autoscale resource logs with example data included. Not all examples below are properly formed JSON because they may include a list of valid values for a given field. ++Use these logs to troubleshoot issues in autoscale. For more information, see [Troubleshooting autoscale problems](autoscale-troubleshoot.md).
++## Autoscale Evaluations Log +The following schemas appear in the autoscale evaluations log. ++### Profile evaluation ++Logged when autoscale first looks at an autoscale profile. ++```JSON +{ + "time": "2018-09-10 18:12:00.6132593", + "resourceId": "/SUBSCRIPTIONS/BA13C41D-C957-4774-8A37-092D62ACFC85/RESOURCEGROUPS/AUTOSCALETRACKING12042017/PROVIDERS/MICROSOFT.INSIGHTS/AUTOSCALESETTINGS/DEFAULTSETTING", + "operationName": ["FixedDateProfileEvaluation", "RecurrentProfileEvaluation", "DefaultProfileEvaluation"], + "category": "AutoscaleEvaluations", + "correlationId": "e8f67045-f381-445d-bc2d-eeff81ec0d77", + "property": { + "targetResourceId": "/subscriptions/d45c994a-809b-4cb3-a952-e75f8c488d23/resourceGroups/RingAhoy/providers/Microsoft.Web/serverfarms/ringahoy", + "profile": "defaultProfile", + "profileSelected": [true, false] + } +} +``` ++### Profile cooldown evaluation ++Logged when autoscale evaluates whether it should skip scaling because of a cooldown period. ++```JSON +{ + "time": "2018-09-10 18:12:00.6132593", + "resourceId": "/SUBSCRIPTIONS/BA13C41D-C957-4774-8A37-092D62ACFC85/RESOURCEGROUPS/AUTOSCALETRACKING12042017/PROVIDERS/MICROSOFT.INSIGHTS/AUTOSCALESETTINGS/DEFAULTSETTING", + "operationName": "ScaleRuleCooldownEvaluation", + "category": "AutoscaleEvaluations", + "correlationId": "e8f67045-f381-445d-bc2d-eeff81ec0d77", + "property": { + "targetResourceId": "/subscriptions/d45c994a-809b-4cb3-a952-e75f8c488d23/resourceGroups/RingAhoy/providers/Microsoft.Web/serverfarms/ringahoy", + "selectedProfile": "defaultProfile", + "scaleDirection": ["Increase", "Decrease"], + "lastScaleActionTime": "2018-09-10 18:08:00.6132593", + "cooldown": "00:30:00", + "evaluationTime": "2018-09-10 18:11:00.6132593", + "skipRuleEvaluationForCooldown": true + } +} +``` ++### Rule evaluation ++Logged when autoscale first starts evaluating a particular scale rule. 
++```JSON +{ + "time": "2018-09-10 18:12:00.6132593", + "resourceId": "/SUBSCRIPTIONS/BA13C41D-C957-4774-8A37-092D62ACFC85/RESOURCEGROUPS/AUTOSCALETRACKING12042017/PROVIDERS/MICROSOFT.INSIGHTS/AUTOSCALESETTINGS/DEFAULTSETTING", + "operationName": "ScaleRuleEvaluation", + "category": "AutoscaleEvaluations", + "correlationId": "e8f67045-f381-445d-bc2d-eeff81ec0d77", + "property": { + "targetResourceId": "/subscriptions/d45c994a-809b-4cb3-a952-e75f8c488d23/resourceGroups/RingAhoy/providers/Microsoft.Web/serverfarms/ringahoy", + "metricName": "Percentage CPU", + "metricNamespace": "", + "timeGrain": "00:01:00", + "timeGrainStatistic": "Average", + "timeWindow": "00:10:00", + "timeAggregationType": "Average", + "operator": "GreaterThan", + "threshold": 70, + "observedValue": 25, + "estimateScaleResult": ["Triggered", "NotTriggered", "Unknown"] + } +} +``` ++### Metric evaluation ++Logged when autoscale evaluates the metric being used to trigger a scale action. ++```JSON +{ + "time": "2018-09-10 18:12:00.6132593", + "resourceId": "/SUBSCRIPTIONS/BA13C41D-C957-4774-8A37-092D62ACFC85/RESOURCEGROUPS/AUTOSCALETRACKING12042017/PROVIDERS/MICROSOFT.INSIGHTS/AUTOSCALESETTINGS/DEFAULTSETTING", + "operationName": "MetricEvaluation", + "category": "AutoscaleEvaluations", + "correlationId": "e8f67045-f381-445d-bc2d-eeff81ec0d77", + "property": { + "targetResourceId": "/subscriptions/d45c994a-809b-4cb3-a952-e75f8c488d23/resourceGroups/RingAhoy/providers/Microsoft.Web/serverfarms/ringahoy", + "metricName": "Percentage CPU", + "metricNamespace": "", + "timeGrain": "00:01:00", + "timeGrainStatistic": "Average", + "startTime": "2018-09-10 18:00:00.43833793", + "endTime": "2018-09-10 18:10:00.43833793", + "data": [0.33333333333333331,0.16666666666666666,1.0,0.33333333333333331,2.0,0.16666666666666666,9.5] + } +} +``` ++### Instance count evaluation ++Logged when autoscale evaluates the number of instances already running in preparation for deciding whether it should start more, shut down some, 
or do nothing. ++```JSON +{ + "time": "2018-09-10 18:12:00.6132593", + "resourceId": "/SUBSCRIPTIONS/BA13C41D-C957-4774-8A37-092D62ACFC85/RESOURCEGROUPS/AUTOSCALETRACKING12042017/PROVIDERS/MICROSOFT.INSIGHTS/AUTOSCALESETTINGS/DEFAULTSETTING", + "operationName": "InstanceCountEvaluation", + "category": "AutoscaleEvaluations", + "correlationId": "e8f67045-f381-445d-bc2d-eeff81ec0d77", + "property": { + "targetResourceId": "/subscriptions/d45c994a-809b-4cb3-a952-e75f8c488d23/resourceGroups/RingAhoy/providers/Microsoft.Web/serverfarms/ringahoy", + "currentInstanceCount": 20, + "minimumInstanceCount": 15, + "maximumInstanceCount": 30, + "defaultInstanceCount": 20 + } +} +``` ++### Scale action evaluation ++Logged when autoscale starts evaluating whether a scale action should take place. ++```JSON +{ + "time": "2018-09-10 18:12:00.6132593", + "resourceId": "/SUBSCRIPTIONS/BA13C41D-C957-4774-8A37-092D62ACFC85/RESOURCEGROUPS/AUTOSCALETRACKING12042017/PROVIDERS/MICROSOFT.INSIGHTS/AUTOSCALESETTINGS/DEFAULTSETTING", + "operationName": "ScaleActionOperationEvaluation", + "category": "AutoscaleEvaluations", + "correlationId": "e8f67045-f381-445d-bc2d-eeff81ec0d77", + "property": { + "targetResourceId": "/subscriptions/d45c994a-809b-4cb3-a952-e75f8c488d23/resourceGroups/RingAhoy/providers/Microsoft.Web/serverfarms/ringahoy", + "lastScaleActionOperationId": "378ejr-7yye-892d-17dd-92ndijfe1738", + "lastScaleActionOperationStatus": ["InProgress", "Timeout"], + "skipCurrentAutoscaleEvaluation": [true, false] + } +} +``` ++### Instance update evaluation ++Logged when autoscale updates the number of compute instances running, either up or down. 
++```JSON +{ + "time": "2018-09-10 18:12:00.6132593", + "resourceId": "/SUBSCRIPTIONS/BA13C41D-C957-4774-8A37-092D62ACFC85/RESOURCEGROUPS/AUTOSCALETRACKING12042017/PROVIDERS/MICROSOFT.INSIGHTS/AUTOSCALESETTINGS/DEFAULTSETTING", + "operationName": "InstanceUpdateEvaluation", + "category": "AutoscaleEvaluations", + "correlationId": "e8f67045-f381-445d-bc2d-eeff81ec0d77", + "property": { + "targetResourceId": "/subscriptions/d45c994a-809b-4cb3-a952-e75f8c488d23/resourceGroups/RingAhoy/providers/Microsoft.Web/serverfarms/ringahoy", + "currentInstanceCount": 20, + "newInstanceCount": 21, + "shouldUpdateInstance": [true, false], + "reason": ["Scale down action triggered", "Scale up to default instance count", ...] + } +} +``` ++## Autoscale Scale Actions Log ++The following schemas appear in the autoscale scale actions log. +++### Scale action ++Logged when autoscale initiates a scale action, either up or down. +```JSON +{ + "time": "2018-09-10 18:12:00.6132593", + "resourceId": "/SUBSCRIPTIONS/BA13C41D-C957-4774-8A37-092D62ACFC85/RESOURCEGROUPS/AUTOSCALETRACKING12042017/PROVIDERS/MICROSOFT.INSIGHTS/AUTOSCALESETTINGS/DEFAULTSETTING", + "operationName": "InstanceScaleAction", + "category": "AutoscaleScaleActions", + "resultType": ["Succeeded", "InProgress", "Failed"], + "resultDescription": ["Create async operation job failed", ...], + "correlationId": "e8f67045-f381-445d-bc2d-eeff81ec0d77", + "property": { + "targetResourceId": "/subscriptions/d45c994a-809b-4cb3-a952-e75f8c488d23/resourceGroups/RingAhoy/providers/Microsoft.Web/serverfarms/ringahoy", + "currentInstanceCount": 20, + "newInstanceCount": 21, + "scaleDirection": ["Increase", "Decrease"], + ["createdAsyncScaleActionJob": [true, false],] + ["createdAsyncScaleActionJobId": "378ejr-7yye-892d-17dd-92ndijfe1738",] + } +} +``` ++### Scale action tracking ++Logged at different intervals of an instance scale action. 
++```JSON +{ + "time": "2018-09-10 18:12:00.6132593", + "resourceId": "/SUBSCRIPTIONS/BA13C41D-C957-4774-8A37-092D62ACFC85/RESOURCEGROUPS/AUTOSCALETRACKING12042017/PROVIDERS/MICROSOFT.INSIGHTS/AUTOSCALESETTINGS/DEFAULTSETTING", + "operationName": "InstanceScaleAction", + "category": "AutoscaleScaleActions", + "correlationId": "e8f67045-f381-445d-bc2d-eeff81ec0d77", + "property": { + "targetResourceId": "/subscriptions/d45c994a-809b-4cb3-a952-e75f8c488d23/resourceGroups/RingAhoy/providers/Microsoft.Web/serverfarms/ringahoy", + "scaleActionOperationId": "378ejr-7yye-892d-17dd-92ndijfe1738", + "scaleActionOperationStatus": ["InProgress", "Timeout", "Canceled", ...], + "scaleActionMessage": ["Scale action is inprogress", ...] + } +} +``` ++## Activity Logs +The following events are logged to the Activity log with a `CategoryValue` of `Autoscale`. ++* Autoscale scale up initiated +* Autoscale scale up completed +* Autoscale scale down initiated +* Autoscale scale down completed +* Predictive Autoscale scale up initiated +* Predictive Autoscale scale up completed +* Metric Failure +* Metric Recovery +* Predictive Metric Failure +* Flapping ++An extract of each log event, showing the relevant parts of the `Properties` element, is shown below: ++### Autoscale action ++Logged when autoscale attempts to scale in or out. ++```JSON +{ + "eventCategory": "Autoscale", + "eventName": "AutoscaleAction", + ... 
+ "eventProperties": "{ + "Description": "The autoscale engine attempting to scale resource '/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourcegroups/ed-rg-001/providers/Microsoft.Web/serverFarms/ScaleableAppServicePlan' from 2 instances count to 1 instancescount.", + "ResourceName": "/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourcegroups/ed-rg-001/providers/Microsoft.Web/serverFarms/ScaleableAppServicePlan", + "OldInstancesCount": 2, + "NewInstancesCount": 1, + "ActiveAutoscaleProfile": { + "Name": "Default scale condition", + "Capacity": { + "Minimum": "1", + "Maximum": "5", + "Default": "1" + }, + "Rules": [ + { + "MetricTrigger": { + "Name": "CpuPercentage", + "Namespace": "microsoft.web/serverfarms", + "Resource": "/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/ed-rg-001/providers/Microsoft.Web/serverFarms/ScaleableAppServicePlan", + "ResourceLocation": "West Central US", + "TimeGrain": "PT1M", + "Statistic": "Average", + "TimeWindow": "PT2M", + "TimeAggregation": "Average", + "Operator": "GreaterThan", + "Threshold": 40.0, + "Source": "/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/ed-rg-001/providers/Microsoft.Web/serverFarms/ScaleableAppServicePlan", + "MetricType": "MDM", + "Dimensions": [], + "DividePerInstance": false + }, + "ScaleAction": { + "Direction": "Increase", + "Type": "ChangeCount", + "Value": "1", + "Cooldown": "PT3M" + } + }, + { + "MetricTrigger": { + "Name": "CpuPercentage", + "Namespace": "microsoft.web/serverfarms", + "Resource": "/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/ed-rg-001/providers/Microsoft.Web/serverFarms/ScaleableAppServicePlan", + "ResourceLocation": "West Central US", + "TimeGrain": "PT1M", + "Statistic": "Average", + "TimeWindow": "PT5M", + "TimeAggregation": "Average", + "Operator": "LessThanOrEqual", + "Threshold": 30.0, + "Source": 
"/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/ed-rg-001/providers/Microsoft.Web/serverFarms/ScaleableAppServicePlan", + "MetricType": "MDM", + "Dimensions": [], + "DividePerInstance": false + }, + "ScaleAction": { + "Direction": "Decrease", + "Type": "ExactCount", + "Value": "1", + "Cooldown": "PT5M" + } + } + ] + }, + "LastScaleActionTime": "Thu, 26 Jan 2023 12:57:14 GMT" + }", + ... + "activityStatusValue": "Succeeded" +} ++``` ++### Get Operation Status Result ++Logged following a scale event. ++```JSON ++"Properties":{ + "eventCategory": "Autoscale", + "eventName": "GetOperationStatusResult", + ... + "eventProperties": "{"OldInstancesCount":3,"NewInstancesCount":2}", + ... + "activityStatusValue": "Succeeded" +} ++``` ++### Metric failure ++Logged when autoscale can't determine the value of the metric used in the scale rule. ++```JSON +"Properties":{ + "eventCategory": "Autoscale", + "eventName": "MetricFailure", + ... + "eventProperties": "{ + "Notes":"To ensure service availability, Autoscale will scale out the resource to the default capacity if it is greater than the current capacity}", + ... + "activityStatusValue": "Failed" +} +``` +### Metric recovery ++Logged when autoscale can once again determine the value of the metric used in the scale rule after a `MetricFailure` event ++```JSON +"Properties":{ + "eventCategory": "Autoscale", + "eventName": "MetricRecovery", + ... + "eventProperties": "{}", + ... + "activityStatusValue": "Succeeded" +} +``` +### Predictive Metric Failure ++Logged when autoscale can't calculate predicted scale events due to the metric being unavailable. +```JSON +"Properties": { + "eventCategory": "Autoscale", + "eventName": "PredictiveMetricFailure", + ... + "eventProperties": "{ + "Notes": "To ensure service availability, Autoscale will scale out the resource to the default capacity if it is greater than the current capacity" + }", + ... 
+ "activityStatusValue": "Failed" +} +``` +### Flapping Occurred ++Logged when autoscale detects flapping could occur, and scales differently to avoid it. ++```JSON +"Properties":{ + "eventCategory": "Autoscale", + "eventName": "FlappingOccurred", + ... + "eventProperties": + "{"Description":"Scale down will occur with updated instance count to avoid flapping. + Resource: '/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourcegroups/rg-001/providers/Microsoft.Web/serverFarms/ScaleableAppServicePlan'. + Current instance count: '6', + Intended new instance count: '1'. + Actual new instance count: '4'", + "ResourceName":"/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourcegroups/rg-001/providers/Microsoft.Web/serverFarms/ScaleableAppServicePlan", + "OldInstancesCount":6, + "NewInstancesCount":4, + "ActiveAutoscaleProfile":{"Name":"Auto created scale condition", + "Capacity":{"Minimum":"1","Maximum":"30","Default":"1"}, + "Rules":[{"MetricTrigger":{"Name":"Requests","Namespace":"microsoft.web/sites","Resource":"/subscriptions/ d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/rg-001/providers/Microsoft.Web/sites/ScaleableWebApp1", "ResourceLocation":"West Central US","TimeGrain":"PT1M","Statistic":"Average","TimeWindow":"PT1M","TimeAggregation":"Maximum", "Operator":"GreaterThanOrEqual","Threshold":3.0,"Source":"/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/ rg-001/providers/Microsoft.Web/sites/ScaleableWebApp1","MetricType":"MDM","Dimensions":[],"DividePerInstance":true}, "ScaleAction":{"Direction":"Increase","Type":"ChangeCount","Value":"10","Cooldown":"PT1M"}},{"MetricTrigger":{"Name":"Requests", "Namespace":"microsoft.web/sites","Resource":"/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/rg-001/ providers/Microsoft.Web/sites/ScaleableWebApp1","ResourceLocation":"West Central US","TimeGrain":"PT1M","Statistic":"Max", 
"TimeWindow":"PT1M","TimeAggregation":"Maximum","Operator":"LessThan","Threshold":3.0,"Source":"/subscriptions/ d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/rg-001/providers/Microsoft.Web/sites/ScaleableWebApp1","MetricType":"MDM", "Dimensions":[],"DividePerInstance":true},"ScaleAction":{"Direction":"Decrease","Type":"ChangeCount","Value":"5", "Cooldown":"PT1M"}}]}}", + ... + "activityStatusValue": "Succeeded" +} +``` ++### Flapping ++Logged when autoscale detects flapping could occur, and defers scaling in to avoid it. ++```JSON +"Properties": { + "eventCategory": "Autoscale", + "eventName": "Flapping", + "Description": "{"Cannot scale down due to flapping observed. Resource: '/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourcegroups/rg-001/providers/Microsoft.Compute/virtualMachineScaleSets/mac2'. Current instance count: '2', Intended new instance count '1'", + "ResourceName": "/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourcegroups/rg-001/providers/Microsoft.Compute/virtualMachineScaleSets/mac2", + "OldInstancesCount": "2", + "NewInstancesCount": "2", + "ActiveAutoscaleProfile": "ActiveAutoscaleProfile": { + "Name": "Auto created default scale condition", + "Capacity": { + "Minimum": "1", + "Maximum": "2", + "Default": "1" + }, + "Rules": [ + { + "MetricTrigger": { + "Name": "StorageSuccesses", + "Namespace": "monitoringbackgroundjob", + "Resource": "/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/rg-001/providers/microsoft.monitor/accounts/MACAzureInsightsPROD", + "ResourceLocation": "EastUS2", + "TimeGrain": "PT1M", + "Statistic": "Average", + "TimeWindow": "PT10M", + "TimeAggregation": "Average", + "Operator": "LessThan", + "Threshold": 600.0, + "Source": "/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/rg-001/providers/microsoft.monitor/accounts/MACAzureInsightsPROD", + "MetricType": "MDM", + "Dimensions": [], + "DividePerInstance": false + }, + "ScaleAction": { + "Direction": 
"Decrease", + "Type": "ChangeCount", + "Value": "1", + "Cooldown": "PT5M" + } + }, + { + "MetricTrigger": { + "Name": "TimeToStartupInMs", + "Namespace": "armrpclient", + "Resource": "/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/rg-123/providers/microsoft.monitor/accounts/MACMetricsRP", + "ResourceLocation": "eastus2", + "TimeGrain": "PT1M", + "Statistic": "Percentile99th", + "TimeWindow": "PT10M", + "TimeAggregation": "Average", + "Operator": "GreaterThan", + "Threshold": 70.0, + "Source": "/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/rg-123/providers/microsoft.monitor/accounts/MACMetricsRP", + "MetricType": "MDM", + "Dimensions": [], + "DividePerInstance": false + }, + "ScaleAction": { + "Direction": "Increase", + "Type": "ChangeCount", + "Value": "1", + "Cooldown": "PT5M" + } + } + ] + }" +}... +``` ++## Next steps ++* [Troubleshooting Autoscale](./autoscale-troubleshoot.md) +* [Autoscale Flapping](./autoscale-flapping.md) +* [Autoscale settings](./autoscale-understanding-settings.md) |
azure-monitor | Autoscale Resource Log Schema | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-resource-log-schema.md | - Title: Azure autoscale log events schema -description: Format of logs for monitoring and troubleshooting autoscale actions --- Previously updated : 11/14/2019-----# Azure Monitor autoscale actions resource log schema --Following are the general formats for autoscale resource logs with example data included. Not all examples below are properly formed JSON because they may include multiple values that could be valid for a given field. --Use events of this type to troubleshoot problems you may be having with autoscale. For more information, see [Troubleshooting autoscale problems](autoscale-troubleshoot.md). ---## Profile evaluation --Recorded when autoscale first looks at an autoscale profile --```json -{ - "time": "2018-09-10 18:12:00.6132593", - "resourceId": "/SUBSCRIPTIONS/BA13C41D-C957-4774-8A37-092D62ACFC85/RESOURCEGROUPS/AUTOSCALETRACKING12042017/PROVIDERS/MICROSOFT.INSIGHTS/AUTOSCALESETTINGS/DEFAULTSETTING", - "operationName": ["FixedDateProfileEvaluation", "RecurrentProfileEvaluation", "DefaultProfileEvaluation"], - "category": "AutoscaleEvaluations", - "correlationId": "e8f67045-f381-445d-bc2d-eeff81ec0d77", - "property": { - "targetResourceId": "/subscriptions/d45c994a-809b-4cb3-a952-e75f8c488d23/resourceGroups/RingAhoy/providers/Microsoft.Web/serverfarms/ringahoy", - "profile": "defaultProfile", - "profileSelected": [true, false] - } -} -``` --## Profile cooldown evaluation --Recorded when autoscale evaluates if it should not do a scale because of a cool down period. 
--```json -{ - "time": "2018-09-10 18:12:00.6132593", - "resourceId": "/SUBSCRIPTIONS/BA13C41D-C957-4774-8A37-092D62ACFC85/RESOURCEGROUPS/AUTOSCALETRACKING12042017/PROVIDERS/MICROSOFT.INSIGHTS/AUTOSCALESETTINGS/DEFAULTSETTING", - "operationName": "ScaleRuleCooldownEvaluation", - "category": "AutoscaleEvaluations", - "correlationId": "e8f67045-f381-445d-bc2d-eeff81ec0d77", - "property": { - "targetResourceId": "/subscriptions/d45c994a-809b-4cb3-a952-e75f8c488d23/resourceGroups/RingAhoy/providers/Microsoft.Web/serverfarms/ringahoy", - "selectedProfile": "defaultProfile", - "scaleDirection": ["Increase", "Decrease"] - "lastScaleActionTime": "2018-09-10 18:08:00.6132593", - "cooldown": "00:30:00", - "evaluationTime": "2018-09-10 18:11:00.6132593", - "skipRuleEvaluationForCooldown": true - } -} -``` --## Rule evaluation --Recorded when autoscale first starts evaluating a particular scale rule. --```json -{ - "time": "2018-09-10 18:12:00.6132593", - "resourceId": "/SUBSCRIPTIONS/BA13C41D-C957-4774-8A37-092D62ACFC85/RESOURCEGROUPS/AUTOSCALETRACKING12042017/PROVIDERS/MICROSOFT.INSIGHTS/AUTOSCALESETTINGS/DEFAULTSETTING", - "operationName": "ScaleRuleEvaluation", - "category": "AutoscaleEvaluations", - "correlationId": "e8f67045-f381-445d-bc2d-eeff81ec0d77", - "property": { - "targetResourceId": "/subscriptions/d45c994a-809b-4cb3-a952-e75f8c488d23/resourceGroups/RingAhoy/providers/Microsoft.Web/serverfarms/ringahoy", - "metricName": "Percentage CPU", - "metricNamespace": "", - "timeGrain": "00:01:00", - "timeGrainStatistic": "Average", - "timeWindow": "00:10:00", - "timeAggregationType": "Average", - "operator": "GreaterThan", - "threshold": 70, - "observedValue": 25, - "estimateScaleResult": ["Triggered", "NotTriggered", "Unknown"] - } -} -``` --## Metric evaluation --Recorded when autoscale evaluated the metric being used to trigger a scale action. 
--```json -{ - "time": "2018-09-10 18:12:00.6132593", - "resourceId": "/SUBSCRIPTIONS/BA13C41D-C957-4774-8A37-092D62ACFC85/RESOURCEGROUPS/AUTOSCALETRACKING12042017/PROVIDERS/MICROSOFT.INSIGHTS/AUTOSCALESETTINGS/DEFAULTSETTING", - "operationName": "MetricEvaluation", - "category": "AutoscaleEvaluations", - "correlationId": "e8f67045-f381-445d-bc2d-eeff81ec0d77", - "property": { - "targetResourceId": "/subscriptions/d45c994a-809b-4cb3-a952-e75f8c488d23/resourceGroups/RingAhoy/providers/Microsoft.Web/serverfarms/ringahoy", - "metricName": "Percentage CPU", - "metricNamespace": "", - "timeGrain": "00:01:00", - "timeGrainStatistic": "Average", - "startTime": "2018-09-10 18:00:00.43833793", - "endTime": "2018-09-10 18:10:00.43833793", - "data": [0.33333333333333331,0.16666666666666666,1.0,0.33333333333333331,2.0,0.16666666666666666,9.5] - } -} -``` --## Instance count evaluation --Recorded when autoscale evaluates the number of instances already running in preparation for deciding if it should start more, shut down some, or do nothing. --```json -{ - "time": "2018-09-10 18:12:00.6132593", - "resourceId": "/SUBSCRIPTIONS/BA13C41D-C957-4774-8A37-092D62ACFC85/RESOURCEGROUPS/AUTOSCALETRACKING12042017/PROVIDERS/MICROSOFT.INSIGHTS/AUTOSCALESETTINGS/DEFAULTSETTING", - "operationName": "InstanceCountEvaluation", - "category": "AutoscaleEvaluations", - "correlationId": "e8f67045-f381-445d-bc2d-eeff81ec0d77", - "property": { - "targetResourceId": "/subscriptions/d45c994a-809b-4cb3-a952-e75f8c488d23/resourceGroups/RingAhoy/providers/Microsoft.Web/serverfarms/ringahoy", - "currentInstanceCount": 20, - "minimumInstanceCount": 15, - "maximumInstanceCount": 30, - "defaultInstanceCount": 20 - } -} -``` --## Scale action evaluation --Recorded when autoscale starts evaluation if a scale action should take place. 
--```json -{ - "time": "2018-09-10 18:12:00.6132593", - "resourceId": "/SUBSCRIPTIONS/BA13C41D-C957-4774-8A37-092D62ACFC85/RESOURCEGROUPS/AUTOSCALETRACKING12042017/PROVIDERS/MICROSOFT.INSIGHTS/AUTOSCALESETTINGS/DEFAULTSETTING", - "operationName": "ScaleActionOperationEvaluation", - "category": "AutoscaleEvaluations", - "correlationId": "e8f67045-f381-445d-bc2d-eeff81ec0d77", - "property": { - "targetResourceId": "/subscriptions/d45c994a-809b-4cb3-a952-e75f8c488d23/resourceGroups/RingAhoy/providers/Microsoft.Web/serverfarms/ringahoy", - "lastScaleActionOperationId": "378ejr-7yye-892d-17dd-92ndijfe1738", - "lastScaleActionOperationStatus": ["InProgress", "Timeout"] - "skipCurrentAutoscaleEvaluation": [true, false] - } -} -``` --## Instance update evaluation --Recorded when autoscale updates the number of compute instances running, either up or down. --```json -{ - "time": "2018-09-10 18:12:00.6132593", - "resourceId": "/SUBSCRIPTIONS/BA13C41D-C957-4774-8A37-092D62ACFC85/RESOURCEGROUPS/AUTOSCALETRACKING12042017/PROVIDERS/MICROSOFT.INSIGHTS/AUTOSCALESETTINGS/DEFAULTSETTING", - "operationName": "InstanceUpdateEvaluation", - "category": "AutoscaleEvaluations", - "correlationId": "e8f67045-f381-445d-bc2d-eeff81ec0d77", - "property": { - "targetResourceId": "/subscriptions/d45c994a-809b-4cb3-a952-e75f8c488d23/resourceGroups/RingAhoy/providers/Microsoft.Web/serverfarms/ringahoy", - "currentInstanceCount": 20, - "newInstanceCount": 21, - "shouldUpdateInstance": [true, false], - "reason": ["Scale down action triggered", "Scale up to default instance count", ...] - } -} -``` --## Scale action --Recorded when autoscale initiates a scale action, either up or down. 
-```json -{ - "time": "2018-09-10 18:12:00.6132593", - "resourceId": "/SUBSCRIPTIONS/BA13C41D-C957-4774-8A37-092D62ACFC85/RESOURCEGROUPS/AUTOSCALETRACKING12042017/PROVIDERS/MICROSOFT.INSIGHTS/AUTOSCALESETTINGS/DEFAULTSETTING", - "operationName": "InstanceScaleAction", - "category": "AutoscaleScaleActions", - "resultType": ["Succeeded", "InProgress", "Failed"], - "resultDescription": ["Create async operation job failed", ...] - "correlationId": "e8f67045-f381-445d-bc2d-eeff81ec0d77", - "property": { - "targetResourceId": "/subscriptions/d45c994a-809b-4cb3-a952-e75f8c488d23/resourceGroups/RingAhoy/providers/Microsoft.Web/serverfarms/ringahoy", - "currentInstanceCount": 20, - "newInstanceCount": 21, - "scaleDirection": ["Increase", "Decrease"], - ["createdAsyncScaleActionJob": [true, false],] - ["createdAsyncScaleActionJobId": "378ejr-7yye-892d-17dd-92ndijfe1738",] - } -} -``` --## Scale action tracking --Recorded at different intervals of an instance scale action. --```json -{ - "time": "2018-09-10 18:12:00.6132593", - "resourceId": "/SUBSCRIPTIONS/BA13C41D-C957-4774-8A37-092D62ACFC85/RESOURCEGROUPS/AUTOSCALETRACKING12042017/PROVIDERS/MICROSOFT.INSIGHTS/AUTOSCALESETTINGS/DEFAULTSETTING", - "operationName": "InstanceScaleAction", - "category": "AutoscaleScaleActions", - "correlationId": "e8f67045-f381-445d-bc2d-eeff81ec0d77", - "property": { - "targetResourceId": "/subscriptions/d45c994a-809b-4cb3-a952-e75f8c488d23/resourceGroups/RingAhoy/providers/Microsoft.Web/serverfarms/ringahoy", - "scaleActionOperationId": "378ejr-7yye-892d-17dd-92ndijfe1738", - "scaleActionOperationStatus": ["InProgress", "Timeout", "Canceled", ...], - "scaleActionMessage": ["Scale action is inprogress", ...] - } -} -``` --## Next steps -Learn about [autoscale](autoscale-overview.md) |
azure-monitor | Best Practices Multicloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-multicloud.md | + + Title: Multicloud monitoring with Azure Monitor +description: Guidance and recommendations for using Azure Monitor to monitor resources and applications in other clouds. + Last updated : 01/31/2023+++++# Multicloud monitoring with Azure Monitor +In addition to monitoring services and applications in Azure, Azure Monitor can provide complete monitoring for your resources and applications running in other clouds, including Amazon Web Services (AWS) and Google Cloud Platform (GCP). This article describes features of Azure Monitor that allow you to provide complete monitoring across your AWS and GCP environments. ++## Virtual machines +[VM insights](vm/vminsights-overview.md) in Azure Monitor uses [Azure Arc-enabled servers](../azure-arc/servers/overview.md) to provide a consistent experience between both Azure virtual machines and your AWS EC2 or GCP VM instances. You can view your hybrid machines right alongside your Azure machines and onboard them using identical methods. This includes using standard Azure constructs such as Azure Policy and applying tags. ++The [Azure Monitor agent](agents/agents-overview.md) installed by VM insights collects telemetry from the client operating system of virtual machines regardless of their location. Use the same [data collection rules](essentials/data-collection-rule-overview.md) to define your data collection for all of the virtual machines across your different cloud environments. 
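As a hedged sketch of what this onboarding can look like from the command line: once a machine is connected with Azure Arc, the Azure Monitor agent can be deployed as a machine extension and then associated with a data collection rule. The machine, resource group, subscription GUID, and DCR names below are placeholders; the script prints the commands for review rather than executing them, and the exact flags may vary by `connectedmachine` and `monitor-control-service` CLI extension versions.

```shell
#!/bin/sh
# Placeholder names - substitute your own Arc-enabled machine and DCR.
SUB="00000000-0000-0000-0000-000000000000"
RG="my-resource-group"
MACHINE="my-aws-ec2-instance"
DCR_ID="/subscriptions/$SUB/resourceGroups/$RG/providers/Microsoft.Insights/dataCollectionRules/my-dcr"

# 1. Install the Azure Monitor agent extension on the Arc-enabled server.
INSTALL="az connectedmachine extension create --machine-name $MACHINE --resource-group $RG --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorLinuxAgent"

# 2. Associate the machine with the data collection rule that defines what to collect.
MACHINE_ID="/subscriptions/$SUB/resourceGroups/$RG/providers/Microsoft.HybridCompute/machines/$MACHINE"
ASSOCIATE="az monitor data-collection rule association create --name my-dcr-association --resource $MACHINE_ID --rule-id $DCR_ID"

# Print for review; run each command after 'az login' with the required CLI extensions installed.
echo "$INSTALL"
echo "$ASSOCIATE"
```

Because the machine is addressed through its `Microsoft.HybridCompute` resource ID, the same DCR can be associated with Azure VMs and Arc-enabled servers alike.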
++- [Plan and deploy Azure Arc-enabled servers](../azure-arc/servers/plan-at-scale-deployment.md) +- [Manage Azure Monitor Agent](agents/azure-monitor-agent-manage.md) +- [Enable VM insights overview](vm/vminsights-enable-overview.md) ++If you use Defender for Cloud for security management and threat detection, then you can use auto provisioning to automate the deployment of the Azure Arc agent to your AWS EC2 and GCP VM instances. ++- [Connect your AWS accounts to Microsoft Defender for Cloud](../defender-for-cloud/quickstart-onboard-aws.md) +- [Connect your GCP projects to Microsoft Defender for Cloud](../defender-for-cloud/quickstart-onboard-gcp.md) ++## Kubernetes +[Container insights](containers/container-insights-overview.md) in Azure Monitor uses [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) to provide a consistent experience between both [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) and Kubernetes clusters in your AWS EKS or GCP GKE instances. You can view your hybrid clusters right alongside your Azure clusters and onboard them using identical methods. This includes using standard Azure constructs such as Azure Policy and applying tags. ++The [Azure Monitor agent](agents/agents-overview.md) installed by Container insights collects telemetry from the client operating system of clusters regardless of their location. Use the same analysis tools in Container insights to monitor clusters across your different cloud environments. 
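As a sketch under stated assumptions (cluster, resource group, subscription GUID, and workspace names are placeholders), an EKS or GKE cluster can first be connected to Azure Arc with `az connectedk8s connect`, and Container insights can then be enabled through the `azuremonitor-containers` cluster extension. Verify the flags against your `connectedk8s` and `k8s-extension` CLI extension versions; the script prints the commands rather than running them.

```shell
#!/bin/sh
# Placeholder names - substitute your own cluster and workspace.
SUB="00000000-0000-0000-0000-000000000000"
RG="my-resource-group"
CLUSTER="my-eks-cluster"
WORKSPACE_ID="/subscriptions/$SUB/resourceGroups/$RG/providers/Microsoft.OperationalInsights/workspaces/my-workspace"

# 1. Connect the existing non-Azure cluster to Azure Arc using the current kubeconfig context.
CONNECT="az connectedk8s connect --name $CLUSTER --resource-group $RG"

# 2. Enable Container insights on the Arc-enabled cluster via the cluster extension.
ENABLE="az k8s-extension create --name azuremonitor-containers --cluster-name $CLUSTER --resource-group $RG --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings logAnalyticsWorkspaceResourceID=$WORKSPACE_ID"

# Print for review; run after 'az login' with the connectedk8s and k8s-extension CLI extensions installed.
echo "$CONNECT"
echo "$ENABLE"
```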
++- [Connect an existing Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md) +- [Azure Monitor Container Insights for Azure Arc-enabled Kubernetes clusters](containers/container-insights-enable-arc-enabled-clusters.md) +- [Monitoring Azure Kubernetes Service (AKS) with Azure Monitor](../aks/monitor-aks.md) ++## Applications +Applications hosted outside of Azure must be instrumented to send telemetry to [Azure Monitor Application Insights](app/app-insights-overview.md) using SDKs for [supported languages](app/app-insights-overview.md#supported-languages). Annual code maintenance should be planned to upgrade the SDKs per [Application Insights SDK support guidance](app/sdk-support-guidance.md). ++- If you use [Grafana](https://grafana.com/grafana/) for visualization of monitoring data across your different clouds, use the [Azure Monitor data source](https://grafana.com/docs/grafana/latest/datasources/azure-monitor/) to include application log and metric data in your dashboards. +- If you use [Datadog](https://www.datadoghq.com/), use [Azure integrations](https://www.datadoghq.com/blog/azure-monitoring-enhancements/) to include application log and metric data in your Datadog UI. +++## Audit +In addition to monitoring the health of your cloud resources, you can consolidate auditing data from your AWS and GCP clouds into your Log Analytics workspace so that you can centralize your analysis and reporting. This is best performed with Microsoft Sentinel, which uses the same workspace as Azure Monitor and provides additional features for collecting and analyzing security and auditing data. ++Use the following methods to ingest AWS service log data into Microsoft Sentinel. 
++- [Microsoft Sentinel connector](../sentinel/connect-aws.md) +- [Azure function](https://github.com/andedevsecops/AWS-CloudTrail-AzFunc) +- [AWS Lambda function](https://github.com/andedevsecops/aws-data-connector-az-sentinel) +++Use the following plugins and methods to collect GCP events, including Pub/Sub events and events stored in GCP Cloud Storage, and ingest them into Log Analytics. ++- [Google Cloud Storage Input Plugin](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-google_cloud_storage.html) +- [GCP Cloud Functions](https://github.com/andedevsecops/azure-sentinel-gcp-data-connector) +- [Google_pubsub input plugin](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-google_pubsub.html#plugins-inputs-google_pubsub) +- [Azure Log Analytics output plugin for Logstash](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/microsoft-logstash-output-azure-loganalytics) +++## Custom data sources +Use the following methods to collect data from your cloud resources when it doesn't fit the standard collection methods. ++- Send custom log data from any REST API client with the [Logs Ingestion API in Azure Monitor](logs/logs-ingestion-api-overview.md) +- Use Logstash to collect data and the [Azure Log Analytics output plugin for Logstash](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/microsoft-logstash-output-azure-loganalytics) to ingest it into a Log Analytics workspace. ++## Automation +[Azure Automation](../automation/overview.md) delivers cloud-based automation, operating system updates, and configuration services that support consistent management across your Azure and non-Azure environments. It includes process automation, configuration management, update management, shared capabilities, and heterogeneous features. 
[Hybrid Runbook Worker](../automation/automation-hybrid-runbook-worker.md) enables automation runbooks to run directly on non-Azure virtual machines so they can manage local resources in the environment. ++Through [Arc-enabled servers](../azure-arc/servers/overview.md), Azure Automation provides a consistent deployment and management experience for your non-Azure machines. It enables integration with the Automation service using the VM extension framework to deploy the Hybrid Runbook Worker role and simplifies onboarding to Update Management and Change Tracking and Inventory. + |
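The custom data sources row above points at the Logs Ingestion API. As a minimal sketch of the request shape that API expects (the DCE endpoint, DCR immutable ID, stream name, and `api-version` value here are assumptions for illustration; a real call also needs an Azure AD bearer token in an `Authorization` header):

```python
import json

def build_ingestion_request(dce_endpoint, dcr_immutable_id, stream_name, records):
    """Return the URL and JSON body for a Logs Ingestion API POST (sketch)."""
    url = (
        f"{dce_endpoint}/dataCollectionRules/{dcr_immutable_id}"
        f"/streams/{stream_name}?api-version=2023-01-01"  # api-version is an assumption
    )
    body = json.dumps(records)  # the API expects a JSON array of records
    return url, body

# All identifiers below are hypothetical placeholders.
url, body = build_ingestion_request(
    "https://my-dce.eastus-1.ingest.monitor.azure.com",  # hypothetical DCE
    "dcr-00000000000000000000000000000000",              # hypothetical DCR immutable ID
    "Custom-MyTable_CL",                                 # hypothetical custom stream
    [{"TimeGenerated": "2023-01-31T00:00:00Z", "Message": "hello"}],
)
```

The same payload shape applies whether the sender is a REST client or a Logstash output plugin, which is why the two bullets above are interchangeable for custom data.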
azure-monitor | Container Insights Syslog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-syslog.md | + + Title: Syslog collection with Container Insights +description: This article describes how to collect Syslog from AKS nodes using Container Insights. + Last updated : 01/31/2023++++# Syslog collection with Container Insights (preview) ++>[!NOTE] +> During the ongoing public preview, only command line onboarding is available. Portal onboarding is not available and will be added in March 2023. ++Container Insights offers the ability to collect Syslog events from Linux nodes in your [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md) clusters. Customers can use Syslog for monitoring security and health events, typically by ingesting syslog into SIEM systems like [Microsoft Sentinel](https://azure.microsoft.com/products/microsoft-sentinel/#overview). ++## Prerequisites ++- You will need to have managed identity authentication enabled on your cluster. To enable, see [migrate your AKS cluster to managed identity authentication](container-insights-enable-existing-clusters.md?tabs=azure-cli#migrate-to-managed-identity-authentication). Note: This will create a Data Collection Rule (DCR) named `MSCI-<WorkspaceRegion>-<ClusterName>`. +- Minimum versions of Azure components + - **Azure CLI**: Minimum version required for Azure CLI is [2.44.1 (link to release notes)](/cli/azure/release-notes-azure-cli#january-11-2023). See [How to update the Azure CLI](/cli/azure/update-azure-cli) for upgrade instructions. + - **Azure CLI AKS-Preview Extension**: Minimum version required for AKS-Preview Azure CLI extension is [0.5.125 (link to release notes)](https://github.com/Azure/azure-cli-extensions/blob/main/src/aks-preview/HISTORY.rst#05125). See [How to update extensions](/cli/azure/azure-cli-extensions-overview#how-to-update-extensions) for upgrade guidance.
+ - **Linux image version**: Minimum version for AKS node Linux image is 2022.11.01. See [Upgrade Azure Kubernetes Service (AKS) node images](https://learn.microsoft.com/azure/aks/node-image-upgrade) for upgrade help. ++## How to enable Syslog + +Use the following command in Azure CLI to enable syslog collection when you create a new AKS cluster. ++```azurecli +az aks create -g syslog-rg -n new-cluster --enable-managed-identity --node-count 1 --enable-addons monitoring --enable-msi-auth-for-monitoring --enable-syslog --generate-ssh-key +``` + +Use the following command in Azure CLI to enable syslog collection on an existing AKS cluster. ++```azurecli +az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring --enable-syslog -g syslog-rg -n existing-cluster +``` +++## How to access Syslog data + +Syslog data is stored in the [Syslog](/azure/azure-monitor/reference/tables/syslog) table in your Log Analytics workspace. You can create your own [log queries](../logs/log-query-overview.md) in [Log Analytics](../logs/log-analytics-overview.md) to analyze this data or use any of the [prebuilt queries](../logs/log-query-overview.md). +++You can open Log Analytics from the **Logs** menu in the **Monitor** menu to access Syslog data for all clusters or from the AKS cluster's menu to access Syslog data for only that cluster. + + +### Sample queries + +The following table provides different examples of log queries that retrieve Syslog records.
++| Query | Description | +|: |: | +| `Syslog` |All Syslogs | +| `Syslog | where SeverityLevel == "error"` |All Syslog records with severity of error | +| `Syslog | summarize AggregatedValue = count() by Computer` |Count of Syslog records by computer | +| `Syslog | summarize AggregatedValue = count() by Facility` |Count of Syslog records by facility | ++## Editing your Syslog collection settings ++To modify the configuration for your Syslog collection, you modify the [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) that was created when you enabled it. ++Select **Data Collection Rules** from the **Monitor** menu in the Azure portal. +++Select your DCR and then **View data sources**. Select the **Linux Syslog** data source to view the Syslog collection details. +>[!NOTE] +> A DCR is created automatically when you enable syslog. The DCR follows the naming convention `MSCI-<WorkspaceRegion>-<ClusterName>`. +++Select the minimum log level for each facility that you want to collect. ++++## Known limitations ++- **Onboarding**. Syslog collection can only be enabled from command line during public preview. +- **Container restart data loss**. Agent Container restarts can lead to syslog data loss during public preview. ++## Next steps ++- Read more about [Syslog record properties](/azure/azure-monitor/reference/tables/syslog) ++ |
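The sample queries in the row above can also be run programmatically through the Log Analytics query REST API. A minimal sketch of the request for the severity-filter query (the workspace ID is a placeholder, and a real call needs a bearer token in an `Authorization` header):

```python
import json

def build_syslog_query(workspace_id, severity="error", timespan="P1D"):
    """Return URL and JSON body for a Log Analytics query API POST (sketch)."""
    url = f"https://api.loganalytics.azure.com/v1/workspaces/{workspace_id}/query"
    body = json.dumps({
        "query": f'Syslog | where SeverityLevel == "{severity}"',
        "timespan": timespan,  # ISO 8601 duration applied to the query
    })
    return url, body

# Workspace ID below is a hypothetical placeholder.
url, body = build_syslog_query("00000000-0000-0000-0000-000000000000")
```

Passing the timespan in the body rather than embedding a `where TimeGenerated > ago(...)` clause keeps the query text reusable across different windows.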
azure-monitor | Access Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/access-api.md | Use the `https://api.loganalytics.azure.com` endpoint. ##### Client Credentials Token URL (POST request) ```http- POST /<your-tenant-id>/oauth2/v2.0/token + POST /<your-tenant-id>/oauth2/token Host: https://login.microsoftonline.com Content-Type: application/x-www-form-urlencoded |
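The diff above moves the token URL from the v2.0 endpoint to the Azure AD v1 endpoint (`/oauth2/token`), which takes a `resource` parameter rather than the v2.0 `scope`. A sketch of the form body for that client-credentials request (tenant, client ID, and secret are placeholders):

```python
from urllib.parse import urlencode

def build_token_request(tenant_id, client_id, client_secret):
    """Return URL and x-www-form-urlencoded body for the v1 token POST (sketch)."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    form = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,          # hypothetical app registration ID
        "client_secret": client_secret,  # hypothetical secret
        # v1 endpoint: target audience goes in `resource`, not `scope`
        "resource": "https://api.loganalytics.azure.com",
    })
    return url, form
```

The returned access token is then sent as `Authorization: Bearer <token>` on calls to `https://api.loganalytics.azure.com`.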
azure-monitor | Log Analytics Workspace Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-insights-overview.md | Title: Log Analytics Workspace Insights -description: An overview of Log Analytics Workspace Insights - ingestion, usage, health, agents and more +description: An overview of Log Analytics Workspace Insights usage, performance, health, agents, queries, and change log. Last updated 06/27/2022 # Log Analytics Workspace Insights -Log Analytics Workspace Insights provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights. +Log Analytics Workspace Insights provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article helps you understand how to onboard and use Log Analytics Workspace Insights. -## Overview your Log Analytics workspaces +## Overview of your Log Analytics workspaces -When accessing Log Analytics Workspace Insights through the Azure Monitor Insights, the 'At scale' perspective is shown. Here you can see how your workspaces are spread across the globe, review their retention, capping and license details (color coded), and choose a workspace to see its insights. +When you access Log Analytics Workspace Insights through Azure Monitor Insights, the **At scale** perspective is shown. Here you can: +- See how your workspaces are spread across the globe. +- Review their retention. +- See color-coded capping and license details. +- Choose a workspace to see its insights. +To start Log Analytics Workspace Insights at scale: -To launch Log Analytics Workspace Insights at scale, perform the following steps: +1. Sign in to the [Azure portal](https://portal.azure.com/). -1. 
Sign into the [Azure portal](https://portal.azure.com/) +1. Select **Monitor** from the left pane in the Azure portal. Under the **Insights Hub** section, select **Log Analytics Workspace Insights**. -2. Select **Monitor** from the left-hand pane in the Azure portal, and under the Insights Hub section, select **Log Analytics Workspace Insights**. +## View insights for a Log Analytics workspace -## View Insights for a Log Analytics workspace +You can use insights in the context of a specific workspace to display rich data and analytics of the workspace performance, usage, health, agents, queries, and changes. -Launching Insights in the context of a specific workspace displays rich data and analytics of the workspace performance, usage, health, agents, queries, and change log. -- To access Log Analytics Workspace Insights: -1. Open Log Analytics Workspace Insights from Azure Monitor (explained above) +1. Open Log Analytics Workspace Insights from Azure Monitor (as previously explained). -2. Select a workspace to drill into +1. Select a workspace to drill into. -Or +Or use these steps: -1. In the Azure portal, select **Log Analytics Workspaces** +1. In the Azure portal, select **Log Analytics Workspaces**. -2. Choose a Log Analytics Workspace +1. Choose a Log Analytics workspace. -3. Select **Insights** on the Workspace menu (under Monitoring) --The data is organized in tabs, and the time range on top (defaults to 24 hours) applies to all tabs. Some charts and tables use a different time range, as indicated in their titles. +1. Under **Monitoring**, select **Insights** on the workspace menu. +The data is organized in tabs. The time range on top defaults to 24 hours and applies to all tabs. Some charts and tables use a different time range, as indicated in their titles. 
## Overview tab -On the **Overview** tab you can see: --* Main stats and settings - - The monthly ingestion volume of the workspace - - How many machines sent heartbeats, meaning - machines that are connected to this workspace (in the selected time range) - - Machines that haven't sent heartbeats in the last hour (in the selected time range) - - The data retention period set - - The daily cap set, and how much data was already ingested on the recent day +On the **Overview** tab, you can see: -* Top 5 tables – charts analyzing the 5 most ingested tables, over the past month. - - Volume of data ingested to each table - - The daily ingestion to each of them - to visually display spikes or dips - - Ingestion anomalies - a list of identified spikes and dips in ingestion to these tables +* **Main statistics and settings**: + - The monthly ingestion volume of the workspace. + - How many machines sent heartbeats. That is, the machines that are connected to this workspace in the selected time range. + - Machines that haven't sent heartbeats in the last hour in the selected time range. + - The data retention period set. + - The daily cap set and how much data was already ingested on the recent day. +* **Top five tables**: Charts that analyze the five most-ingested tables over the past month: + - The volume of data ingested to each table. + - The daily ingestion to each of them to visually display spikes or dips. + - Ingestion anomalies: A list of identified spikes and dips in ingestion to these tables. ## Usage tab -### Usage dashboard +This tab provides a dashboard display. -This tab provides information on the workspace's usage. -The dashboard sub-tab shows ingestion data of by tables, and defaults to the 5 most ingested tables in the selected time range (same tables displayed in the Overview page). You can choose which tables to display through the Workspace Tables dropdown. +### Usage dashboard +This tab provides information on the workspace's usage.
The dashboard subtab shows ingestion data displayed in tables. It defaults to the five most-ingested tables in the selected time range. These same tables are displayed on the **Overview** page. Use the **Workspace Tables** dropdown to choose which tables to display. -* Main grid - here you can see tables grouped by solutions, and information about each table" - - How much data was ingested to it (during the selected time range) - - The percentage this table takes, from the entire ingestion volume (during the selected time range). That helps identify the tables that affect your ingestion the most. In the below screenshot you can see AzureDiagnostics and ContainerLog alone stand for over 2 thirds (64%) of the data ingested to this workspace. - - When was the last update of usage statistics regarding each table - we normally expect usage stats to refresh hourly. Since refreshing usage statistics is a recurrent service-internal operation, a delay in refreshing that data is only noted so you would know to interpret the data correctly. There is no action you (as a user) should take. - - Billable - indicates which tables are billed for, and which are free. -* Table-specific details +* **Main grid**: Tables are grouped by solutions with information about each table: + - How much data was ingested to it during the selected time range. + - The percentage this table takes from the entire ingestion volume during the selected time range: This information helps identify the tables that affect your ingestion the most. In the following screenshot, you can see `AzureDiagnostics` and `ContainerLog` alone stand for more than two-thirds (64%) of the data ingested to this workspace. + - The last update of usage statistics regarding each table: We normally expect usage statistics to refresh hourly. Refreshing usage statistics is a recurrent service-internal operation. A delay in refreshing that data is only noted so that you know to interpret the data correctly. 
There's no action you should take. + - **Billable**: Indicates which tables are billed for and which are free. - On the bottom of the page, you can see detailed information on the table selected in the main grid. - - Ingestion volume - how much data was ingested to the table from each resource, and how it spreads over time. Resources ingesting over 30% of the total volume sent to this table are marked with a warning sign, for you to take note of. - - Ingestion latency - how much time ingestion took, analyzed for the 50th, 90th or 95th percentiles of requests sent to this table. The top chart in this area depicts the total ingestion time of the requests (for the selected percentile) from end to end - from the time the event occurred, and until it was ingested to the workspace. - The chart below it shows separately the latency of the agent (the time it took the agent to send the log to the workspace) and that of the pipeline (the time it took the service to process the data and push it to the workspace). - :::image type="content" source="media/log-analytics-workspace-insights-overview/workspace-usage-ingestion-latency.png" alt-text="Screenshot of the workspace usage ingestion latency sub-tab" lightbox="media/log-analytics-workspace-insights-overview/workspace-usage-ingestion-latency.png"::: +* **Table-specific details**: At the bottom of the page, you can see detailed information on the table selected in the main grid: + - **Ingestion volume**: How much data was ingested to the table from each resource and how it spreads over time. Resources ingesting more than 30% of the total volume sent to this table are marked with a warning sign. + - **Ingestion latency**: How much time ingestion took, analyzed for the 50th, 90th, or 95th percentiles of requests sent to this table. The top chart in this area depicts the total ingestion time of the requests for the selected percentile from end to end. 
It spans from the time the event occurred until it was ingested to the workspace. + + The chart below it shows separately the latency of the agent, which is the time it took the agent to send the log to the workspace. The chart also shows the latency of the pipeline, which is the time it took the service to process the data and push it to the workspace. + :::image type="content" source="media/log-analytics-workspace-insights-overview/workspace-usage-ingestion-latency.png" alt-text="Screenshot that shows the workspace Usage tab Ingestion Latency subtab." lightbox="media/log-analytics-workspace-insights-overview/workspace-usage-ingestion-latency.png"::: ### Additional usage queries -The Additional queries sub-tab exposes queries that run across all workspace tables (instead of relying on the usage metadata, refreshed hourly). Since their queries are much more extensive and less efficient, they are not run automatically. However, they can surface interesting information about which resources send most logs to the workspace, and perhaps affect billing. +The **Additional Queries** subtab exposes queries that run across all workspace tables (instead of relying on the usage metadata, which is refreshed hourly). Because the queries are much more extensive and less efficient, they don't run automatically. They can reveal interesting information about which resources send the most logs to the workspace and perhaps affect billing. -One such query is 'What Azure resources send most logs to this workspace' (showing top 50). -In our demo workspace, you can clearly see that 3 Kuberbetes clusters send far more data than all other resources combined, and a particular one of them loads the workspace most. -+One such query is **What Azure resources send most logs to this workspace** (showing the top 50). +In the demo workspace, you can clearly see that three Kubernetes clusters send far more data than all other resources combined. One cluster loads the workspace the most. 
## Health tab -This tab shows the workspace health state and when it was last reported, as well as operational [errors and warnings](../logs/monitor-workspace.md) (retrieved from the _LogOperation table). You can find more details on the listed issues as well as mitigation steps in [here](../logs/monitor-workspace.md#categories). +This tab shows the workspace health state, when it was last reported, and operational [errors and warnings](../logs/monitor-workspace.md) retrieved from the `_LogOperation` table. For more information on the listed issues and mitigation steps, see [Monitor health of a Log Analytics workspace in Azure Monitor](../logs/monitor-workspace.md#categories). ## Agents tab -This tab provides information on the agents sending logs to this workspace. +This tab provides information on the agents that send logs to this workspace. -* Operation errors and warnings - these are errors and warning related specifically to agents. They are grouped by the error/warning title to help you get a clearer view of different issues that may occur, but can be expanded to show the exact times and resources they refer to. Also note you can click 'Run query in Logs' to query the _LogOperation table through the Logs experience, see the raw data and analyze if further. -* Workspace agents - these are the agents that sent logs to the workspace during the selected time range. You can see the agents' types and health state. Agents marked healthy aren't necessarily working well - it only indicated they sent a heartbeat during the last hour. A more detailed health state is detailed in the below grid. -* Agents activity - this grid shows information on either all agents, healthy or unhealthy agents. Here too "Healthy" only indicated the agent send a heartbeat during the last hour. To understand its state better, review the trend shown in the grid - it shows how many heartbeats this agent sent over time. 
The true health state can only be inferred if you know how the monitored resource operates, for example - If a computer is intentionally shut down at particular times, you can expect the agent's heartbeats to appear intermittenly, in a matching pattern. +* **Operation errors and warnings**: These errors and warnings are related specifically to agents. They're grouped by the error/warning title to help you get a clearer view of different issues that might occur. They can be expanded to show the exact times and resources to which they refer. You can select **Run query in Logs** to query the `_LogOperation` table through the Logs experience to see the raw data and analyze it further. +* **Workspace agents**: These agents are the ones that sent logs to the workspace during the selected time range. You can see the types and health state of the agents. Agents marked **Healthy** aren't necessarily working well. This designation only indicates that they sent a heartbeat during the last hour. A more detailed health state is described in the grid. +* **Agents activity**: This grid shows information on either all agents or healthy or unhealthy agents. Here too **Healthy** only indicates that the agent sent a heartbeat during the last hour. To understand its state better, review the trend shown in the grid. It shows how many heartbeats this agent sent over time. The true health state can only be inferred if you know how the monitored resource operates. For example, if a computer is intentionally shut down at particular times, you can expect the agent's heartbeats to appear intermittently, in a matching pattern. +## Query Audit tab -## Query audit tab +Query auditing creates logs about the execution of queries on the workspace. If enabled, this data is beneficial to understanding and improving the performance, efficiency, and load for queries. To enable query auditing on your workspace or learn more about it, see [Audit queries in Azure Monitor Logs](../logs/query-audit.md). 
-Query auditing creates logs about the execution of queries on the workspace. If enabled, this data is greatly beneficial to understanding and improving queries performance, efficiency and load. To enable query auditing on your workspace or learn more about it, see [Audit queries in Azure Monitor Logs](../logs/query-audit.md). +#### Performance -#### Performance This tab shows:-* Query duration - 95th percentile and 50th percentile (median) duration in ms, over time. -* Number of rows returned - 95th percentile and 50th percentile (median) of rows count, over time. -* The volume of data processed - 95th percentile, 50th percentile, and the total of processed data in all requests, over time. -* Response codes - the distribution of response codes to all queries in the selected time range. +* **Query duration**: The 95th percentile and 50th percentile (median) duration in ms, over time. +* **Number of rows returned**: The 95th percentile and 50th percentile (median) of rows count, over time. +* **The volume of data processed**: The 95th percentile, 50th percentile, and the total of processed data in all requests, over time. +* **Response codes**: The distribution of response codes to all queries in the selected time range. + -### Slow and inefficient queries -This tab shows two grids to help you identify slow and inefficient queries you may want to re-think. These queries should not be used in dashboards or alerts, since they will create unneeded chronic load on your workspace. -* Most resource-intensive queries - the 10 most CPU-demanding queries, along with the volume of data processed (KB), the time range and text of each query. -* Slowest queries - the 10 slowest queries, along with the time range and text of each query. +### Slow and inefficient queries +The **Slow & Inefficient Queries** subtab shows two grids to help you identify slow and inefficient queries you might want to rethink.
These queries shouldn't be used in dashboards or alerts because they'll create unneeded chronic load on your workspace. +* **Most resource-intensive queries**: The 10 most CPU-demanding queries, along with the volume of data processed (KB), the time range, and the text of each query. +* **Slowest queries**: The 10 slowest queries, along with the time range and text of each query. -### Query users -This tab shows users activity against this workspace: -* Queries by user - how many queries each user ran in the selected time range -* Throttled users - users that ran queries that were throttled (due to over-querying the workspace) +### Query users +The **Users** subtab shows user activity against this workspace: -## Change log tab +* **Queries by user**: How many queries each user ran in the selected time range. +* **Throttled users**: Users that ran queries that were throttled because of over-querying the workspace. -This tab shows configuration changes made on the workspace during the last 90 days (regardless of the time range selected), and who performed them. -It is intended to help you monitor who changes important workspace settings, such as data capping or workspace license. +## Change Log tab +This tab shows configuration changes made on the workspace during the last 90 days regardless of the time range selected. It also shows who made the changes. It's intended to help you monitor who changes important workspace settings, such as data capping or workspace license. ## Next steps -Learn the scenarios workbooks are designed to support, how to author new and customize existing reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../visualize/workbooks-overview.md). +To learn the scenarios that workbooks are designed to support and how to author new and customize existing reports, see [Create interactive reports with Azure Monitor workbooks](../visualize/workbooks-overview.md). |
azure-monitor | Private Link Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-configure.md | Title: Configure your Private Link -description: Configure Private Link + Title: Configure your private link +description: This article shows the steps to configure a private link. Last updated 1/5/2022 -# Configure your Private Link -Configuring a Private Link requires a few steps: -* Creating a Private Link Scope with resources -* Creating a Private Endpoint on your network and connecting it to the scope -* Configuring the required access on your Azure Monitor resources. +# Configure your private link +Configuring an instance of Azure Private Link requires you to: -This article reviews how it's done through the Azure portal and provides an example Azure Resource Manager (ARM) template to automate the process. +* Create an Azure Monitor Private Link Scope (AMPLS) with resources. +* Create a private endpoint on your network and connect it to the scope. +* Configure the required access on your Azure Monitor resources. -## Create a Private Link connection through the Azure portal -In this section, we review the process of setting up a Private Link through the Azure portal, step by step. See [Use APIs and command line](#use-apis-and-command-line) to create and manage a Private Link using the command line or an Azure Resource Manager template (ARM template). +This article reviews how configuration is done through the Azure portal. It provides an example Azure Resource Manager template (ARM template) to automate the process. ++## Create a private link connection through the Azure portal +In this section, we review the step-by-step process of setting up a private link through the Azure portal. To create and manage a private link by using the command line or an ARM template, see [Use APIs and the command line](#use-apis-and-the-command-line). ### Create an Azure Monitor Private Link Scope 1. 
Go to **Create a resource** in the Azure portal and search for **Azure Monitor Private Link Scope**. -  +  -2. Select **create**. -3. Pick a Subscription and Resource Group. -4. Give the AMPLS a name. It's best to use a meaningful and clear name, such as "AppServerProdTelem". -5. Select **Review + Create**. +1. Select **Create**. +1. Select a subscription and resource group. +1. Give the AMPLS a name. Use a meaningful and clear name like *AppServerProdTelem*. +1. Select **Review + create**. -  +  -6. Let the validation pass, and then select **Create**. +1. Let the validation pass and select **Create**. ### Connect Azure Monitor resources -Connect Azure Monitor resources (Log Analytics workspaces, Application Insights components and [Data Collection endpoints](../essentials/data-collection-endpoint-overview.md)) to your AMPLS. +Connect Azure Monitor resources like Log Analytics workspaces, Application Insights components, and [data collection endpoints](../essentials/data-collection-endpoint-overview.md) to your AMPLS. -1. In your Azure Monitor Private Link scope, select **Azure Monitor Resources** in the left-hand menu. Select the **Add** button. -2. Add the workspace or component. Selecting the **Add** button brings up a dialog where you can select Azure Monitor resources. You can browse through your subscriptions and resource groups, or you can type in their name to filter down to them. Select the workspace or component and select **Apply** to add them to your scope. +1. In your AMPLS, select **Azure Monitor Resources** in the menu on the left. Select **Add**. +1. Add the workspace or component. Selecting **Add** opens a dialog where you can select Azure Monitor resources. You can browse through your subscriptions and resource groups. You can also enter their names to filter down to them. Select the workspace or component and select **Apply** to add them to your scope.
-  +  > [!NOTE]-> Deleting Azure Monitor resources requires that you first disconnect them from any AMPLS objects they are connected to. It's not possible to delete resources connected to an AMPLS. +> Deleting Azure Monitor resources requires that you first disconnect them from any AMPLS objects they're connected to. It's not possible to delete resources connected to an AMPLS. ### Connect to a private endpoint -Now that you have resources connected to your AMPLS, create a private endpoint to connect our network. You can do this task in the [Azure portal Private Link center](https://portal.azure.com/#blade/Microsoft_Azure_Network/PrivateLinkCenterBlade/privateendpoints), or inside your Azure Monitor Private Link Scope, as done in this example. +Now that you have resources connected to your AMPLS, create a private endpoint to connect your network. You can do this task in the [Azure portal Private Link Center](https://portal.azure.com/#blade/Microsoft_Azure_Network/PrivateLinkCenterBlade/privateendpoints) or inside your AMPLS, as done in this example. -1. In your scope resource, select **Private Endpoint connections** in the left-hand resource menu. Select **Private Endpoint** to start the endpoint create process. You can also approve connections that were started in the Private Link center here by selecting them and selecting **Approve**. +1. In your scope resource, select **Private Endpoint connections** from the resource menu on the left. Select **Private Endpoint** to start the endpoint creation process. You can also approve connections that were started in the Private Link Center here by selecting them and selecting **Approve**. - :::image type="content" source="./media/private-link-security/ampls-select-private-endpoint-connect-3.png" alt-text="Screenshot of Private Endpoint Connections UX." 
lightbox="./media/private-link-security/ampls-select-private-endpoint-connect-3.png"::: + :::image type="content" source="./media/private-link-security/ampls-select-private-endpoint-connect-3.png" alt-text="Screenshot that shows Private Endpoint connections." lightbox="./media/private-link-security/ampls-select-private-endpoint-connect-3.png"::: -2. Pick the subscription, resource group, and name of the endpoint, and the region it should live in. The region needs to be the same region as the VNet you connect it to. +1. Select the subscription, resource group, name of the endpoint, and the region it should live in. The region must be the same region as the virtual network to which you connect it. -3. Select **Next: Resource**. +1. Select **Next: Resource**. -4. In the Resource tab: - 1. Pick the **Subscription** that contains your Azure Monitor Private Scope resource. - 1. For **resource type**, choose **Microsoft.insights/privateLinkScopes**. - 1. From the **resource** drop-down, choose your Private Link scope you created earlier. - 1. Select **Next: Virtual Network >**. +1. On the **Resource** tab: + 1. Select the subscription that contains your Azure Monitor Private Link Scope resource. + 1. For **Resource type**, select **Microsoft.insights/privateLinkScopes**. + 1. From the **Resource** dropdown, select the Private Link Scope you created earlier. + 1. Select **Next: Virtual Network**. - :::image type="content" source="./media/private-link-security/ampls-select-private-endpoint-create-4.png" alt-text="Screenshot of the Create a private endpoint page in the Azure portal with the Resource tab selected." lightbox="./media/private-link-security/ampls-select-private-endpoint-create-4.png"::: + :::image type="content" source="./media/private-link-security/ampls-select-private-endpoint-create-4.png" alt-text="Screenshot that shows the Create a private endpoint page in the Azure portal with the Resource tab selected." 
lightbox="./media/private-link-security/ampls-select-private-endpoint-create-4.png"::: -5. On the Virtual Network tab: - 1. Choose the **virtual network** and **subnet** that you want to connect to your Azure Monitor resources. - 1. For Network policy for private endpoints, select **edit** if you want to apply Network security groups and/or Route tables to the subnet that contains the private endpoint. In **Edit subnet network policy**, select the checkbox next to **Network security groups** and **Route Tables**. Select **Save**. +1. On the **Virtual Network** tab: + 1. Select the virtual network and subnet that you want to connect to your Azure Monitor resources. + 1. For **Network policy for private endpoints**, select **edit** if you want to apply network security groups or Route tables to the subnet that contains the private endpoint. In **Edit subnet network policy**, select the checkboxes next to **Network security groups** and **Route tables**. Select **Save**. For more information, see [Manage network policies for private endpoints](../../private-link/disable-private-endpoint-network-policy.md). - 1. For Private IP configuration, by default, **Dynamically allocate IP address** is selected. If you want to assign a static IP address, select **Statically allocate IP address** and then enter a **Name** and **Private IP**. - 1. Optionally, you can select or create an **Application security group**. Application security groups allow you to group virtual machines and define network security policies based on those groups. - 1. Select **Next: DNS >**. + 1. For **Private IP configuration**, by default, **Dynamically allocate IP address** is selected. If you want to assign a static IP address, select **Statically allocate IP address**. Then enter a name and private IP. + 1. Optionally, you can select or create an **Application security group**. 
You can use application security groups to group virtual machines and define network security policies based on those groups. + 1. Select **Next: DNS**. - :::image type="content" source="./media/private-link-security/ampls-select-private-endpoint-create-5.png" alt-text="Screenshot of the Create a private endpoint page in the Azure portal with the Virtual Network tab selected." lightbox="./media/private-link-security/ampls-select-private-endpoint-create-5.png"::: + :::image type="content" source="./media/private-link-security/ampls-select-private-endpoint-create-5.png" alt-text="Screenshot that shows the Create a private endpoint page in the Azure portal with the Virtual Network tab selected." lightbox="./media/private-link-security/ampls-select-private-endpoint-create-5.png"::: ++1. On the **DNS** tab: + 1. Select **Yes** for **Integrate with private DNS zone**, and let it automatically create a new private DNS zone. The actual DNS zones might be different from what's shown in the following screenshot. - -6. On the DNS tab: - 1. Choose **Yes** for **Integrate with private DNS zone**, and let it automatically create a new Private DNS Zone. The actual DNS zones may be different from what is shown in the screenshot below. > [!NOTE]- > If you choose **No** and prefer to manage DNS records manually, first complete setting up your Private Link - including this Private Endpoint and the AMPLS configuration. Then, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md). Make sure not to create empty records as preparation for your Private Link setup. The DNS records you create can override existing settings and impact your connectivity with Azure Monitor. + > If you select **No** and prefer to manage DNS records manually, first finish setting up your private link. Include this private endpoint and the AMPLS configuration. 
Then, configure your DNS according to the instructions in [Azure private endpoint DNS configuration](../../private-link/private-endpoint-dns.md). Make sure not to create empty records as preparation for your private link setup. The DNS records you create can override existing settings and affect your connectivity with Azure Monitor. 1. Select **Review + create**. - :::image type="content" source="./media/private-link-security/ampls-select-private-endpoint-create-6.png" alt-text="Screenshot of the Create a private endpoint page in the Azure portal with the DNS tab selected." lightbox="./media/private-link-security/ampls-select-private-endpoint-create-6.png"::: + :::image type="content" source="./media/private-link-security/ampls-select-private-endpoint-create-6.png" alt-text="Screenshot that shows the Create a private endpoint page in the Azure portal with the DNS tab selected." lightbox="./media/private-link-security/ampls-select-private-endpoint-create-6.png"::: -7. On the Review + create tab: +1. On the **Review + create** tab: 1. Let validation pass.- 1. Select **Create**. --You've now created a new private endpoint that is connected to this AMPLS. + 1. Select **Create**. +You've now created a new private endpoint that's connected to this AMPLS. ## Configure access to your resources-So far we covered the configuration of your network, but you should also consider how you want to configure network access to your monitored resources - Log Analytics workspaces, Application Insights components and [Data Collection endpoints](../essentials/data-collection-endpoint-overview.md). +So far we've covered the configuration of your network. But you should also consider how you want to configure network access to your monitored resources like Log Analytics workspaces, Application Insights components, and [data collection endpoints](../essentials/data-collection-endpoint-overview.md). -Go to the Azure portal. 
In your resource's menu, there's a menu item called **Network Isolation** on the left-hand side. This page controls both which networks can reach the resource through a Private Link, and whether other networks can reach it or not. +Go to the Azure portal. On your resource's menu, find **Network Isolation** on the left side. This page controls which networks can reach the resource through a private link and whether other networks can reach it or not. + - +### Connected Azure Monitor Private Link Scopes +Here you can review and configure the resource's connections to an AMPLS. Connecting to an AMPLS allows traffic from the virtual network connected to each AMPLS to reach the resource. It has the same effect as connecting it from the scope as we did in the section [Connect Azure Monitor resources](#connect-azure-monitor-resources). -### Connected Azure Monitor Private Link scopes -Here you can review and configure the resource's connections to Azure Monitor Private Links scopes. Connecting to scopes (AMPLSs) allows traffic from the virtual network connected to each AMPLS to reach the resource. It has the same effect as connecting it from the scope as we did in [Connecting Azure Monitor resources](#connect-azure-monitor-resources). +To add a new connection, select **Add** and select the AMPLS. Select **Apply** to connect it. Your resource can connect to five AMPLS objects, as mentioned in [Consider AMPLS limits](./private-link-design.md#consider-ampls-limits). -To add a new connection, select **Add** and select the Azure Monitor Private Link Scope. Select **Apply** to connect it. Your resource can connect to five AMPLS objects, as mentioned in [Consider AMPLS limits](./private-link-design.md#consider-ampls-limits). +### Virtual networks access configuration: Manage access from outside of a Private Link Scope +The settings on the bottom part of this page control access from public networks, meaning networks not connected to the listed scopes. 
-### Virtual networks access configuration - Managing access from outside of private links scopes -The settings on the bottom part of this page control access from public networks, meaning networks not connected to the listed scopes (AMPLSs). +If you set **Accept data ingestion from public networks not connected through a Private Link Scope** to **No**, clients like machines or SDKs outside of the connected scopes can't upload data or send logs to the resource. -If you set **Accept data ingestion from public networks not connected through a Private Link Scope** to **No**, then clients (machines, SDKs, etc.) outside of the connected scopes can't upload data or send logs to the resource. +If you set **Accept queries from public networks not connected through a Private Link Scope** to **No**, clients like machines or SDKs outside of the connected scopes can't query data in the resource. -If you set **Accept queries from public networks not connected through a Private Link Scope** to **No**, then clients (machines, SDKs etc.) outside of the connected scopes can't query data in the resource. That data includes access to logs, metrics, and the live metrics stream, as well as experiences built on top such as workbooks, dashboards, query API-based client experiences, insights in the Azure portal, and more. Experiences running outside the Azure portal and that query Log Analytics data also have to be running within the private-linked VNET. +That data includes access to logs, metrics, and the live metrics stream. It also includes experiences built on top such as workbooks, dashboards, query API-based client experiences, and insights in the Azure portal. Experiences running outside the Azure portal and that query Log Analytics data also have to be running within the private-linked virtual network. +## Use APIs and the command line -## Use APIs and command line +You can automate the process described earlier by using ARM templates, REST, and command-line interfaces. 
-You can automate the process described earlier using Azure Resource Manager templates, REST, and command-line interfaces. +### Create and manage Private Link Scopes +To create and manage Private Link Scopes, use the [REST API](/rest/api/monitor/privatelinkscopes(preview)/private%20link%20scoped%20resources%20(preview)) or the [Azure CLI (az monitor private-link-scope)](/cli/azure/monitor/private-link-scope). -### Create and manage Azure Monitor Private Link Scopes (AMPLS) -To create and manage private link scopes, use the [REST API](/rest/api/monitor/privatelinkscopes(preview)/private%20link%20scoped%20resources%20(preview)) or [Azure CLI (az monitor private-link-scope)](/cli/azure/monitor/private-link-scope). +#### Create an AMPLS with Open access modes: CLI example +The following CLI command creates a new AMPLS resource named `"my-scope"`, with both query and ingestion access modes set to `Open`. -#### Create AMPLS with Open access modes - CLI example -The below CLI command creates a new AMPLS resource named "my-scope", with both query and ingestion access modes set to Open. ``` az resource create -g "my-resource-group" --name "my-scope" --api-version "2021-07-01-preview" --resource-type Microsoft.Insights/privateLinkScopes --properties "{\"accessModeSettings\":{\"queryAccessMode\":\"Open\", \"ingestionAccessMode\":\"Open\"}}" ``` -#### Create AMPLS with mixed access modes - PowerShell example -The below PowerShell script creates a new AMPLS resource named "my-scope", with the query access mode Open but the ingestion access modes set to PrivateOnly (meaning it will allow ingestion only to resources in the AMPLS). +#### Create an AMPLS with mixed access modes: PowerShell example +The following PowerShell script creates a new AMPLS resource named `"my-scope"`, with the query access mode set to `Open` but the ingestion access modes set to `PrivateOnly`. This setting means it will allow ingestion only to resources in the AMPLS. 
``` # scope details Select-AzSubscription -SubscriptionId $scopeSubscriptionId $scope = New-AzResource -Location "Global" -Properties $scopeProperties -ResourceName $scopeName -ResourceType "Microsoft.Insights/privateLinkScopes" -ResourceGroupName $scopeResourceGroup -ApiVersion "2021-07-01-preview" -Force ``` -#### Create AMPLS - Azure Resource Manager template (ARM template) -The below Azure Resource Manager template creates: -* A private link scope (AMPLS) named "my-scope", with query and ingestion access modes set to Open. -* A Log Analytics workspace named "my-workspace" -* Adds a scoped resource to the "my-scope" AMPLS, named "my-workspace-connection" +#### Create an AMPLS: ARM template +The following ARM template creates: ++* An AMPLS named `"my-scope"`, with query and ingestion access modes set to `Open`. +* A Log Analytics workspace named `"my-workspace"`. +* A scoped resource named `"my-workspace-connection"`, added to the `"my-scope"` AMPLS. > [!NOTE]-> Make sure you use a new API version (2021-07-01-preview or later) for the creation of the Private Link Scope object (type 'microsoft.insights/privatelinkscopes' below). The ARM template documented in the past used an old API version, which results in an AMPLS set with QueryAccessMode="Open" and IngestionAccessMode="PrivateOnly". +> Make sure you use a new API version (2021-07-01-preview or later) for the creation of the AMPLS object (type `microsoft.insights/privatelinkscopes` as follows). The ARM template documented in the past used an old API version, which results in an AMPLS set with `QueryAccessMode="Open"` and `IngestionAccessMode="PrivateOnly"`. ``` { The below Azure Resource Manager template creates: } ``` -### Set AMPLS access modes - PowerShell example -To set the access mode flags on your AMPLS, you can use the following PowerShell script. The following script sets the flags to Open. To use the Private Only mode, use the value "PrivateOnly". 
+### Set AMPLS access modes: PowerShell example +To set the access mode flags on your AMPLS, you can use the following PowerShell script. The following script sets the flags to `Open`. To use the Private Only mode, use the value `"PrivateOnly"`. -Allow ~10 minutes for the AMPLS access modes update to take effect. +Allow about 10 minutes for the AMPLS access modes update to take effect. ``` # scope details $scope | Set-AzResource -Force ### Set resource access flags To manage the workspace or component access flags, use the flags `[--ingestion-access {Disabled, Enabled}]` and `[--query-access {Disabled, Enabled}]` on [az monitor log-analytics workspace](/cli/azure/monitor/log-analytics/workspace) or [az monitor app-insights component](/cli/azure/monitor/app-insights/component). +## Review and validate your private link setup -## Review and validate your Private Link setup +Follow the steps in this section to review and validate your private link setup. -### Reviewing your Endpoint's DNS settings -The Private Endpoint you created should now have an five DNS zones configured: +### Review your endpoint's DNS settings +The private endpoint you created should now have five DNS zones configured: -* privatelink-monitor-azure-com -* privatelink-oms-opinsights-azure-com -* privatelink-ods-opinsights-azure-com -* privatelink-agentsvc-azure-automation-net -* privatelink-blob-core-windows-net +* `privatelink-monitor-azure-com` +* `privatelink-oms-opinsights-azure-com` +* `privatelink-ods-opinsights-azure-com` +* `privatelink-agentsvc-azure-automation-net` +* `privatelink-blob-core-windows-net` -> [!NOTE] -> Each of these zones maps specific Azure Monitor endpoints to private IPs from the VNet's pool of IPs. The IP addresses shown in the below images are only examples. Your configuration should instead show private IPs from your own network. +Each of these zones maps specific Azure Monitor endpoints to private IPs from the virtual network's pool of IPs. 
The IP addresses shown in the following images are only examples. Your configuration should instead show private IPs from your own network. > [!IMPORTANT]-> AMPLS and Private Endpoint resources created starting December 1, 2021, use a mechanism called Endpoint Compression. This means resource-specific endpoints (such as the OMS, ODS and AgentSVC endpoints) share the same IP address, per region and per DNS zone. This mechanism means less IPs are taken from the VNet's IP pool, and many more resources can be added to the AMPLS. +> AMPLS and private endpoint resources created starting December 1, 2021, use a mechanism called Endpoint Compression. Now resource-specific endpoints, such as the OMS, ODS, and AgentSVC endpoints, share the same IP address, per region and per DNS zone. This mechanism means fewer IPs are taken from the virtual network's IP pool, and many more resources can be added to the AMPLS. #### Privatelink-monitor-azure-com-This zone covers the global endpoints used by Azure Monitor, meaning endpoints that serve requests globally/regionally and not resource-specific requests. This zone should have endpoints mapped for: -* **in.ai** - Application Insights ingestion endpoint (both a global and a regional entry) -* **api** - Application Insights and Log Analytics API endpoint -* **live** - Application Insights live metrics endpoint -* **profiler** - Application Insights profiler endpoint -* **snapshot** - Application Insights snapshots endpoint -* **diagservices-query** - Application Insights Profiler and Snapshot Debugger (used when accessing profiler/debugger results in the Azure portal) +This zone covers the global endpoints used by Azure Monitor, which means endpoints serve requests globally/regionally and not resource-specific requests. 
This zone should have endpoints mapped for: -This zone also covers the resource specific endpoints for [Data Collection Endpoints](../essentials/data-collection-endpoint-overview.md): -* `<unique-dce-identifier>.<regionname>.handler.control` - Private configuration endpoint, part of a Data Collection Endpoint (DCE) resource -* `<unique-dce-identifier>.<regionname>.ingest` - Private ingestion endpoint, part of a Data Collection Endpoint (DCE) resource +* **in.ai**: Application Insights ingestion endpoint (both a global and a regional entry). +* **api**: Application Insights and Log Analytics API endpoint. +* **live**: Application Insights live metrics endpoint. +* **profiler**: Application Insights profiler endpoint. +* **snapshot**: Application Insights snapshot endpoint. +* **diagservices-query**: Application Insights Profiler and Snapshot Debugger (used when accessing profiler/debugger results in the Azure portal). -[](./media/private-link-security/dns-zone-privatelink-monitor-azure-com-expanded-with-endpoint.png#lightbox) +This zone also covers the resource-specific endpoints for [data collection endpoints (DCEs)](../essentials/data-collection-endpoint-overview.md): +* `<unique-dce-identifier>.<regionname>.handler.control`: Private configuration endpoint, part of a DCE resource. +* `<unique-dce-identifier>.<regionname>.ingest`: Private ingestion endpoint, part of a DCE resource. ++[](./media/private-link-security/dns-zone-privatelink-monitor-azure-com-expanded-with-endpoint.png#lightbox) #### Log Analytics endpoints > [!IMPORTANT]-> AMPLSs and Private Endpoints created starting December 1, 2021, use a mechanism called Endpoint Compression. This means each resource-specific endpoint (such as OMS, ODS and AgentSVC) now uses a single IP address, per region and per DNS zone, for all workspaces in that region. This mechanism means less IPs are taken from the VNet's IP pool, and many more resources can be added to the AMPLS. 
-Log Analytics uses 4 DNS zones: -* **privatelink-oms-opinsights-azure-com** - covers workspace-specific mapping to OMS endpoints. You should see an entry for each workspace linked to the AMPLS connected with this Private Endpoint. -* **privatelink-ods-opinsights-azure-com** - covers workspace-specific mapping to ODS endpoints - the ingestion endpoint of Log Analytics. You should see an entry for each workspace linked to the AMPLS connected with this Private Endpoint. -* **privatelink-agentsvc-azure-automation-net** - covers workspace-specific mapping to the agent service automation endpoints. You should see an entry for each workspace linked to the AMPLS connected with this Private Endpoint. -* **privatelink-blob-core-windows-net** - configures connectivity to the global agents' solution packs storage account. Through it, agents can download new or updated solution packs (also known as management packs). Only one entry is required to handle all Log Analytics agents, no matter how many workspaces are used. This entry is only added to Private Links setups created at or after April 19, 2021 (or starting June 2021, on Azure Sovereign clouds) +> AMPLSs and private endpoints created starting December 1, 2021, use a mechanism called Endpoint Compression. Now each resource-specific endpoint, such as OMS, ODS, and AgentSVC, uses a single IP address, per region and per DNS zone, for all workspaces in that region. This mechanism means fewer IPs are taken from the virtual network's IP pool, and many more resources can be added to the AMPLS. -The below screenshot shows endpoints mapped for an AMPLS with two workspaces in East US and one workspace in West Europe. Notice the East US workspaces share the IP addresses, while the West Europe workspace endpoint is mapped to a different IP address. (the blob endpoint isn't showing in this image, but is configured). 
+Log Analytics uses four DNS zones: -[](./media/private-link-security/dns-zone-privatelink-compressed-endpoints.png#lightbox) +* **privatelink-oms-opinsights-azure-com**: Covers workspace-specific mapping to OMS endpoints. You should see an entry for each workspace linked to the AMPLS connected with this private endpoint. +* **privatelink-ods-opinsights-azure-com**: Covers workspace-specific mapping to ODS endpoints, which are the ingestion endpoints of Log Analytics. You should see an entry for each workspace linked to the AMPLS connected with this private endpoint. +* **privatelink-agentsvc-azure-automation-net**: Covers workspace-specific mapping to the agent service automation endpoints. You should see an entry for each workspace linked to the AMPLS connected with this private endpoint. +* **privatelink-blob-core-windows-net**: Configures connectivity to the global agents' solution packs storage account. Through it, agents can download new or updated solution packs, which are also known as management packs. Only one entry is required to handle all Log Analytics agents, no matter how many workspaces are used. This entry is only added to private link setups created at or after April 19, 2021 (or starting June 2021 on Azure sovereign clouds). +The following screenshot shows endpoints mapped for an AMPLS with two workspaces in East US and one workspace in West Europe. Notice the East US workspaces share the IP addresses. The West Europe workspace endpoint is mapped to a different IP address. The blob endpoint doesn't appear in this image but it's configured. -### Validating you are communicating over a Private Link -* To validate your requests are now sent through the Private Endpoint, you can review them with a network tracking tool or even your browser. For example, when attempting to query your workspace or application, make sure the request is sent to the private IP mapped to the API endpoint, in this example it's *172.17.0.9*. 
+[](./media/private-link-security/dns-zone-privatelink-compressed-endpoints.png#lightbox) - Note: Some browsers may use other DNS settings (see [Browser DNS settings](./private-link-design.md#browser-dns-settings)). Make sure your DNS settings apply. +### Validate that you're communicating over a private link -* To make sure your workspace or component aren't receiving requests from public networks (not connected through AMPLS), set the resource's public ingestion and query flags to *No* as explained in [Configure access to your resources](#configure-access-to-your-resources). +Make sure that your private link is in good working order: -* From a client on your protected network, use `nslookup` to any of the endpoints listed in your DNS zones. It should be resolved by your DNS server to the mapped private IPs instead of the public IPs used by default. +* To validate that your requests are now sent through the private endpoint, you can review them with a network tracking tool or even your browser. For example, when you attempt to query your workspace or application, make sure the request is sent to the private IP mapped to the API endpoint. In this example, it's *172.17.0.9*. + > [!Note] + > Some browsers might use other DNS settings. For more information, see [Browser DNS settings](./private-link-design.md#browser-dns-settings). Make sure your DNS settings apply. ++* To make sure your workspaces or components aren't receiving requests from public networks (not connected through AMPLS), set the resource's public ingestion and query flags to **No** as explained in [Configure access to your resources](#configure-access-to-your-resources). +* From a client on your protected network, use `nslookup` to any of the endpoints listed in your DNS zones. It should be resolved by your DNS server to the mapped private IPs instead of the public IPs used by default. 
## Next steps -- Learn about [private storage](private-storage.md) for Custom Logs and Customer managed keys (CMK)-- Learn about [Private Link for Automation](../../automation/how-to/private-link-security.md)-- Learn about the new [Data Collection endpoints](../essentials/data-collection-endpoint-overview.md)+- Learn about [private storage](private-storage.md) for custom logs and customer-managed keys. +- Learn about [Private Link for Azure Automation](../../automation/how-to/private-link-security.md). +- Learn about the new [data collection endpoints](../essentials/data-collection-endpoint-overview.md). |
azure-monitor | Private Link Design | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-design.md | Title: Design your Private Link setup -description: Design your Private Link setup + Title: Design your Azure Private Link setup +description: This article shows how to design your Azure Private Link setup. Last updated 12/14/2022 -# Design your Private Link setup +# Design your Azure Private Link setup -Before you set up your Azure Monitor Private Link, consider your network topology, and specifically your DNS routing topology. +Before you set up your instance of Azure Private Link, consider your network topology and your DNS routing topology. -As discussed in the [Azure Monitor Private Link overview article](private-link-security.md), setting up a Private Link affects traffic to all Azure Monitor resources. That's especially true for Application Insights resources. Additionally, it affects not only the network connected to the Private Endpoint but also all other networks sharing the same DNS. +As discussed in [Use Azure Private Link to connect networks to Azure Monitor](private-link-security.md), setting up a private link affects traffic to all Azure Monitor resources. That's especially true for Application Insights resources. It also affects not only the network connected to the private endpoint but also all other networks that share the same DNS. -The simplest and most secure approach would be: -1. Create a single Private Link connection, with a single Private Endpoint and a single AMPLS. If your networks are peered, create the Private Link connection on the shared (or hub) VNet. -2. Add *all* Azure Monitor resources (Application Insights components, Log Analytics workspaces and [Data Collection endpoints](../essentials/data-collection-endpoint-overview.md)) to that AMPLS. -3. Block network egress traffic as much as possible. +The simplest and most secure approach: +1. 
Create a single private link connection, with a single private endpoint and a single Azure Monitor Private Link Scope (AMPLS). If your networks are peered, create the private link connection on the shared (or hub) virtual network. +1. Add *all* Azure Monitor resources like Application Insights components, Log Analytics workspaces, and [data collection endpoints](../essentials/data-collection-endpoint-overview.md) to the AMPLS. +1. Block network egress traffic as much as possible. -If you can't add all Azure Monitor resources to your AMPLS, you can still apply your Private Link to some resources, as explained in [Control how Private Links apply to your networks](./private-link-design.md#control-how-private-links-apply-to-your-networks). While useful, this approach is less recommended since it doesn't prevent data exfiltration. +If you can't add all Azure Monitor resources to your AMPLS, you can still apply your private link to some resources, as explained in [Control how private links apply to your networks](./private-link-design.md#control-how-private-links-apply-to-your-networks). We don't recommend this approach because it doesn't prevent data exfiltration. ## Plan by network topology +Consider network topology in your planning process. + ### Guiding principle: Avoid DNS overrides by using a single AMPLS-Some networks are composed of multiple VNets or other connected networks. If these networks share the same DNS, setting up a Private Link on any of them would update the DNS and affect traffic across all networks. +Some networks are composed of multiple virtual networks or other connected networks. If these networks share the same DNS, setting up a private link on any of them would update the DNS and affect traffic across all networks. -In the below diagram, VNet 10.0.1.x connects to AMPLS1 which creates DNS entries mapping Azure Monitor endpoints to IPs from range 10.0.1.x. 
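+As a hedged sketch of step 2, resources can also be added to an AMPLS from the command line with `az monitor private-link-scope scoped-resource create`. The resource names below reuse the examples from this article, and the subscription ID is a placeholder you must substitute:

```
# Connect a Log Analytics workspace to the AMPLS (example names).
az monitor private-link-scope scoped-resource create \
    --resource-group "my-resource-group" \
    --scope-name "my-scope" \
    --name "my-workspace-connection" \
    --linked-resource "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace"
```

+Repeat the command with the appropriate resource ID for each Application Insights component or data collection endpoint you want in the scope.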
Later, VNet 10.0.2.x connects to AMPLS2, which overrides the same DNS entries by mapping **the same global/regional endpoints** to IPs from the range 10.0.2.x. Since these VNets aren't peered, the first VNet now fails to reach these endpoints. +In the following diagram, virtual network 10.0.1.x connects to AMPLS1, which creates DNS entries that map Azure Monitor endpoints to IPs from range 10.0.1.x. Later, virtual network 10.0.2.x connects to AMPLS2, which overrides the same DNS entries by mapping *the same global/regional endpoints* to IPs from the range 10.0.2.x. Because these virtual networks aren't peered, the first virtual network now fails to reach these endpoints. To avoid this conflict, create only a single AMPLS object per DNS. - -+ ### Hub-and-spoke networks-Hub-and-spoke networks should use a single Private Link connection set on the hub (main) network, and not on each spoke VNet. +Hub-and-spoke networks should use a single private link connection set on the hub (main) network, and not on each spoke virtual network. - + > [!NOTE]-> You may intentionally prefer to create separate Private Links for your spoke VNets, for example to allow each VNet to access a limited set of monitoring resources. In such cases, you can create a dedicated Private Endpoint and AMPLS for each VNet, but **must also verify they don't share the same DNS zones in order to avoid DNS overrides**. +> You might prefer to create separate private links for your spoke virtual networks, for example, to allow each virtual network to access a limited set of monitoring resources. In such cases, you can create a dedicated private endpoint and AMPLS for each virtual network. *You must also verify they don't share the same DNS zones to avoid DNS overrides*. ### Peered networks-Network peering is used in various topologies, other than hub-spoke. Such networks can share reach each others' IP addresses, and most likely share the same DNS. 
In such cases, our recommendation is once again to create a single Private Link on a network that's accessible to your other networks. Avoid creating multiple Private Endpoints and AMPLS objects, since ultimately only the last one set in the DNS applies. +Network peering is used in various topologies, other than hub and spoke. Such networks can share each other's IP addresses, and most likely share the same DNS. In such cases, create a single private link on a network that's accessible to your other networks. Avoid creating multiple private endpoints and AMPLS objects because ultimately only the last one set in the DNS applies. ### Isolated networks-If your networks aren't peered, **you must also separate their DNS in order to use Private Links**. After that's done, create a separate Private Endpoint for each network, and a separate AMPLS object. Your AMPLS objects can link to the same workspaces/components, or to different ones. +If your networks aren't peered, *you must also separate their DNS to use private links*. After that's done, create a separate private endpoint for each network, and a separate AMPLS object. Your AMPLS objects can link to the same workspaces/components or to different ones. ++### Testing locally: Edit your machine's hosts file instead of the DNS +To test private links locally without affecting other clients on your network, make sure not to update your DNS when you create your private endpoint. Instead, edit the hosts file on your machine so that it will send requests to the private link endpoints: -### Testing locally: Edit your machine's hosts file instead of the DNS -To test Private Links locally without affecting other clients on your network, make sure Not to update your DNS when you create your Private Endpoint. Instead, edit the hosts file on your machine so it will send requests to the Private Link endpoints: -* Set up a Private Link, but when connecting to a Private Endpoint choose **not** to auto-integrate with the DNS (step 5b). 
-* Configure the relevant endpoints on your machines' hosts files. To review the Azure Monitor endpoints that need mapping, see [Reviewing your Endpoint's DNS settings](./private-link-configure.md#reviewing-your-endpoints-dns-settings). +* Set up a private link, but when you connect to a private endpoint, choose *not* to auto-integrate with the DNS (step 5b). +* Configure the relevant endpoints on your machines' hosts files. To review the Azure Monitor endpoints that need mapping, see [Review your endpoint's DNS settings](./private-link-configure.md#review-your-endpoints-dns-settings). -That approach isn't recommended for production environments. +We don't recommend that approach for production environments. -## Control how Private Links apply to your networks -Private Link access modes allow you to control how Private Links affect your network traffic. These settings can apply to your AMPLS object (to affect all connected networks) or to specific networks connected to it. +## Control how private links apply to your networks +By using private link access modes, you can control how private links affect your network traffic. These settings can apply to your AMPLS object (to affect all connected networks) or to specific networks connected to it. Choosing the proper access mode is critical to ensuring continuous, uninterrupted network traffic. Each of these modes can be set for ingestion and queries, separately: -* Private Only - allows the VNet to reach only Private Link resources (resources in the AMPLS). That's the most secure mode of work, preventing data exfiltration. To achieve that, traffic to Azure Monitor resources out of the AMPLS is blocked. - -* Open - allows the VNet to reach both Private Link resources and resources not in the AMPLS (if they [accept traffic from public networks](./private-link-design.md#control-network-access-to-your-resources)). 
While the Open access mode doesn't prevent data exfiltration, it still offers the other benefits of Private Links - traffic to Private Link resources is sent through private endpoints, validated, and sent over the Microsoft backbone. The Open mode is useful for a mixed mode of work (accessing some resources publicly and others over a Private Link), or during a gradual onboarding process. - +* **Private Only**: Allows the virtual network to reach only private link resources (resources in the AMPLS). That's the most secure mode of work. It prevents data exfiltration by blocking traffic out of the AMPLS to Azure Monitor resources. + +* **Open**: Allows the virtual network to reach both private link resources and resources not in the AMPLS (if they [accept traffic from public networks](./private-link-design.md#control-network-access-to-your-resources)). The Open access mode doesn't prevent data exfiltration, but it still offers the other benefits of private links. Traffic to private link resources is sent through private endpoints, validated, and sent over the Microsoft backbone. The Open mode is useful for a mixed mode of work (accessing some resources publicly and others over a private link) or during a gradual onboarding process. + Access modes are set separately for ingestion and queries. For example, you can set the Private Only mode for ingestion and the Open mode for queries. -Apply caution when selecting your access mode. Using the Private Only access mode will block traffic to resources not in the AMPLS across all networks that share the same DNS, regardless of subscription or tenant (with the exception of Log Analytics ingestion requests, as explained below). If you can't add all Azure Monitor resources to the AMPLS, start with by adding select resources and applying the Open access mode. Only after adding *all* Azure Monitor resources to your AMPLS, switch to the 'Private Only' mode for maximum security. +Apply caution when you select your access mode. 
Using the Private Only access mode will block traffic to resources not in the AMPLS across all networks that share the same DNS, regardless of subscription or tenant. The exception is Log Analytics ingestion requests, which are explained in the following note. If you can't add all Azure Monitor resources to the AMPLS, start by adding select resources and applying the Open access mode. Switch to the Private Only mode for maximum security *only after you've added all Azure Monitor resources to your AMPLS*. -See [Use APIs and command line](./private-link-configure.md#use-apis-and-command-line) for configuration details and examples. +For configuration details and examples, see [Use APIs and the command line](./private-link-configure.md#use-apis-and-the-command-line). > [!NOTE]-> Log Analytics ingestion uses resource-specific endpoints. As such, it doesn't adhere to AMPLS access modes. **To assure Log Analytics ingestion requests can't access workspaces out of the AMPLS, set the network firewall to block traffic to public endpoints, regardless of the AMPLS access modes**. +Log Analytics ingestion uses resource-specific endpoints. As such, it doesn't adhere to AMPLS access modes. To assure Log Analytics ingestion requests can't access workspaces out of the AMPLS, set the network firewall to block traffic to public endpoints, regardless of the AMPLS access modes. -### Setting access modes for specific networks +### Set access modes for specific networks The access modes set on the AMPLS resource affect all networks, but you can override these settings for specific networks. -In the following diagram, VNet1 uses the Open mode and VNet2 uses the Private Only mode. As a result, requests from VNet1 can reach Workspace1 and Component2 over a Private Link, and Component3 not over a Private Link (if it [accepts traffic from public networks](./private-link-design.md#control-network-access-to-your-resources)). However, VNet2 requests won't be able to reach Component3.
- -+In the following diagram, VNet1 uses the Open mode and VNet2 uses the Private Only mode. Requests from VNet1 can reach Workspace 1 and Component 2 over a private link. Requests can reach Component 3 only if it [accepts traffic from public networks](./private-link-design.md#control-network-access-to-your-resources). VNet2 requests can't reach Component 3. + ## Consider AMPLS limits The AMPLS object has the following limits:-* A VNet can only connect to **one** AMPLS object. That means the AMPLS object must provide access to all the Azure Monitor resources the VNet should have access to. -* An AMPLS object can connect to 300 Log Analytics workspaces and 1000 Application Insights components at most. -* An Azure Monitor resource (Workspace or Application Insights component or [Data Collection Endpoint](../essentials/data-collection-endpoint-overview.md)) can connect to 5 AMPLSs at most. -* An AMPLS object can connect to 10 Private Endpoints at most. +* A virtual network can connect to only *one* AMPLS object. That means the AMPLS object must provide access to all the Azure Monitor resources to which the virtual network should have access. +* An AMPLS object can connect to 300 Log Analytics workspaces and 1,000 Application Insights components at most. +* An Azure Monitor resource (workspace or Application Insights component or [data collection endpoint](../essentials/data-collection-endpoint-overview.md)) can connect to five AMPLSs at most. +* An AMPLS object can connect to 10 private endpoints at most. > [!NOTE] > AMPLS resources created before December 1, 2021, support only 50 resources. -In the below diagram: -* Each VNet connects to only **one** AMPLS object. -* AMPLS A connects to two workspaces and one Application Insight component, using 2 of the possible 300 Log Analytics workspaces and 1 of the possible 1000 Application Insights components it can connect to. -* Workspace2 connects to AMPLS A and AMPLS B, using two of the five possible AMPLS connections. 
-* AMPLS B is connected to Private Endpoints of two VNets (VNet2 and VNet3), using two of the 10 possible Private Endpoint connections. -- +In the following diagram: +* Each virtual network connects to only *one* AMPLS object. +* AMPLS A connects to two workspaces and one Application Insights component by using two of the possible 300 Log Analytics workspaces and one of the possible 1,000 Application Insights components it can connect to. +* Workspace 2 connects to AMPLS A and AMPLS B by using two of the five possible AMPLS connections. +* AMPLS B is connected to private endpoints of two virtual networks (VNet2 and VNet3) by using two of the 10 possible private endpoint connections. + ## Control network access to your resources Your Log Analytics workspaces or Application Insights components can be set to: * Accept or block ingestion from public networks (networks not connected to the resource AMPLS). * Accept or block queries from public networks (networks not connected to the resource AMPLS). -That granularity allows you to set access according to your needs, per workspace. For example, you may accept ingestion only through Private Link connected networks (meaning specific VNets), but still choose to accept queries from all networks, public and private. +That granularity allows you to set access according to your needs, per workspace. For example, you might accept ingestion only through private link-connected networks (meaning specific virtual networks) but still choose to accept queries from all networks, public and private. -Blocking queries from public networks means clients (machines, SDKs etc.) outside of the connected AMPLSs can't query data in the resource. That data includes logs, metrics, and the live metrics stream. Blocking queries from public networks affects all experiences that run these queries, such as workbooks, dashboards, Insights in the Azure portal, and queries run from outside the Azure portal.
+Blocking queries from public networks means clients like machines and SDKs outside of the connected AMPLSs can't query data in the resource. That data includes logs, metrics, and the live metrics stream. Blocking queries from public networks affects all experiences that run these queries, such as workbooks, dashboards, insights in the Azure portal, and queries run from outside the Azure portal. -Your [Data Collection endpoints](../essentials/data-collection-endpoint-overview.md) can be set to: -* Accept or block access from public networks (networks not connected to the resource AMPLS). +Your [data collection endpoints](../essentials/data-collection-endpoint-overview.md) can be set to accept or block access from public networks (networks not connected to the resource AMPLS). -See [Set resource access flags](./private-link-configure.md#set-resource-access-flags) for configuration details. +For configuration information, see [Set resource access flags](./private-link-configure.md#set-resource-access-flags). ### Exceptions +Note the following exceptions. + #### Diagnostic logs-Logs and metrics uploaded to a workspace via [Diagnostic Settings](../essentials/diagnostic-settings.md) go over a secure private Microsoft channel and are not controlled by these settings. +Logs and metrics uploaded to a workspace via [diagnostic settings](../essentials/diagnostic-settings.md) go over a secure private Microsoft channel and aren't controlled by these settings. -#### 'Custom Metrics' or Azure Monitor guest metrics -[Custom Metrics (preview)](../essentials/metrics-custom-overview.md) collected and uploaded via the Azure Monitor Agent are not controlled by Data Collection endpoints nor can they be configured over private links. +#### Custom metrics or Azure Monitor guest metrics +[Custom metrics (preview)](../essentials/metrics-custom-overview.md) collected and uploaded via Azure Monitor Agent aren't controlled by data collection endpoints. 
They can't be configured over private links. #### Azure Resource Manager-Restricting access as explained above applies to data in the resource. However, configuration changes, including turning these access settings on or off, are managed by Azure Resource Manager. To control these settings, you should restrict access to resources using the appropriate roles, permissions, network controls, and auditing. For more information, see [Azure Monitor Roles, Permissions, and Security](../roles-permissions-security.md) +Restricting access as previously explained applies to data in the resource. However, configuration changes like turning these access settings on or off are managed by Azure Resource Manager. To control these settings, restrict access to resources by using the appropriate roles, permissions, network controls, and auditing. For more information, see [Azure Monitor roles, permissions, and security](../roles-permissions-security.md). > [!NOTE]-> Queries sent through the Azure Resource Management (ARM) API can't use Azure Monitor Private Links. These queries can only go through if the target resource allows queries from public networks (set through the Network Isolation pane, or [using the CLI](./private-link-configure.md#set-resource-access-flags)). +> Queries sent through the Resource Manager API can't use Azure Monitor private links. These queries can only go through if the target resource allows queries from public networks (set through the **Network Isolation** pane or [by using the CLI](./private-link-configure.md#set-resource-access-flags)). 
>-> The following experiences are known to run queries through the ARM API: +> The following experiences are known to run queries through the Resource Manager API: > * LogicApp connector > * Update Management solution > * Change Tracking solution > * VM Insights > * Container Insights-> * Log Analytics' Workspace Summary pane (showing the solutions dashboard) +> * Log Analytics **Workspace Summary** pane (that shows the solutions dashboard) ## Application Insights considerations-* You'll need to add resources hosting the monitored workloads to a private link. For example, see [Using Private Endpoints for Azure Web App](../../app-service/networking/private-endpoint.md). -* Non-portal consumption experiences must also run on the private-linked VNET that includes the monitored workloads. -* In order to support Private Links for Profiler and Debugger, you'll need to [provide your own storage account](../app/profiler-bring-your-own-storage.md) +* You'll need to add resources hosting the monitored workloads to a private link. For example, see [Using private endpoints for Azure Web App](../../app-service/networking/private-endpoint.md). +* Non-portal consumption experiences must also run on the private-linked virtual network that includes the monitored workloads. +* To support private links for the Profiler and Debugger, you'll need to [provide your own storage account](../app/profiler-bring-your-own-storage.md). > [!NOTE]-> To fully secure workspace-based Application Insights, you need to lock down both access to Application Insights resource as well as the underlying Log Analytics workspace. +> To fully secure workspace-based Application Insights, you need to lock down access to the Application Insights resource and the underlying Log Analytics workspace. ## Log Analytics considerations +Note the following Log Analytics considerations. + ### Log Analytics solution packs download-Log Analytics agents need to access a global storage account to download solution packs.
Private Link setups created at or after April 19, 2021 (or starting June 2021 on Azure Sovereign clouds) can reach the agents' solution packs storage over the private link. This capability is made possible through a DNS zone created for 'blob.core.windows.net'. +Log Analytics agents need to access a global storage account to download solution packs. Private link setups created at or after April 19, 2021 (or starting June 2021 on Azure sovereign clouds) can reach the agents' solution packs storage over the private link. This capability is made possible through a DNS zone created for `blob.core.windows.net`. -If your Private Link setup was created before April 19, 2021, it won't reach the solution packs storage over a private link. To handle that you can either: -* Re-create your AMPLS and the Private Endpoint connected to it -* Allow your agents to reach the storage account through its public endpoint, by adding the following rules to your firewall allowlist: +If your private link setup was created before April 19, 2021, it won't reach the solution packs storage over a private link. To handle that, you can either: +* Re-create your AMPLS and the private endpoint connected to it. +* Allow your agents to reach the storage account through its public endpoint by adding the following rules to your firewall allowlist: - | Cloud environment | Agent Resource | Ports | Direction | + | Cloud environment | Agent resource | Ports | Direction | |:--|:--|:--|:--| |Azure Public | scadvisorcontent.blob.core.windows.net | 443 | Outbound |Azure Government | usbn1oicore.blob.core.usgovcloudapi.net | 443 | Outbound |Azure China 21Vianet | mceast2oicore.blob.core.chinacloudapi.cn| 443 | Outbound -### Collecting custom logs and IIS log over Private Link -Storage accounts are used in the ingestion process of custom logs. By default, service-managed storage accounts are used. 
However, to ingest custom logs on private links, you must use your own storage accounts and associate them with Log Analytics workspace(s). +### Collect custom logs and IIS log over a private link +Storage accounts are used in the ingestion process of custom logs. By default, service-managed storage accounts are used. To ingest custom logs on private links, you must use your own storage accounts and associate them with Log Analytics workspaces. -For more information on connecting your own storage account, see [Customer-owned storage accounts for log ingestion](private-storage.md) and specifically [Use Private Links](private-storage.md#use-private-links) and [Link storage accounts to your Log Analytics workspace](private-storage.md#link-storage-accounts-to-your-log-analytics-workspace). +For more information on how to connect your own storage account, see [Customer-owned storage accounts for log ingestion](private-storage.md) and specifically [Use private links](private-storage.md#use-private-links) and [Link storage accounts to your Log Analytics workspace](private-storage.md#link-storage-accounts-to-your-log-analytics-workspace). ### Automation-If you use Log Analytics solutions that require an Automation account (such as Update Management, Change Tracking, or Inventory) you should also create a Private Link for your Automation account. For more information, see [Use Azure Private Link to securely connect networks to Azure Automation](../../automation/how-to/private-link-security.md). +If you use Log Analytics solutions that require an Azure Automation account (such as Update Management, Change Tracking, or Inventory), you should also create a private link for your Automation account. For more information, see [Use Azure Private Link to securely connect networks to Azure Automation](../../automation/how-to/private-link-security.md). 
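The customer-owned storage association for custom-log ingestion described above can be scripted. The following is a sketch only: the resource names (`myRG`, `myWorkspace`, `mystorageacct`) are hypothetical, and parameter names in the `az monitor log-analytics workspace linked-storage` command group can vary between Azure CLI versions, so the command is printed for review rather than executed.

```shell
# Sketch: link a customer-owned storage account to a Log Analytics
# workspace for custom-log ingestion. All resource names are hypothetical,
# and flag names may differ across Azure CLI versions; the command is
# echoed for inspection rather than run against a subscription.
cmd="az monitor log-analytics workspace linked-storage create \
  --resource-group myRG \
  --workspace-name myWorkspace \
  --type CustomLogs \
  --storage-accounts mystorageacct"
echo "$cmd"
```

After verifying the names and flags against your installed CLI version, run the printed command directly.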
> [!NOTE]-> Some products and Azure portal experiences query data through Azure Resource Manager and therefore won't be able to query data over a Private Link, unless Private Link settings are applied to the Resource Manager as well. To overcome this, you can configure your resources to accept queries from public networks as explained in [Controlling network access to your resources](./private-link-design.md#control-network-access-to-your-resources) (Ingestion can remain limited to Private Link networks). -We've identified the following products and experiences query workspaces through Azure Resource Manager: +> Some products and Azure portal experiences query data through Resource Manager. In this case, they won't be able to query data over a private link unless private link settings are applied to Resource Manager too. To overcome this restriction, you can configure your resources to accept queries from public networks as explained in [Controlling network access to your resources](./private-link-design.md#control-network-access-to-your-resources). (Ingestion can remain limited to private link networks.) +We've identified the following products and experiences that query workspaces through Resource Manager: > * LogicApp connector > * Update Management solution > * Change Tracking solution-> * The Workspace Summary pane in the portal (showing the solutions dashboard) +> * The **Workspace Summary** pane in the portal (that shows the solutions dashboard) > * VM Insights > * Container Insights -- ## Requirements -### Network subnet size -The smallest supported IPv4 subnet is /27 (using CIDR subnet definitions).
While Azure VNets [can be as small as /29](../../virtual-network/virtual-networks-faq.md#how-small-and-how-large-can-vnets-and-subnets-be), Azure [reserves 5 IP addresses](../../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets) and the Azure Monitor Private Link setup requires at least 11 additional IP addresses, even if connecting to a single workspace. [Review your endpoint's DNS settings](./private-link-configure.md#reviewing-your-endpoints-dns-settings) for the detailed list of Azure Monitor Private Link endpoints. +Note the following requirements. +### Network subnet size +The smallest supported IPv4 subnet is /27 (using CIDR subnet definitions). Although Azure virtual networks [can be as small as /29](../../virtual-network/virtual-networks-faq.md#how-small-and-how-large-can-vnets-and-subnets-be), Azure [reserves five IP addresses](../../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets). The Azure Monitor private link setup requires at least 11 more IP addresses, even if you're connecting to a single workspace. [Review your endpoint's DNS settings](./private-link-configure.md#review-your-endpoints-dns-settings) for the list of Azure Monitor private link endpoints. ### Agents The latest versions of the Windows and Linux agents must be used to support secure ingestion to Log Analytics workspaces. Older versions can't upload monitoring data over a private network. -**Azure Monitor Windows agents** +#### Azure Monitor Windows agents -Azure Monitor Windows agent version 1.1.1.0 or higher (using [Data Collection endpoints](../essentials/data-collection-endpoint-overview.md)) +Azure Monitor Windows agent version 1.1.1.0 or higher (by using [data collection endpoints](../essentials/data-collection-endpoint-overview.md)). 
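The /27 subnet-size requirement under **Network subnet size** can be sanity-checked with quick arithmetic: a /27 holds 2^(32 - 27) = 32 addresses, Azure reserves 5 of them, and the remaining 27 comfortably cover the 11 or more addresses the Azure Monitor private link setup needs.

```shell
# Sanity-check the /27 subnet-size requirement: a /27 holds 2^(32-27)
# addresses (computed via bit shift), Azure reserves 5 per subnet, and
# the Azure Monitor private link setup needs at least 11 of the rest.
prefix=27
total=$((1 << (32 - prefix)))
usable=$((total - 5))
echo "/$prefix -> $total addresses, $usable usable after Azure's reservation"
```

The same arithmetic shows why a /29 (8 addresses, 3 usable after reservation) is too small for this setup.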
-**Azure Monitor Linux agents** +#### Azure Monitor Linux agents -Azure Monitor Windows agent version 1.10.5.0 or higher (using [Data Collection endpoints](../essentials/data-collection-endpoint-overview.md)) +Azure Monitor Linux agent version 1.10.5.0 or higher (by using [data collection endpoints](../essentials/data-collection-endpoint-overview.md)). -**Log Analytics Windows agent (on deprecation path)** +#### Log Analytics Windows agent (on deprecation path) Use the Log Analytics agent version 10.20.18038.0 or later. -**Log Analytics Linux agent (on deprecation path)** +#### Log Analytics Linux agent (on deprecation path) -Use agent version 1.12.25 or later. If you can't, run the following commands on your VM. +Use agent version 1.12.25 or later. If you can't, run the following commands on your VM: ```cmd $ sudo /opt/microsoft/omsagent/bin/omsadmin.sh -X $ sudo /opt/microsoft/omsagent/bin/omsadmin.sh -w <workspace id> -s <workspace key> ```+ ### Azure portal-To use Azure Monitor portal experiences such as Application Insights, Log Analytics and [Data Collection endpoints](../essentials/data-collection-endpoint-overview.md), you need to allow the Azure portal and Azure Monitor extensions to be accessible on the private networks. Add **AzureActiveDirectory**, **AzureResourceManager**, **AzureFrontDoor.FirstParty**, and **AzureFrontdoor.Frontend** [service tags](../../firewall/service-tags.md) to your Network Security Group. +To use Azure Monitor portal experiences such as Application Insights, Log Analytics, and [data collection endpoints](../essentials/data-collection-endpoint-overview.md), you need to allow the Azure portal and Azure Monitor extensions to be accessible on the private networks. Add **AzureActiveDirectory**, **AzureResourceManager**, **AzureFrontDoor.FirstParty**, and **AzureFrontdoor.Frontend** [service tags](../../firewall/service-tags.md) to your network security group.
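The service tags listed under **Azure portal** can be turned into outbound network security group rules from the command line. A sketch, assuming hypothetical resource names (`myRG`, `myNSG`): it prints one `az network nsg rule create` command per tag so the commands can be reviewed before running them against a real subscription.

```shell
# Print one outbound-allow NSG rule command per required service tag.
# Resource names (myRG, myNSG) are hypothetical; review the output
# before running it against a real subscription.
priority=100
for tag in AzureActiveDirectory AzureResourceManager \
           AzureFrontDoor.FirstParty AzureFrontdoor.Frontend; do
  echo "az network nsg rule create --resource-group myRG --nsg-name myNSG" \
       "--name Allow-$priority --priority $priority --direction Outbound" \
       "--access Allow --protocol Tcp --destination-port-ranges 443" \
       "--destination-address-prefixes $tag"
  priority=$((priority + 1))
done
```

One rule per tag is used here because an NSG rule accepts a service tag as its address prefix; adjust the starting priority so it doesn't collide with existing rules.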
### Programmatic access-To use the REST API, [CLI](/cli/azure/monitor) or PowerShell with Azure Monitor on private networks, add the [service tags](../../virtual-network/service-tags-overview.md) **AzureActiveDirectory** and **AzureResourceManager** to your firewall. +To use the REST API, the Azure [CLI](/cli/azure/monitor), or PowerShell with Azure Monitor on private networks, add the [service tags](../../virtual-network/service-tags-overview.md) **AzureActiveDirectory** and **AzureResourceManager** to your firewall. ### Application Insights SDK downloads from a content delivery network-Bundle the JavaScript code in your script so that the browser doesn't attempt to download code from a CDN. An example is provided on [GitHub](https://github.com/microsoft/ApplicationInsights-JS#npm-setup-ignore-if-using-snippet-setup) +Bundle the JavaScript code in your script so that the browser doesn't attempt to download code from a CDN. An example is provided on [GitHub](https://github.com/microsoft/ApplicationInsights-JS#npm-setup-ignore-if-using-snippet-setup). ### Browser DNS settings-If you're connecting to your Azure Monitor resources over a Private Link, traffic to these resources must go through the private endpoint that is configured on your network. To enable the private endpoint, update your DNS settings as explained in [Connect to a private endpoint](./private-link-configure.md#connect-to-a-private-endpoint). Some browsers use their own DNS settings instead of the ones you set. The browser might attempt to connect to Azure Monitor public endpoints and bypass the Private Link entirely. Verify that your browsers settings don't override or cache old DNS settings. +If you're connecting to your Azure Monitor resources over a private link, traffic to these resources must go through the private endpoint that's configured on your network. 
To enable the private endpoint, update your DNS settings as explained in [Connect to a private endpoint](./private-link-configure.md#connect-to-a-private-endpoint). Some browsers use their own DNS settings instead of the ones you set. The browser might attempt to connect to Azure Monitor public endpoints and bypass the private link entirely. Verify that your browser settings don't override or cache old DNS settings. ### Querying limitation: externaldata operator-The [`externaldata` operator](/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuremonitor) isn't supported over a Private Link, as it reads data from storage accounts but doesn't guarantee the storage is accessed privately. +The [`externaldata` operator](/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuremonitor) isn't supported over a private link because it reads data from storage accounts but doesn't guarantee the storage is accessed privately. ## Next steps-- Learn how to [configure your Private Link](private-link-configure.md)-- Learn about [private storage](private-storage.md) for Custom Logs and Customer managed keys (CMK)-- Learn about [Private Link for Automation](../../automation/how-to/private-link-security.md)+- Learn how to [configure your private link](private-link-configure.md). +- Learn about [private storage](private-storage.md) for custom logs and customer-managed keys. +- Learn about [Private Link for Automation](../../automation/how-to/private-link-security.md). |
azure-monitor | Private Link Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-security.md | Last updated 1/5/2022 # Use Azure Private Link to connect networks to Azure Monitor -With [Azure Private Link](../../private-link/private-link-overview.md), you can securely link Azure platform as a service (PaaS) resources to your virtual network by using private endpoints. Azure Monitor is a constellation of different interconnected services that work together to monitor your workloads. An Azure Monitor Private Link connects a private endpoint to a set of Azure Monitor resources, defining the boundaries of your monitoring network. That set is called an Azure Monitor Private Link Scope (AMPLS). +With [Azure Private Link](../../private-link/private-link-overview.md), you can securely link Azure platform as a service (PaaS) resources to your virtual network by using private endpoints. Azure Monitor is a constellation of different interconnected services that work together to monitor your workloads. An Azure Monitor private link connects a private endpoint to a set of Azure Monitor resources to define the boundaries of your monitoring network. That set is called an Azure Monitor Private Link Scope (AMPLS). > [!NOTE]-> Azure Monitor Private Links are structured differently from Private Links to other services you may use. Instead of creating multiple Private Links, one for each resource the VNet connects to, Azure Monitor uses a single Private Link connection, from the VNet to an Azure Monitor Private Link Scope (AMPLS). AMPLS is the set of all Azure Monitor resources to which VNet connects through a Private Link. -+> Azure Monitor private links are structured differently from private links to other services you might use. Instead of creating multiple private links, one for each resource the virtual network connects to, Azure Monitor uses a single private link connection, from the virtual network to an AMPLS. 
AMPLS is the set of all Azure Monitor resources to which a virtual network connects through a private link. ## Advantages With Private Link you can: -- Connect privately to Azure Monitor without opening up any public network access-- Ensure your monitoring data is only accessed through authorized private networks-- Prevent data exfiltration from your private networks by defining specific Azure Monitor resources that connect through your private endpoint-- Securely connect your private on-premises network to Azure Monitor using ExpressRoute and Private Link-- Keep all traffic inside the Microsoft Azure backbone network+- Connect privately to Azure Monitor without opening up any public network access. +- Ensure your monitoring data is only accessed through authorized private networks. +- Prevent data exfiltration from your private networks by defining specific Azure Monitor resources that connect through your private endpoint. +- Securely connect your private on-premises network to Azure Monitor by using Azure ExpressRoute and Private Link. +- Keep all traffic inside the Azure backbone network. -For more information, see [Key Benefits of Private Link](../../private-link/private-link-overview.md#key-benefits). +For more information, see [Key benefits of Private Link](../../private-link/private-link-overview.md#key-benefits). -## How it works: main principles -An Azure Monitor Private Link connects a Private Endpoint to a set of Azure Monitor resources - Log Analytics workspaces and Application Insights resources. That set is called an Azure Monitor Private Link Scope (AMPLS). +## How it works: Main principles +An Azure Monitor private link connects a private endpoint to a set of Azure Monitor resources made up of Log Analytics workspaces and Application Insights resources. That set is called an Azure Monitor Private Link Scope. 
- + -* Using Private IPs - the Private Endpoint on your VNet allows it to reach Azure Monitor endpoints through private IPs from your network's pool, instead of using to the public IPs of these endpoints. That allows you to keep using your Azure Monitor resources without opening your VNet to unrequired outbound traffic. -* Running on the Azure backbone - traffic from the Private Endpoint to your Azure Monitor resources will go over the Microsoft Azure backbone, and not routed to public networks. -* Control which Azure Monitor resources can be reached - configure your Azure Monitor Private Link Scope to your preferred access mode - either allowing traffic only to Private Link resources, or to both Private Link and non-Private-Link resources (resources out of the AMPLS). -* Control networks access to your Azure Monitor resources - configure each of your workspaces or components to accept or block traffic from public networks. You can apply different settings for ingestion and query requests. +An AMPLS: +* **Uses private IPs**: The private endpoint on your virtual network allows it to reach Azure Monitor endpoints through private IPs from your network's pool, instead of using the public IPs of these endpoints. For this reason, you can keep using your Azure Monitor resources without opening your virtual network to unrequired outbound traffic. +* **Runs on the Azure backbone**: Traffic from the private endpoint to your Azure Monitor resources will go over the Azure backbone and not be routed to public networks. +* **Controls which Azure Monitor resources can be reached**: Configure your AMPLS to your preferred access mode. You can either allow traffic only to Private Link resources or to both Private Link and non-Private-Link resources (resources out of the AMPLS). +* **Controls network access to your Azure Monitor resources**: Configure each of your workspaces or components to accept or block traffic from public networks. 
You can apply different settings for ingestion and query requests. -## Azure Monitor Private Links rely on your DNS -When you set up a Private Link connection, your DNS zones map Azure Monitor endpoints to private IPs in order to send traffic through the Private Link. Azure Monitor uses both resource-specific endpoints and shared global / regional endpoints to reach the workspaces and components in your AMPLS. +## Azure Monitor private links rely on your DNS +When you set up a private link connection, your DNS zones map Azure Monitor endpoints to private IPs to send traffic through the private link. Azure Monitor uses both resource-specific endpoints and shared global/regional endpoints to reach the workspaces and components in your AMPLS. > [!WARNING]-> Because Azure Monitor uses some shared endpoints (meaning endpoints that are not resource-specific), setting up a Private Link even for a single resource changes the DNS configuration affecting traffic to **all resources**. In other words, traffic to all workspaces or components are affected by a single Private Link setup. Read the below for more details. --The use of shared endpoints also means you should use a single AMPLS for all networks that share the same DNS. Creating multiple AMPLS resources will cause Azure Monitor DNS zones to override each other, and break existing environments. See [Plan by network topology](./private-link-design.md#plan-by-network-topology) to learn more. +> Because Azure Monitor uses some shared endpoints (meaning endpoints that aren't resource specific), setting up a private link even for a single resource changes the DNS configuration that affects traffic to *all resources*. In other words, traffic to all workspaces or components is affected by a single private link setup. +The use of shared endpoints also means you should use a single AMPLS for all networks that share the same DNS. 
Creating multiple AMPLS resources will cause Azure Monitor DNS zones to override each other and break existing environments. To learn more, see [Plan by network topology](./private-link-design.md#plan-by-network-topology). ### Shared global and regional endpoints-When configuring Private Link even for a single resource, traffic to the below endpoints will be sent through the allocated Private IPs. --* All Application Insights endpoints - endpoints handling ingestion, live metrics, profiler, debugger etc. to Application Insights endpoints are global. -* The Query endpoint - the endpoint handling queries to both Application Insights and Log Analytics resources is global. +When you configure Private Link even for a single resource, traffic to the following endpoints will be sent through the allocated private IPs: +* **All Application Insights endpoints**: Endpoints handling ingestion, live metrics, the Profiler, and the debugger to Application Insights endpoints are global. +* **The query endpoint**: The endpoint handling queries to both Application Insights and Log Analytics resources is global. > [!IMPORTANT]-> Creating a Private Link affects traffic to **all** monitoring resources, not only resources in your AMPLS. Effectively, it will cause all query requests as well as ingestion to Application Insights components to go through private IPs. However, it does not mean the Private Link validation applies to all these requests.</br> -> Resources not added to the AMPLS can only be reached if the AMPLS access mode is 'Open' and the target resource accepts traffic from public networks. While using the private IP, **Private Link validations don't apply to resources not in the AMPLS**. See [Private Link access modes](#private-link-access-modes-private-only-vs-open) to learn more. +> Creating a private link affects traffic to *all* monitoring resources, not only resources in your AMPLS. 
Effectively, it will cause all query requests and ingestion to Application Insights components to go through private IPs. It doesn't mean the private link validation applies to all these requests.</br> +> +>Resources not added to the AMPLS can only be reached if the AMPLS access mode is Open and the target resource accepts traffic from public networks. When you use the private IP, *private link validations don't apply to resources not in the AMPLS*. To learn more, see [Private Link access modes](#private-link-access-modes-private-only-vs-open). ### Resource-specific endpoints-Log Analytics endpoints are workspace-specific, except for the query endpoint discussed earlier. As a result, adding a specific Log Analytics workspace to the AMPLS will send ingestion requests to this workspace over the Private Link, while ingestion to other workspaces will continue to use the public endpoints. +Log Analytics endpoints are workspace specific, except for the query endpoint discussed earlier. As a result, adding a specific Log Analytics workspace to the AMPLS will send ingestion requests to this workspace over the private link. Ingestion to other workspaces will continue to use the public endpoints. -[Data Collection Endpoints](../essentials/data-collection-endpoint-overview.md) are also resource-specific, and allow you to uniquely configure ingestion settings for collecting guest OS telemetry data from your machines (or set of machines) when using the new [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) and [Data Collection Rules](../essentials/data-collection-rule-overview.md). Configuring a data collection endpoint for a set of machines does not affect ingestion of guest telemetry from other machines using the new agent. +[Data collection endpoints](../essentials/data-collection-endpoint-overview.md) are also resource specific. 
You can use them to uniquely configure ingestion settings for collecting guest OS telemetry data from your machines (or set of machines) when you use the new [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md) and [data collection rules](../essentials/data-collection-rule-overview.md). Configuring a data collection endpoint for a set of machines doesn't affect ingestion of guest telemetry from other machines that use the new agent. > [!IMPORTANT]-> Starting December 1, 2021, the Private Endpoints DNS configuration will use the Endpoint Compression mechanism, which allocates a single private IP address for all workspaces in the same region. This improves the supported scale (up to 300 workspaces and 1000 components per AMPLS) and reduces the total number of IPs taken from the network's IP pool. +> Starting December 1, 2021, the private endpoints DNS configuration will use the Endpoint Compression mechanism, which allocates a single private IP address for all workspaces in the same region. It improves the supported scale (up to 300 workspaces and 1,000 components per AMPLS) and reduces the total number of IPs taken from the network's IP pool. +## Private Link access modes: Private Only vs. Open +As discussed in [Azure Monitor private links rely on your DNS](#azure-monitor-private-links-rely-on-your-dns), only a single AMPLS resource should be created for all networks that share the same DNS. As a result, organizations that use a single global or regional DNS have a single private link to manage traffic to all Azure Monitor resources, across all global or regional networks. -## Private Link access modes: Private Only vs Open -As discussed in [Azure Monitor Private Link relies on your DNS](#azure-monitor-private-links-rely-on-your-dns), only a single AMPLS resource should be created for all networks that share the same DNS. 
As a result, organizations that use a single global or regional DNS in fact have a single Private Link to manage traffic to all Azure Monitor resources, across all global, or regional networks. +For private links created before September 2021, that means: -For Private Links created before September 2021, that means - * Log ingestion works only for resources in the AMPLS. Ingestion to all other resources is denied (across all networks that share the same DNS), regardless of subscription or tenant.-* Queries have a more open behavior, allowing query requests to reach even resources not in the AMPLS. The intention here was to avoid breaking customer queries to resources not in the AMPLS, and allow resource-centric queries to return the complete result set. +* Queries have a more open behavior that allows query requests to reach even resources not in the AMPLS. The intention here was to avoid breaking customer queries to resources not in the AMPLS and allow resource-centric queries to return the complete result set. -However, this behavior proved to be too restrictive for some customers (since it breaks ingestion to resources not in the AMPLS) and too permissive for others (since it allows querying resources not in the AMPLS). +This behavior proved to be too restrictive for some customers because it breaks ingestion to resources not in the AMPLS. But it was too permissive for others because it allows querying resources not in the AMPLS. -Therefore, Private Links created starting September 2021 have new mandatory AMPLS settings, that explicitly set how Private Links should affect network traffic. When creating a new AMPLS resource, you're now required to select the desired access modes, for ingestion and queries separately. -* Private Only mode - allows traffic only to Private Link resources -* Open mode - uses Private Link to communicate with resources in the AMPLS, but also allows traffic to continue to other resources as well. 
See [Control how Private Links apply to your networks](./private-link-design.md#control-how-private-links-apply-to-your-networks) to learn more. +Starting September 2021, private links have new mandatory AMPLS settings that explicitly set how they should affect network traffic. When you create a new AMPLS resource, you're now required to select the access modes you want for ingestion and queries separately: -> [!NOTE] -> While Log Analytics query requests are affected by the AMPLS access mode setting, Log Analytics ingestion requests use resource-specific endpoints, and are therefore not controlled by the AMPLS access mode. **To assure Log Analytics ingestion requests can't access workspaces out of the AMPLS, set the network firewall to block traffic to public endpoints, regardless of the AMPLS access modes**. +* **Private Only mode**: Allows traffic only to Private Link resources. +* **Open mode**: Uses Private Link to communicate with resources in the AMPLS, but also allows traffic to continue to other resources. To learn more, see [Control how private links apply to your networks](./private-link-design.md#control-how-private-links-apply-to-your-networks). ++Although Log Analytics query requests are affected by the AMPLS access mode setting, Log Analytics ingestion requests use resource-specific endpoints and aren't controlled by the AMPLS access mode. To ensure Log Analytics ingestion requests can't access workspaces out of the AMPLS, set the network firewall to block traffic to public endpoints, regardless of the AMPLS access modes. > [!NOTE]-> If you have configured Log Analytics with Private Link by initially setting the NSG rules to allow outbound traffic by ServiceTag:AzureMonitor, then the connected VMs would send the logs through Public endpoint. Later, if you change the rules to deny outbound traffic by ServiceTag:AzureMonitor, still the connected VMs would keep sending logs until you reboot the VMs or cut the sessions.
In order to make sure the desired configuration take immediate effect, the recommendation is to reboot the connected VMs. -> +> If you've configured Log Analytics with Private Link by initially setting the network security group rules to allow outbound traffic by `ServiceTag:AzureMonitor`, the connected VMs send the logs through a public endpoint. Later, if you change the rules to deny outbound traffic by `ServiceTag:AzureMonitor`, the connected VMs keep sending logs until you reboot the VMs or cut the sessions. To make sure the desired configuration takes immediate effect, reboot the connected VMs. +> + ## Next steps-- [Design your Private Link setup](private-link-design.md)-- Learn how to [configure your Private Link](private-link-configure.md)-- Learn about [private storage](private-storage.md) for Custom Logs and Customer managed keys (CMK)+- [Design your Azure Private Link setup](private-link-design.md). +- Learn how to [configure your private link](private-link-configure.md). +- Learn about [private storage](private-storage.md) for custom logs and customer-managed keys. <h3><a id="connect-to-a-private-endpoint"></a></h3> |
azure-netapp-files | Configure Network Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md | Two settings are available for network features: * If the Standard volume capability is supported for the region, the Network Features field of the Create a Volume page defaults to *Standard*. You can change this setting to *Basic*. * If the Standard volume capability is not available for the region, the Network Features field of the Create a Volume page defaults to *Basic*, and you cannot modify the setting. -* The ability to locate storage compatible with the desired type of network features depends on the VNet specified. If you cannot create a volume because of insufficient resources, you can try a different VNet for which compatible storage is available. +* The ability to locate storage compatible with the desired type of network features depends on the VNet specified. If you cannot create a volume because of insufficient resources, you can try a different VNet for which compatible storage is available. * You can create Basic volumes from Basic volume snapshots and Standard volumes from Standard volume snapshots. Creating a Basic volume from a Standard volume snapshot is not supported. Creating a Standard volume from a Basic volume snapshot is not supported. -* Conversion between Basic and Standard networking features in either direction is not currently supported. +* When restoring a backup to a new volume, the new volume can be configured with Basic or Standard network features. ++* Conversion between Basic and Standard network features in either direction is not currently supported. ## Set the Network Features option |
azure-percept | Create And Deploy Manually Azure Precept Devkit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/create-and-deploy-manually-azure-precept-devkit.md | The following guide is to help customers manually deploy a factory fresh IoT Edg - Highly recommended: Update your Azure Percept DK to the [latest version](./software-releases-usb-cable-updates.md) - Create an Azure account with an IoT Hub - Install [VSCode](https://code.visualstudio.com/Download)-- Install the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) Extension for VSCode+- Install the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) Extension for VSCode - Find the software image version running on your Azure Percept Devkit (see below) ## Identify your Azure Percept DK software version The deployment.json files are a representation of all default modules necessary 1. Download the appropriate deployment.json from [GitHub](https://github.com/microsoft/azure-percept-advanced-development/tree/main/default-configuration) for your reported software version. Refer to the [Identify your Azure Percept DK software version](#identify-your-azure-percept-dk-software-version) section above. 1. For 2021.111.124.xxx and later, use [default-deployment-2112.json](https://github.com/microsoft/azure-percept-advanced-development/blob/main/default-configuration/default-deployment-2112.json) 2. For 2021.109.129.xxx and lower, use [default-deployment-2108.json](https://github.com/microsoft/azure-percept-advanced-development/blob/main/default-configuration/default-deployment-2108.json)-2. Launch VSCode and Sign into Azure. Be sure you've installed the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) Extension. +2. Launch VSCode and sign in to Azure. 
Be sure you've installed the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) Extension.  |
azure-resource-manager | Bicep Config Linter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md | Title: Linter settings for Bicep config description: Describes how to customize configuration values for the Bicep linter Previously updated : 11/01/2022 Last updated : 01/30/2023 # Add linter settings in the Bicep config file The following example shows the rules that are available for configuration. "simplify-interpolation": { "level": "warning" },+ "use-parent-property": { + "level": "warning" + }, "use-protectedsettings-for-commandtoexecute-secrets": { "level": "warning" }, |
azure-resource-manager | Child Resource Name Type | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/child-resource-name-type.md | This article shows different ways you can declare a child resource. ### Training resources -If you would rather learn about about child resources through step-by-step guidance, see [Deploy child and extension resources by using Bicep](/training/modules/child-extension-bicep-templates). +If you would rather learn about child resources through step-by-step guidance, see [Deploy child and extension resources by using Bicep](/training/modules/child-extension-bicep-templates). ## Name and type pattern -In Bicep, you can specify the child resource either within the parent resource or outside of the parent resource. The values you provide for the resource name and resource type vary based on how you declare the child resource. However, the full name and type always resolve to the same pattern. +In Bicep, you can specify the child resource either within the parent resource or outside of the parent resource. The values you provide for the resource name and resource type vary based on how you declare the child resource. However, the full name and type always resolve to the same pattern. The **full name** of the child resource uses the pattern: If you have more than two levels in the hierarchy, keep repeating parent resourc {resource-provider-namespace}/{parent-resource-type}/{child-level1-resource-type}/{child-level2-resource-type} ``` -If you count the segments between `/` characters, the number of segments in the type is always one more than the number of segments in the name. +If you count the segments between `/` characters, the number of segments in the type is always one more than the number of segments in the name. 
## Within parent resource You can also use the full resource name and type when declaring the child resour :::code language="bicep" source="~/azure-docs-bicep-samples/syntax-samples/child-resource-name-type/fullnamedeclaration.bicep" highlight="10,11,17,18"::: > [!IMPORTANT]-> Setting the full resource name and type isn't the recommended approach. It's not as type safe as using one of the other approaches. +> Setting the full resource name and type isn't the recommended approach. It's not as type safe as using one of the other approaches. For more information, see [Linter rule: use parent property](./linter-rule-use-parent-property.md). ## Next steps |
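The segment rule described in this entry (the type always has exactly one more `/`-separated segment than the name, because the type starts with the resource-provider namespace) is easy to check mechanically. A minimal sketch, assuming nothing beyond the rule itself; the function name and examples are illustrative, not part of any Bicep tooling:

```python
def is_valid_name_type_pair(full_type: str, full_name: str) -> bool:
    """Check the Bicep child-resource rule: counting segments between '/'
    characters, the resource type has exactly one more segment than the
    resource name (the extra segment is the resource-provider namespace)."""
    return len(full_type.split("/")) == len(full_name.split("/")) + 1

# A file share: 4 type segments vs. 3 name segments -> valid.
print(is_valid_name_type_pair(
    "Microsoft.Storage/storageAccounts/fileServices/shares",
    "examplestorage/default/exampleshare"))
```

The same check flags a mismatch such as a two-segment name paired with a two-segment type, which is where deployments typically fail with name/type errors.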
azure-resource-manager | Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/install.md | Let's make sure your environment is set up for working with Bicep files. To auth | | [Visual Studio and Bicep extension](#visual-studio-and-bicep-extension) | automatic | | Deploy | [Azure CLI](#azure-cli) | automatic | | | [Azure PowerShell](#azure-powershell) | [manual](#install-manually) |-| | [VS Code and Bicep extension](#vs-code-and-bicep-extension) | automatic | +| | [VS Code and Bicep extension](#vs-code-and-bicep-extension) | [manual](#install-manually) | | | [Air-gapped cloud](#install-on-air-gapped-cloud) | download | ## VS Code and Bicep extension |
azure-resource-manager | Linter Rule Use Parent Property | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-use-parent-property.md | + + Title: Linter rule - use parent property +description: Linter rule - use parent property + Last updated : 01/30/2023+++# Linter rule - use parent property ++When defined outside of the parent resource, you format the name of the child resource with slashes to include the parent name. Setting the full resource name isn't the recommended approach. The syntax can be simplified by using the `parent` property. For more information, see [Full resource name outside parent](./child-resource-name-type.md#full-resource-name-outside-parent). 
++## Linter rule code ++Use the following value in the [Bicep configuration file](bicep-config-linter.md) to customize rule settings: ++`use-parent-property` ++## Solution ++The following example fails this test because of the name values for `service` and `share`: ++```bicep +param location string = resourceGroup().location ++resource storage 'Microsoft.Storage/storageAccounts@2021-02-01' = { + name: 'examplestorage' + location: location + kind: 'StorageV2' + sku: { + name: 'Standard_LRS' + } +} ++resource service 'Microsoft.Storage/storageAccounts/fileServices@2021-02-01' = { + name: 'examplestorage/default' + dependsOn: [ + storage + ] +} ++resource share 'Microsoft.Storage/storageAccounts/fileServices/shares@2021-02-01' = { + name: 'examplestorage/default/exampleshare' + dependsOn: [ + service + ] +} +``` ++You can fix the problem by using the `parent` property: ++```bicep +param location string = resourceGroup().location ++resource storage 'Microsoft.Storage/storageAccounts@2021-02-01' = { + name: 'examplestorage' + location: location + kind: 'StorageV2' + sku: { + name: 'Standard_LRS' + } +} ++resource service 'Microsoft.Storage/storageAccounts/fileServices@2021-02-01' = { + parent: storage + name: 'default' +} ++resource share 'Microsoft.Storage/storageAccounts/fileServices/shares@2021-02-01' = { + parent: service + name: 'exampleshare' +} +``` ++You can fix the issue automatically by selecting **Quick Fix** as shown on the following screenshot: +++## Next steps ++For more information about the linter, see [Use Bicep linter](./linter.md). |
azure-resource-manager | Linter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter.md | Title: Use Bicep linter description: Learn how to use Bicep linter. Previously updated : 11/01/2022 Last updated : 01/30/2023 # Use Bicep linter The default set of linter rules is minimal and taken from [arm-ttk test cases](. - [secure-params-in-nested-deploy](./linter-rule-secure-params-in-nested-deploy.md) - [secure-secrets-in-params](./linter-rule-secure-secrets-in-parameters.md) - [simplify-interpolation](./linter-rule-simplify-interpolation.md)+- [use-parent-property](./linter-rule-use-parent-property.md) - [use-protectedsettings-for-commandtoexecute-secrets](./linter-rule-use-protectedsettings-for-commandtoexecute-secrets.md) - [use-recent-api-versions](./linter-rule-use-recent-api-versions.md) - [use-resource-id-functions](./linter-rule-use-resource-id-functions.md) |
azure-resource-manager | Azure Subscription Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md | To learn more about limits on a more granular level, such as document size, quer [!INCLUDE [azure-cognitive-services-limits](../../../includes/azure-cognitive-services-limits.md)] +## Azure Communications Gateway limits ++Some of the following default limits and quotas can be increased. To request a change, create a [change request](/azure/communications-gateway/request-changes.md) stating the limit you want to change. +++Azure Communications Gateway also has limits on the SIP signaling. +++ ## Azure Container Apps limits For Azure Container Apps limits, see [Quotas in Azure Container Apps](../../container-apps/quotas.md). |
azure-resource-manager | Tag Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources.md | For examples of applying tags with SDKs, see: * [.NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/resourcemanager/Azure.ResourceManager/samples/Sample2_ManagingResourceGroups.md) * [Java](https://github.com/Azure-Samples/resources-java-manage-resource-group/blob/master/src/main/java/com/azure/resourcemanager/resources/samples/ManageResourceGroup.java) * [JavaScript](https://github.com/Azure-Samples/azure-sdk-for-js-samples/blob/main/samples/resources/resources_example.ts)-* [Python](https://github.com/Azure-Samples/resource-manager-python-resources-and-groups) +* [Python](https://github.com/MicrosoftDocs/samples/tree/main/Azure-Samples/azure-samples-python-management/resources) ## Inherit tags |
azure-resource-manager | Error Reserved Resource Name | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-reserved-resource-name.md | This article describes the error you get when deploying a resource that includes When deploying a resource, you may receive the following error: -``` +```output Code=ReservedResourceName; Message=The resource name <resource-name> or a part of the name is a trademarked or reserved word. ``` ## Cause -Resources that have an accessible endpoint, such as a fully qualified domain name, can't use reserved words or trademarks in the name. The name is checked when the resource is created, even if the endpoint isn't currently enabled. +Resources with an accessible endpoint, such as a fully qualified domain name, can't use reserved words or trademarks in the name. The name is checked when the resource is created, even if the endpoint isn't currently enabled. The following words are reserved: The following words are reserved: - SKYPE - VISIO - VISUALSTUDIO+- XBOX The following words can't be used as either a whole word or a substring in the name: - MICROSOFT - WINDOWS -The following words can't be used at the start of a resource name, but can be used later in the name: +The following word can't be used at the start of a resource name, but can be used later in the name: - LOGIN-- XBOX ## Solution |
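The three naming rules in this entry (reserved whole words, words banned anywhere in the name, and a word banned only at the start) can be sketched as a small validator. This is illustrative only: the real check runs server-side in Azure Resource Manager, the word lists below are abbreviated to the ones visible in this entry, and the exact matching semantics are an assumption:

```python
# Abbreviated word lists from the article; the published reserved list is longer.
RESERVED_WORDS = {"SKYPE", "VISIO", "VISUALSTUDIO", "XBOX"}  # banned as a whole word
RESERVED_SUBSTRINGS = {"MICROSOFT", "WINDOWS"}               # banned anywhere in the name
RESERVED_PREFIXES = {"LOGIN"}                                # banned only at the start

def hits_reserved_name_check(name: str) -> bool:
    """Return True if the name would likely trigger the
    ReservedResourceName deployment error (sketch, not the real check)."""
    upper = name.upper()
    if upper in RESERVED_WORDS:
        return True
    if any(word in upper for word in RESERVED_SUBSTRINGS):
        return True
    return any(upper.startswith(prefix) for prefix in RESERVED_PREFIXES)
```

For example, `loginportal` is rejected because LOGIN appears at the start, while `myloginportal` passes because LOGIN only appears later in the name.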
azure-web-pubsub | Howto Develop Eventhandler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-eventhandler.md | -The event handler handles the incoming client events. Event handlers are registered and configured in the service through portal or Azure CLI beforehand so that when a client event is triggered, the service can identify if the event is expected to be handled or not. We now support the event handler as the server side, which exposes public accessible endpoint for the service to invoke when the event is triggered. In other words, it acts as a **webhook**. +The event handler handles the incoming client events. Event handlers are registered and configured in the service through the Azure portal or Azure CLI. When a client event is triggered, the service can send the event to the appropriate event handler. The Web PubSub service now supports the event handler as the server-side, which exposes the publicly accessible endpoint for the service to invoke when the event is triggered. In other words, it acts as a **webhook**. -Service delivers client events to the upstream webhook using the [CloudEvents HTTP protocol](https://github.com/cloudevents/spec/blob/v1.0.1/http-protocol-binding.md). +The Web PubSub service delivers client events to the upstream webhook using the [CloudEvents HTTP protocol](https://github.com/cloudevents/spec/blob/v1.0.1/http-protocol-binding.md). -For every event, it formulates an HTTP POST request to the registered upstream and expects an HTTP response. +For every event, the service formulates an HTTP POST request to the registered upstream endpoint and expects an HTTP response. The data sending from the service to the server is always in CloudEvents `binary` format. The data sending from the service to the server is always in CloudEvents `binary ## Upstream and Validation -When configuring the webhook endpoint, the URL can use `{event}` parameter to define a URL template. 
The service calculates the value of the webhook URL dynamically when the client request comes in. For example, when a request `/client/hubs/chat` comes in, with a configured event handler URL pattern `http://host.com/api/{event}` for hub `chat`, when the client connects, it will first POST to this URL: `http://host.com/api/connect`. The parameter can be useful when a PubSub WebSocket client sends custom events, that the event handler helps dispatch different events to different upstream. Note that the `{event}` parameter is not allowed in the URL domain name. +When you configure the webhook endpoint, the URL can include the `{event}` parameter to define a URL template. The service calculates the value of the webhook URL dynamically when the client request comes in. For example, when a request `/client/hubs/chat` comes in, with a configured event handler URL pattern `http://host.com/api/{event}` for hub `chat`, when the client connects, it will first POST to this URL: `http://host.com/api/connect`. The `{event}` parameter can be useful when a PubSub WebSocket client sends custom events, so that the event handler can dispatch different events to different upstream endpoints. The `{event}` parameter isn't allowed in the URL domain name. -When setting up the event handler upstream through Azure portal or CLI, the service follows the [CloudEvents Abuse Protection](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection) to validate the upstream webhook. Every registered upstream webhook URL will be validated by this mechanism. The `WebHook-Request-Origin` request header is set to the service domain name `xxx.webpubsub.azure.com`, and it expects the response to have a header `WebHook-Allowed-Origin` to contain this domain name or `*`. 
+When setting up the event handler webhook through Azure portal or CLI, the service follows the [CloudEvents Abuse Protection](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection) to validate the upstream webhook. Every registered upstream webhook URL will be validated by this mechanism. The `WebHook-Request-Origin` request header is set to the service domain name `xxx.webpubsub.azure.com`, and it expects the response to have a header `WebHook-Allowed-Origin` to contain this domain name or `*`. When doing the validation, the `{event}` parameter is resolved to `validate`. For example, when trying to set the URL to `http://host.com/api/{event}`, the service will try to **OPTIONS** a request to `http://host.com/api/validate`. And only when the response is valid, the configuration can be set successfully. -For now, we do not support [WebHook-Request-Rate](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#414-webhook-request-rate) and [WebHook-Request-Callback](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#413-webhook-request-callback). +For now, we don't support [WebHook-Request-Rate](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#414-webhook-request-rate) and [WebHook-Request-Callback](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#413-webhook-request-callback). ## Authentication between service and webhook +You can use any of these methods to authenticate between the service and webhook. + - Anonymous mode - Simple Auth with `?code=<code>` is provided through the configured Webhook URL as query parameter.-- Use Azure Active Directory(Azure AD) authentication, check [here](howto-use-managed-identity.md) for details.- - Step1: Enable Identity for the Web PubSub service - - Step2: Select from existing AAD application that stands for your webhook web app +- Azure Active Directory(Azure AD) authentication. 
For more information, see [Use a managed identity in client events](howto-use-managed-identity.md#use-a-managed-identity-in-client-events-scenarios). ## Configure event handler ### Configure through Azure portal -Find your Azure Web PubSub service from **Azure portal**. Navigate to **Settings**. Then select **Add** to configure your server-side webhook URL. For an existing hub configuration, select **...** on right side will navigate to the same editing page. +You can add an event handler to a new hub or edit an existing hub. ++To configure an event handler in a new hub: ++1. Go to your Azure Web PubSub service page in the **Azure portal**. +1. Select **Settings** from the menu. +1. Select **Add** to create a hub and configure your server-side webhook URL. Note: To add an event handler to an existing hub, select the hub and select **Edit**. ++ :::image type="content" source="media/quickstart-serverless/set-event-handler.png" alt-text="Screenshot of setting the event handler."::: +1. Enter your hub name. +1. Select **Add** under **Configure Event Handlers**. +1. On the event handler page, configure the following fields: + 1. Enter the server webhook URL in the **URL Template** field. + 1. Select the **System events** that you want to subscribe to. + 1. Select the **User events** that you want to subscribe to. + 1. Select the **Authentication** method to authenticate upstream requests. + 1. Select **Confirm**. -Then in the below editing page, you'd need to configure hub name, server webhook URL, and select `user` and `system` events you'd like to subscribe. Finally select **Save** when everything is done. +1. Select **Save** at the top of the **Configure Hub Settings** page.
+ :::image type="content" source="media/quickstart-serverless/edit-event-handler.png" alt-text="Screenshot of Azure Web PubSub Configure Hub Settings."::: ### Configure through Azure CLI Use the Azure CLI [**az webpubsub hub**](/cli/azure/webpubsub/hub) group command Commands | Description --|---create | Create hub settings for WebPubSub Service. -delete | Delete hub settings for WebPubSub Service. -list | List all hub settings for WebPubSub Service. -show | Show hub settings for WebPubSub Service. -update | Update hub settings for WebPubSub Service. +`create` | Create hub settings for WebPubSub Service. +`delete` | Delete hub settings for WebPubSub Service. +`list` | List all hub settings for WebPubSub Service. +`show` | Show hub settings for WebPubSub Service. +`update` | Update hub settings for WebPubSub Service. -Below is an example of creating 2 webhook URLs for hub `MyHub` of `MyWebPubSub` resource. +Here's an example of creating two webhook URLs for hub `MyHub` of `MyWebPubSub` resource: ```azurecli-interactive az webpubsub hub create -n "MyWebPubSub" -g "MyResourceGroup" --hub-name "MyHub" --event-handler url-template="http://host.com" user-event-pattern="*" --event-handler url-template="http://host2.com" system-event="connected" system-event="disconnected" auth-type="ManagedIdentity" auth-resource="uri://myUri" az webpubsub hub create -n "MyWebPubSub" -g "MyResourceGroup" --hub-name "MyHub" ## Next steps |
backup | About Azure Vm Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/about-azure-vm-restore.md | Title: About the Azure Virtual Machine restore process description: Learn how the Azure Backup service restores Azure virtual machines Last updated 12/24/2021++ # About Azure VM restore |
backup | About Restore Microsoft Azure Recovery Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/about-restore-microsoft-azure-recovery-services.md | description: Learn about the restore options available with the Microsoft Azure Last updated 05/07/2021++ # About restore using the Microsoft Azure Recovery Services (MARS) agent |
backup | Active Directory Backup Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/active-directory-backup-restore.md | Title: Back up and restore Active Directory description: Learn how to back up and restore Active Directory domain controllers. Last updated 07/08/2020++ # Back up and restore Active Directory domain controllers |
backup | Archive Tier Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/archive-tier-support.md | description: Learn about Archive tier support for Azure Backup. Last updated 11/15/2022 - -++ # Overview of Archive tier in Azure Backup |
backup | Azure Backup Architecture For Sap Hana Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-backup-architecture-for-sap-hana-backup.md | Title: Azure Backup Architecture for SAP HANA Backup description: Learn about Azure Backup architecture for SAP HANA backup. Last updated 09/07/2022- -++ # Azure Backup architecture for SAP HANA backup |
backup | Azure Backup Glossary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-backup-glossary.md | Title: Azure Backup glossary description: This article defines terms helpful for use with Azure Backup. Last updated 12/21/2020++ # Azure Backup glossary |
backup | Azure Backup Move Vaults Across Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-backup-move-vaults-across-regions.md | description: In this article, you'll learn how to ensure continued backups after Last updated 09/24/2021 ++ # Back up resources in Recovery Services vault after moving across regions |
backup | Azure Backup Pricing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-backup-pricing.md | Title: Azure Backup pricing description: Learn how to estimate your costs for budgeting Azure Backup pricing. Last updated 06/16/2020++ # Azure Backup pricing |
backup | Azure File Share Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-file-share-support-matrix.md | description: Provides a summary of support settings and limitations when backing Last updated 10/14/2022 - -++ # Support matrix for Azure file share backup |
backup | Azure Policy Configure Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-policy-configure-diagnostics.md | Title: Configure Vault Diagnostics settings at scale description: Configure Log Analytics Diagnostics settings for all vaults in a given scope using Azure Policy Last updated 02/14/2020++ # Configure Vault Diagnostics settings at scale |
backup | Back Up Azure Stack Hyperconverged Infrastructure Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/back-up-azure-stack-hyperconverged-infrastructure-virtual-machines.md | Title: Back up Azure Stack HCI virtual machines with MABS description: This article contains the procedures to back up and recover virtual machines using Microsoft Azure Backup Server (MABS). Last updated 02/15/2022- -++ # Back up Azure Stack HCI virtual machines with Azure Backup Server |
backup | Back Up File Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/back-up-file-data.md | Title: Back up file data with MABS description: You can back up file data on server and client computers with MABS. Last updated 08/19/2021++ # Back up file data with MABS |
backup | Backup Afs Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-afs-cli.md | Title: Back up Azure file shares with Azure CLI description: Learn how to use Azure CLI to back up Azure file shares in the Recovery Services vault Last updated 01/14/2020++ # Back up Azure file shares with Azure CLI |
backup | Backup Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-architecture.md | Title: Architecture Overview description: Provides an overview of the architecture, components, and processes used by the Azure Backup service. Last updated 12/24/2021- -++ # Azure Backup architecture and components |
backup | Backup Azure About Mars | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-about-mars.md | Title: About the MARS Agent description: Learn how the MARS Agent supports the backup scenarios Last updated 11/28/2022- - ++ # About the Microsoft Azure Recovery Services (MARS) agent for Azure Backup |
backup | Backup Azure Afs Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-afs-automation.md | description: In this article, learn how to back up an Azure Files file share by Last updated 02/11/2022 - -++ # Back up an Azure file share by using PowerShell |
backup | Backup Azure Arm Restore Vms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md | description: Restore an Azure virtual machine from a recovery point by using the Last updated 12/06/2022- -++ # How to restore Azure VM data in Azure portal |
backup | Backup Azure Arm Userestapi Backupazurevms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-userestapi-backupazurevms.md | description: In this article, learn how to configure, initiate, and manage backu Last updated 08/03/2018 ms.assetid: b80b3a41-87bf-49ca-8ef2-68e43c04c1a3++ # Back up an Azure VM using Azure Backup via REST API |
backup | Backup Azure Arm Userestapi Createorupdatepolicy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-userestapi-createorupdatepolicy.md | description: In this article, you'll learn how to create and manage backup polic Last updated 06/13/2022 ms.assetid: 5ffc4115-0ae5-4b85-a18c-8a942f6d4870- -++ # Create Azure Recovery Services backup policies using REST API |
backup | Backup Azure Arm Userestapi Createorupdatevault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-userestapi-createorupdatevault.md | description: In this article, learn how to manage backup and restore operations Last updated 08/21/2018 ms.assetid: e54750b4-4518-4262-8f23-ca2f0c7c0439++ # Create Azure Recovery Services vault using REST API |
backup | Backup Azure Arm Userestapi Managejobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-userestapi-managejobs.md | description: In this article, learn how to track and manage backup and restore j Last updated 08/03/2018 ms.assetid: b234533e-ac51-4482-9452-d97444f98b38++ # Track backup and restore jobs using REST API |
backup | Backup Azure Arm Userestapi Restoreazurevms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-userestapi-restoreazurevms.md | description: In this article, learn how to manage restore operations of Azure Vi Last updated 08/26/2021 ms.assetid: b8487516-7ac5-4435-9680-674d9ecf5642++ # Restore Azure Virtual machines using REST API |
backup | Backup Azure Arm Vms Prepare | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-vms-prepare.md | Title: Back up Azure VMs in a Recovery Services vault description: Describes how to back up Azure VMs in a Recovery Services vault using the Azure Backup Last updated 09/29/2022- -++ # Back up Azure VMs in a Recovery Services vault |
backup | Backup Azure Auto Enable Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-auto-enable-backup.md | Title: Auto-Enable Backup on VM Creation using Azure Policy description: 'An article describing how to use Azure Policy to auto-enable backup for all VMs created in a given scope' Last updated 10/17/2022- -++ # Auto-Enable Backup on VM Creation using Azure Policy |
backup | Backup Azure Backup Cloud As Tape | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-cloud-as-tape.md | Title: How to replace your tape infrastructure description: Learn how Azure Backup provides tape-like semantics that enable you to back up and restore data in Azure Last updated 04/30/2017++ # Move your long-term storage from tape to the Azure cloud |
backup | Backup Azure Backup Exchange Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-exchange-server.md | description: Learn how to back up an Exchange server to Azure Backup using Syste Last updated 01/31/2019++ # Back up an Exchange server to Azure Backup with System Center 2012 R2 DPM |
backup | Backup Azure Backup Import Export | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-import-export.md | description: Learn how you can use Azure Backup to send data off the network by Last updated 12/05/2022++ # Offline seeding for MARS using customer-owned disks with Azure Import/Export |
backup | Backup Azure Backup Server Import Export | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-server-import-export.md | Title: Offline seeding workflow for DPM and MABS using customer-owned disks with description: With Azure Backup, you can send data off the network by using the Azure Import/Export service. This article explains the offline backup workflow for DPM and Azure Backup Server. Last updated 12/05/2022- -++ # Offline seeding for DPM/MABS using customer-owned disks with Azure Import/Export |
backup | Backup Azure Backup Sharepoint Mabs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-sharepoint-mabs.md | Title: Back up a SharePoint farm to Azure with MABS description: Use Azure Backup Server to back up and restore your SharePoint data. This article provides the information to configure your SharePoint farm so that desired data can be stored in Azure. You can restore protected SharePoint data from disk or from Azure. Last updated 11/29/2022- - ++ # Back up a SharePoint farm to Azure using Microsoft Azure Backup Server |
backup | Backup Azure Backup Sharepoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-sharepoint.md | Title: Back up a SharePoint farm to Azure with DPM description: This article provides an overview of DPM/Azure Backup server protection of a SharePoint farm to Azure Last updated 10/27/2022- - ++ # Back up a SharePoint farm to Azure with Data Protection Manager |
backup | Backup Azure Data Protection Use Rest Api Backup Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-data-protection-use-rest-api-backup-postgresql.md | Title: Back up Azure PostgreSQL databases using Azure data protection REST API description: In this article, learn how to configure, initiate, and manage backup operations of Azure PostgreSQL databases using REST API. Last updated 01/24/2022- - ms.assetid: 55fa0a81-018f-4843-bef8-609a44c97dcd++ # Back up Azure PostgreSQL databases using Azure data protection via REST API |
backup | Backup Azure Data Protection Use Rest Api Create Update Postgresql Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-data-protection-use-rest-api-create-update-postgresql-policy.md | Title: Create backup policies for Azure PostgreSQL databases using data protecti description: In this article, you'll learn how to create and manage backup policies for Azure PostgreSQL databases using REST API. Last updated 01/24/2022- ms.assetid: 759ee63f-148b-464c-bfc4-c9e640b7da6b++ # Create Azure Data Protection backup policies for Azure PostgreSQL databases using REST API |
backup | Backup Azure Database Postgresql Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql-overview.md | Title: About Azure Database for PostgreSQL backup description: An overview on Azure Database for PostgreSQL backup Last updated 01/24/2022- -++ # About Azure Database for PostgreSQL backup |
backup | Backup Azure Database Postgresql Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql-support-matrix.md | description: Provides a summary of support settings and limitations of Azure Dat Last updated 01/24/2022 - -++ # Azure Database for PostgreSQL server support matrix |
backup | Backup Azure Enhanced Soft Delete About | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-about.md | description: This article gives an overview of enhanced soft delete for Azure Ba Last updated 12/13/2022- -++ # About Enhanced soft delete for Azure Backup (preview) The key benefits of enhanced soft delete are: - **Configurable soft delete retention**: You can now specify the retention duration for deleted backup data, ranging from *14* to *180* days. By default, the retention duration is set to *14* days (as per basic soft delete) for the vault, and you can extend it as required. >[!Note]- >The soft delete doesn't cost you for first 14 days of retention; however, you're charged for the period beyond 14 days. [Learn more](#states-of-soft-delete-settings). + >Soft delete doesn't incur any cost for the first 14 days of retention; however, you're charged for the period beyond 14 days. [Learn more](#pricing). - **Re-registration of soft deleted items**: You can now register the items in soft deleted state with another vault. However, you can't register the same item with two vaults for active backups. - **Soft delete and reregistration of backup containers**: You can now unregister the backup containers (which you can soft delete) if you've deleted all backup items in the container. You can now register such soft deleted containers to other vaults. This is applicable only for supported workloads, including SQL in Azure VM backup, SAP HANA in Azure VM backup, and backup of on-premises servers. - **Soft delete across workloads**: Enhanced soft delete applies to all vaulted workloads alike and is supported for Recovery Services vaults and Backup vaults. However, it currently doesn't support operational tier workloads, such as Azure Files backup, Operational backup for Blobs, Disk and VM snapshot backups. |
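The retention pricing rule above can be illustrated with a small calculation. The 14-day free window and the configurable 14–180 day range come from the text; the function itself is a hypothetical sketch, not an official pricing formula:

```python
def billable_soft_delete_days(retention_days: int, free_days: int = 14) -> int:
    """Days of soft-delete retention that incur charges.

    Retention is configurable from 14 to 180 days; the first 14 days are
    free, and only the period beyond that is charged.
    """
    if not 14 <= retention_days <= 180:
        raise ValueError("retention must be between 14 and 180 days")
    return max(0, retention_days - free_days)

print(billable_soft_delete_days(14))  # 0 (the default stays within the free window)
print(billable_soft_delete_days(60))  # 46
```

So extending retention from the default 14 days to 60 days would add 46 billable days of soft-delete retention.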
bastion | Kerberos Authentication Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/kerberos-authentication-portal.md | In this section, the following steps help you modify your virtual network and ex ## To verify Bastion is using Kerberos +> [!NOTE] +> You must use the User Principal Name (UPN) to sign in using Kerberos. + Once you have enabled Kerberos on your Bastion resource, you can verify that it's actually using Kerberos for authentication to the target domain-joined VM. 1. Sign into the target VM (either via Bastion or not). Search for "Edit Group Policy" from the taskbar and open the **Local Group Policy Editor**. Once you have enabled Kerberos on your Bastion resource, you can verify that it' 1. End the VM session. 1. Connect to the target VM again using Bastion. Sign-in should succeed, indicating that Bastion used Kerberos (and not NTLM) for authentication. +## Quickstart: Setup Bastion with Kerberos - Resource Manager template ++### Review the template ++``` +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "location": { + "defaultValue": "[resourceGroup().location]", + "type": "string" + }, + "defaultNsgName": { + "type": "string", + "defaultValue": "Default-nsg" + }, + "VnetName": { + "type": "string", + "defaultValue": "myVnet" + }, + "ClientVMName": { + "defaultValue": "Client-vm", + "type": "string" + }, + "ServerVMName": { + "defaultValue": "Server-vm", + "type": "string" + }, + "vmsize": { + "defaultValue": "Standard_DS1_v2", + "type": "string", + "metadata": { + "description": "VM SKU to deploy" + } + }, + "ServerVMUsername": { + "type": "string", + "defaultValue": "serveruser", + "metadata": { + "description": "Admin username on all VMs." + } + }, + "ServerVMPassword": { + "type": "securestring", + "metadata": { + "description": "Admin password on all VMs." 
+ } + }, + "SafeModeAdministratorPassword": { + "type": "securestring", + "metadata": { + "description": "See https://learn.microsoft.com/en-us/powershell/module/addsdeployment/install-addsdomaincontroller?view=windowsserver2022-ps#-safemodeadministratorpassword" + } + }, + "ClientVMUsername": { + "type": "string", + "defaultValue": "clientuser", + "metadata": { + "description": "username on ClientVM." + } + }, + "ClientVMPassword": { + "type": "securestring", + "metadata": { + "description": "password on ClientVM." + } + }, + "ServerVmImage": { + "type": "object", + "defaultValue": { + "offer": "WindowsServer", + "publisher": "MicrosoftWindowsServer", + "sku": "2019-Datacenter", + "version": "latest" + } + }, + "ClientVmImage": { + "type": "object", + "defaultValue": { + "offer": "Windows", + "publisher": "microsoftvisualstudio", + "sku": "Windows-10-N-x64", + "version": "latest" + } + }, + "publicIPAllocationMethod": { + "type": "string", + "defaultValue": "Static" + }, + "BastionName": { + "defaultValue": "Bastion", + "type": "string" + }, + "BastionPublicIPName": { + "defaultValue": "Bastion-ip", + "type": "string" + } + }, + "variables": { + "DefaultSubnetId": "[concat(resourceId('Microsoft.Network/virtualNetworks', parameters('VnetName')), '/subnets/default')]", + "ClientVMSubnetId": "[concat(resourceId('Microsoft.Network/virtualNetworks', parameters('VnetName')), '/subnets/clientvm-subnet')]", + "DNSServerIpAddress": "10.16.0.4", + "ClientVMPrivateIpAddress": "10.16.1.4" + }, + "resources": [ + { + "apiVersion": "2020-03-01", + "name": "[parameters('VnetName')]", + "type": "Microsoft.Network/virtualNetworks", + "location": "[parameters('location')]", + "properties": { + "dhcpOptions": { + "dnsServers": [ "[variables('DNSServerIpAddress')]" ] + }, + "subnets": [ + { + "name": "default", + "properties": { + "addressPrefix": "10.16.0.0/24" + } + }, + { + "name": "clientvm-subnet", + "properties": { + "addressPrefix": "10.16.1.0/24" + } + }, + { + "name": 
"AzureBastionSubnet", + "properties": { + "addressPrefix": "10.16.2.0/24" + } + } + ], + "addressSpace": { + "addressPrefixes": [ + "10.16.0.0/16" + ] + } + } + }, + { + "type": "Microsoft.Network/networkInterfaces", + "apiVersion": "2018-10-01", + "name": "[concat(parameters('ServerVMName'), 'Nic')]", + "location": "[parameters('location')]", + "dependsOn": [ + "[concat('Microsoft.Network/virtualNetworks/', parameters('VnetName'))]" + ], + "properties": { + "ipConfigurations": [ + { + "name": "[concat(parameters('ServerVMName'), 'NicIpConfig')]", + "properties": { + "privateIPAllocationMethod": "Static", + "privateIPAddress": "[variables('DNSServerIpAddress')]", + "subnet": { + "id": "[variables('DefaultSubnetId')]" + } + } + } + ] + } + }, + { + "type": "Microsoft.Compute/virtualMachines", + "apiVersion": "2020-06-01", + "name": "[parameters('ServerVMName')]", + "location": "[parameters('location')]", + "dependsOn": [ + "[concat('Microsoft.Network/networkInterfaces/', parameters('ServerVMName'), 'Nic')]" + ], + "properties": { + "hardwareProfile": { + "vmSize": "[parameters('vmSize')]" + }, + "osProfile": { + "AdminUsername": "[parameters('ServerVMUsername')]", + "AdminPassword": "[parameters('ServerVMPassword')]", + "computerName": "[parameters('ServerVMName')]" + }, + "storageProfile": { + "imageReference": "[parameters('ServerVmImage')]", + "osDisk": { + "createOption": "FromImage", + "managedDisk": { + "storageAccountType": "Standard_LRS" + } + } + }, + "networkProfile": { + "networkInterfaces": [ + { + "id": "[ResourceId('Microsoft.Network/networkInterfaces/', concat(parameters('ServerVMName'), 'Nic'))]" + } + ] + } + } + }, + { + "type": "Microsoft.Compute/virtualMachines/extensions", + "apiVersion": "2021-04-01", + "name": "[concat(parameters('ServerVMName'),'/', 'PromoteToDomainController')]", + "location": "[parameters('location')]", + "dependsOn": [ + "[concat('Microsoft.Compute/virtualMachines/',parameters('ServerVMName'))]" + ], + "properties": { + 
"publisher": "Microsoft.Compute", + "type": "CustomScriptExtension", + "typeHandlerVersion": "1.7", + "autoUpgradeMinorVersion": true, + "settings": { + "commandToExecute": "[concat('powershell.exe -Command \"Install-windowsfeature AD-domain-services; Import-Module ADDSDeployment;$Secure_String_Pwd = ConvertTo-SecureString ',parameters('SafeModeAdministratorPassword'),' -AsPlainText -Force; Install-ADDSForest -DomainName \"bastionkrb.test\" -SafeModeAdministratorPassword $Secure_String_Pwd -Force:$true')]" + } + } + }, + { + "type": "Microsoft.Network/networkInterfaces", + "apiVersion": "2018-10-01", + "name": "[concat(parameters('ClientVMName'), 'Nic')]", + "location": "[parameters('location')]", + "dependsOn": [ + "[concat('Microsoft.Network/virtualNetworks/', parameters('VnetName'))]", + "[concat('Microsoft.Compute/virtualMachines/', parameters('ServerVMName'))]" + ], + "properties": { + "ipConfigurations": [ + { + "name": "[concat(parameters('ClientVMName'), 'NicIpConfig')]", + "properties": { + "privateIPAllocationMethod": "Static", + "privateIPAddress": "[variables('ClientVMPrivateIpAddress')]", + "subnet": { + "id": "[variables('ClientVMSubnetId')]" + } + } + } + ] + } + }, + { + "type": "Microsoft.Compute/virtualMachines", + "apiVersion": "2020-06-01", + "name": "[parameters('ClientVMName')]", + "location": "[parameters('location')]", + "dependsOn": [ + "[concat('Microsoft.Network/networkInterfaces/', parameters('ClientVMName'), 'Nic')]" + ], + "properties": { + "hardwareProfile": { + "vmSize": "[parameters('vmSize')]" + }, + "osProfile": { + "AdminUsername": "[parameters('ClientVMUsername')]", + "AdminPassword": "[parameters('ClientVMPassword')]", + "computerName": "[parameters('ClientVMName')]" + }, + "storageProfile": { + "imageReference": "[parameters('ClientVmImage')]", + "osDisk": { + "createOption": "FromImage", + "managedDisk": { + "storageAccountType": "Standard_LRS" + } + } + }, + "networkProfile": { + "networkInterfaces": [ + { + "id": 
"[ResourceId('Microsoft.Network/networkInterfaces/', concat(parameters('ClientVMName'), 'Nic'))]" + } + ] + } + } + }, + { + "type": "Microsoft.Compute/virtualMachines/extensions", + "apiVersion": "2021-04-01", + "name": "[concat(parameters('ClientVMName'),'/', 'DomainJoin')]", + "location": "[parameters('location')]", + "dependsOn": [ + "[concat('Microsoft.Compute/virtualMachines/',parameters('ClientVMName'))]", + "[concat('Microsoft.Compute/virtualMachines/', parameters('ServerVMName'),'/extensions/', 'PromoteToDomainController')]", + "[concat('Microsoft.Network/bastionHosts/', parameters('BastionName'))]" + ], + "properties": { + "publisher": "Microsoft.Compute", + "type": "CustomScriptExtension", + "typeHandlerVersion": "1.7", + "autoUpgradeMinorVersion": true, + "settings": { + "commandToExecute": "[concat('powershell.exe -Command Set-ItemProperty -Path HKLM:\\SYSTEM\\CurrentControlSet\\Control\\Lsa\\MSV1_0\\ -Name RestrictReceivingNTLMTraffic -Value 1; $Pass= ConvertTo-SecureString -String ',parameters('ServerVMPassword'),' -AsPlainText -Force; $Credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList \"AD\\serveruser\", $Pass; do { try { $joined = add-computer -computername Client-vm -domainname bastionkrb.test -credential $Credential -passthru -restart -force; } catch {}} while ($joined.HasSucceeded -ne $true)')]" + } + } + }, + { + "apiVersion": "2020-11-01", + "type": "Microsoft.Network/publicIPAddresses", + "name": "[parameters('BastionPublicIPName')]", + "location": "[resourceGroup().location]", + "sku": { + "name": "Standard" + }, + "properties": { + "publicIPAllocationMethod": "Static" + }, + "tags": {} + }, + { + "type": "Microsoft.Network/bastionHosts", + "apiVersion": "2020-11-01", + "name": "[parameters('BastionName')]", + "location": "[resourceGroup().location]", + "dependsOn": [ + "[concat('Microsoft.Network/virtualNetworks/', parameters('VnetName'))]", + "[concat('Microsoft.Network/publicIpAddresses/',
parameters('BastionPublicIPName'))]" + ], + "sku": { + "name": "Standard" + }, + "properties": { + "enableKerberos": "true", + "ipConfigurations": [ + { + "name": "IpConf", + "properties": { + "privateIPAllocationMethod": "Dynamic", + "publicIPAddress": { + "id": "[resourceId('Microsoft.Network/publicIpAddresses', parameters('BastionPublicIPName'))]" + }, + "subnet": { + "id": "[concat(resourceId('Microsoft.Network/virtualNetworks', parameters('VnetName')), '/subnets/AzureBastionSubnet')]" + } + } + } + ] + } + } + ] +} +``` ++The template does the following: +- Deploys the following Azure resources: + - [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks): creates an Azure virtual network. + - [**Microsoft.Network/bastionHosts**](/azure/templates/microsoft.network/bastionHosts): creates a Standard SKU Bastion host with a public IP and the Kerberos feature enabled. + - Creates a Windows 10 ClientVM and a Windows Server 2019 ServerVM. +- Points the DNS server of the virtual network to the private IP address of the ServerVM (domain controller). +- Runs a Custom Script Extension on the ServerVM to promote it to a domain controller with domain name `bastionkrb.test`. +- Runs a Custom Script Extension on the ClientVM to: + - Set **Restrict NTLM: Incoming NTLM traffic** = Deny all domain accounts (this ensures Kerberos is used for authentication). + - Join the `bastionkrb.test` domain. ++## Deploy the template +To set up Kerberos, deploy the preceding ARM template by running the following PowerShell command: +``` +New-AzResourceGroupDeployment -ResourceGroupName <your-rg-name> -TemplateFile "<path-to-template>\KerberosDeployment.json" +``` +## Review deployed resources +Now, sign in to the ClientVM using Bastion with Kerberos authentication: +- Credentials: username = `serveruser@bastionkrb.test` and password = `<password-entered-during-deployment>`.
++ ## Next steps For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md) |
chaos-studio | Chaos Studio Tutorial Aks Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-portal.md | Azure Chaos Studio uses [Chaos Mesh](https://chaos-mesh.org/), a free, open-sour > AKS Chaos Mesh faults are only supported on Linux node pools. ## Limitations-- At present Chaos Mesh faults don't work with private clusters.++- Previously, Chaos Mesh faults didn't work with private clusters. You can now use Chaos Mesh faults with private clusters by configuring [VNet Injection in Chaos Studio](chaos-studio-private-networking.md). ## Set up Chaos Mesh on your AKS cluster |
cloud-services | Cloud Services Guestos Msrc Releases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md | The following tables show the Microsoft Security Response Center (MSRC) updates ## January 2023 Guest OS ->[!NOTE] -->The January Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the January Guest OS. This list is subject to change. --| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | -| | | | | | -| Rel 23-01 | [5022289] | Latest Cumulative Update(LCU) | 5.77 | Jan 10, 2023 | -| Rel 23-01 | [5019958] | IE Cumulative Updates | 2.133, 3.120, 4.113 | Nov 8, 2022 | -| Rel 23-01 | [5022291] | Latest Cumulative Update(LCU) | 7.21 | Jan 10, 2023 | -| Rel 23-01 | [5022286] | Latest Cumulative Update(LCU) | 6.53 | Jan 10, 2023 | -| Rel 23-01 | [5020861] | .NET Framework 3.5 Security and Quality Rollup LKG | 2.133 | Dec 13, 2022 | -| Rel 23-01 | [5020869] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 2.133 | Dec 13, 2022 | -| Rel 23-01 | [5020862] | .NET Framework 3.5 Security and Quality Rollup LKG | 4.113 | Dec 13, 2022 | -| Rel 23-01 | [5020868] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 4.113 | Dec 13, 2022 | -| Rel 23-01 | [5020859] | .NET Framework 3.5 Security and Quality Rollup LKG | 3.120 | Dec 13, 2022 | -| Rel 23-01 | [5020867] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 3.120 | Dec 13, 2022 | -| Rel 23-01 | [5020866] | .NET Framework 4.7.2 Cumulative Update LKG | 6.53 | Dec 13, 2022 | -| Rel 23-01 | [5020877] | .NET Framework 4.8 Security and Quality Rollup LKG | 7.21 | Dec 13, 2022 | -| Rel 23-01 | [5022338] | Monthly Rollup | 2.133 | Jan 10, 2023 | -| Rel 23-01 | [5022348] 
| Monthly Rollup | 3.120 | Jan 10, 2023 | -| Rel 23-01 | [5022352] | Monthly Rollup | 4.113 | Jan 10, 2023 | -| Rel 23-01 | [5016263] | Servicing Stack update LKG | 3.120 | Jul 12, 2022 | -| Rel 23-01 | [5018922] | Servicing Stack update LKG | 4.113 | Oct 11, 2022 | -| Rel 23-01 | [4578013] | OOB Standalone Security Update | 4.113 | Aug 19, 2020 | -| Rel 23-01 | [5017396] | Servicing Stack update LKG | 5.77 | Sep 13, 2022 | -| Rel 23-01 | [5017397] | Servicing Stack update LKG | 2.133 | Sep 13, 2022 | -| Rel 23-01 | [4494175] | Microcode | 5.77 | Sep 1, 2020 | -| Rel 23-01 | [4494174] | Microcode | 6.53 | Sep 1, 2020 | ++| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | +| | | | | | +| Rel 23-01 | [5022289] | Latest Cumulative Update(LCU) | [5.77] | Jan 10, 2023 | +| Rel 23-01 | [5019958] | IE Cumulative Updates | [2.133], [3.120], [4.113] | Nov 8, 2022 | +| Rel 23-01 | [5022291] | Latest Cumulative Update(LCU) | [7.21] | Jan 10, 2023 | +| Rel 23-01 | [5022286] | Latest Cumulative Update(LCU) | [6.53] | Jan 10, 2023 | +| Rel 23-01 | [5020861] | .NET Framework 3.5 Security and Quality Rollup LKG | [2.133] | Dec 13, 2022 | +| Rel 23-01 | [5020869] | .NET Framework 4.6.2 Security and Quality Rollup LKG | [2.133] | Dec 13, 2022 | +| Rel 23-01 | [5020862] | .NET Framework 3.5 Security and Quality Rollup LKG | [4.113] | Dec 13, 2022 | +| Rel 23-01 | [5020868] | .NET Framework 4.6.2 Security and Quality Rollup LKG | [4.113] | Dec 13, 2022 | +| Rel 23-01 | [5020859] | .NET Framework 3.5 Security and Quality Rollup LKG | [3.120] | Dec 13, 2022 | +| Rel 23-01 | [5020867] | .NET Framework 4.6.2 Security and Quality Rollup LKG | [3.120] | Dec 13, 2022 | +| Rel 23-01 | [5020866] | .NET Framework 4.7.2 Cumulative Update LKG | [6.53] | Dec 13, 2022 | +| Rel 23-01 | [5020877] | .NET Framework 4.8 Security and Quality Rollup LKG | [7.21] | Dec 13, 2022 | +| Rel 23-01 | [5022338] | Monthly Rollup | [2.133] | Jan 10, 2023 | +| Rel 
23-01 | [5022348] | Monthly Rollup | [3.120] | Jan 10, 2023 | +| Rel 23-01 | [5022352] | Monthly Rollup | [4.113] | Jan 10, 2023 | +| Rel 23-01 | [5016263] | Servicing Stack update LKG | [3.120] | Jul 12, 2022 | +| Rel 23-01 | [5018922] | Servicing Stack update LKG | [4.113] | Oct 11, 2022 | +| Rel 23-01 | [4578013] | OOB Standalone Security Update | [4.113] | Aug 19, 2020 | +| Rel 23-01 | [5017396] | Servicing Stack update LKG | [5.77] | Sep 13, 2022 | +| Rel 23-01 | [5017397] | Servicing Stack update LKG | [2.133] | Sep 13, 2022 | +| Rel 23-01 | [4494175] | Microcode | [5.77] | Sep 1, 2020 | +| Rel 23-01 | [4494174] | Microcode | [6.53] | Sep 1, 2020 | [5022289]: https://support.microsoft.com/kb/5022289 [5019958]: https://support.microsoft.com/kb/5019958 The following tables show the Microsoft Security Response Center (MSRC) updates [5017397]: https://support.microsoft.com/kb/5017397 [4494175]: https://support.microsoft.com/kb/4494175 [4494174]: https://support.microsoft.com/kb/4494174+[2.133]: ./cloud-services-guestos-update-matrix.md#family-2-releases +[3.120]: ./cloud-services-guestos-update-matrix.md#family-3-releases +[4.113]: ./cloud-services-guestos-update-matrix.md#family-4-releases +[5.77]: ./cloud-services-guestos-update-matrix.md#family-5-releases +[6.53]: ./cloud-services-guestos-update-matrix.md#family-6-releases +[7.21]: ./cloud-services-guestos-update-matrix.md#family-7-releases ## December 2022 Guest OS |
cloud-services | Cloud Services Guestos Update Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md | Unsure about how to update your Guest OS? Check [this][cloud updates] out. ## News updates +###### **January 31, 2023** +The January Guest OS has released. + ###### **January 19, 2023** The December Guest OS has released. The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-7.21_202301-011 | January 31, 2023 | Post 7.23 | | WA-GUEST-OS-7.20_202212-01 | January 19, 2023 | Post 7.22 |-| WA-GUEST-OS-7.19_202211-01 | December 12, 2022 | Post 7.21 | +|~~WA-GUEST-OS-7.19_202211-01~~| December 12, 2022 | January 31, 2023 | |~~WA-GUEST-OS-7.18_202210-02~~| November 4, 2022 | January 19, 2023 | |~~WA-GUEST-OS-7.16_202209-01~~| September 29, 2022 | December 12, 2022 | |~~WA-GUEST-OS-7.15_202208-01~~| September 2, 2022 | November 4, 2022 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-6.53_202301-01 | January 31, 2023 | Post 6.55 | | WA-GUEST-OS-6.52_202212-01 | January 19, 2023 | Post 6.54 |-| WA-GUEST-OS-6.51_202211-01 | December 12, 2022 | Post 6.53 | +|~~WA-GUEST-OS-6.51_202211-01~~| December 12, 2022 | January 31, 2023 | |~~WA-GUEST-OS-6.50_202210-02~~| November 4, 2022 | January 19, 2023 | |~~WA-GUEST-OS-6.48_202209-01~~| September 29, 2022 | December 12, 2022 | |~~WA-GUEST-OS-6.47_202208-01~~| September 2, 2022 | November 4, 2022 | The September Guest OS has released. 
| Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-5.77_202301-01 | January 31, 2023 | Post 5.79 | | WA-GUEST-OS-5.76_202212-01 | January 19, 2023 | Post 5.78 | -| WA-GUEST-OS-5.75_202211-01 | December 12, 2022 | Post 5.77 | +|~~WA-GUEST-OS-5.75_202211-01~~| December 12, 2022 | January 31, 2023 | |~~WA-GUEST-OS-5.74_202210-02~~| November 4, 2022 | January 19, 2023 | |~~WA-GUEST-OS-5.72_202209-01~~| September 29, 2022 | December 12, 2022 | |~~WA-GUEST-OS-5.71_202208-01~~| September 2, 2022 | November 4, 2022 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-4.113_202301-01 | January 31, 2023 | Post 4.115 | | WA-GUEST-OS-4.112_202212-01 | January 19, 2023 | Post 4.114 |-| WA-GUEST-OS-4.111_202211-01 | December 12, 2022 | Post 4.113 | +|~~WA-GUEST-OS-4.111_202211-01~~| December 12, 2022 | January 31, 2023 | |~~WA-GUEST-OS-4.110_202210-02~~| November 4, 2022 | January 19, 2023 | |~~WA-GUEST-OS-4.108_202209-01~~| September 29, 2022 | December 12, 2022 | |~~WA-GUEST-OS-4.107_202208-01~~| September 2, 2022 | November 4, 2022 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-3.120_202301-01 | January 31, 2023 | Post 3.122 | | WA-GUEST-OS-3.119_202212-01 | January 19, 2023 | Post 3.121 |-| WA-GUEST-OS-3.118_202211-01 | December 12, 2022 | Post 3.120 | +|~~WA-GUEST-OS-3.118_202211-01~~| December 12, 2022 | January 31, 2023 | |~~WA-GUEST-OS-3.117_202210-02~~| November 4, 2022 | January 19, 2023 | |~~WA-GUEST-OS-3.115_202209-01~~| September 29, 2022 | December 12, 2022 | |~~WA-GUEST-OS-3.114_202208-01~~| September 2, 2022 | November 4, 2022 | The September Guest OS has released. 
| Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-2.133_202301-01 | January 31, 2023 | Post 2.135 | | WA-GUEST-OS-2.132_202212-01 | January 19, 2023 | Post 2.134 |-| WA-GUEST-OS-2.131_202211-01 | December 12, 2022 | Post 2.133 | +|~~WA-GUEST-OS-2.131_202211-01~~| December 12, 2022 | January 31, 2023 | |~~WA-GUEST-OS-2.130_202210-02~~| November 4, 2022 | January 19, 2023 | |~~WA-GUEST-OS-2.128_202209-01~~| September 29, 2022 | December 12, 2022 | |~~WA-GUEST-OS-2.127_202208-01~~| September 2, 2022 | November 4, 2022 | |
cognitive-services | Batch Synthesis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-synthesis.md | Here are examples that can result in the 400 error: - The `top` query parameter exceeded the limit of 100. - You tried to use an invalid deployment ID or a custom voice that isn't successfully deployed. Make sure the Speech resource has access to the custom voice, and the custom voice is successfully deployed. You must also ensure that the mapping of `{"your-custom-voice-name": "your-deployment-ID"}` is correct in your batch synthesis request. - You tried to delete a batch synthesis job that hasn't started or hasn't completed running. You can only delete batch synthesis jobs that have a status of "Succeeded" or "Failed".-- You tried to use a `F0` Speech resource, but the region only supports the `S0` (standard) Speech resource pricing tier. +- You tried to use an *F0* Speech resource, but the region only supports the *Standard* Speech resource pricing tier. - You tried to create a new batch synthesis job that would exceed the limit of 200 active jobs. Each Speech resource can have up to 200 batch synthesis jobs that don't have a status of "Succeeded" or "Failed". ### HTTP 404 error |
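The `{"your-custom-voice-name": "your-deployment-ID"}` mapping the article describes can be illustrated with a minimal request body. This is a sketch, not the authoritative schema: the voice name and deployment ID are hypothetical placeholders, and property names other than `customVoices` are assumptions to be checked against the REST reference for your API version.

```python
# Sketch of a batch synthesis request body showing the custom-voice mapping.
# The voice name and deployment ID are hypothetical; property names other
# than customVoices are assumptions -- check the REST reference.
custom_voice_name = "my-custom-voice"
deployment_id = "00000000-0000-0000-0000-000000000000"

synthesis_request = {
    "displayName": "batch synthesis sample",
    "textType": "PlainText",
    "inputs": [{"text": "Hello from batch synthesis."}],
    "synthesisConfig": {"voice": custom_voice_name},
    # Maps each custom voice name to its successfully deployed model:
    "customVoices": {custom_voice_name: deployment_id},
}

# The voice referenced in synthesisConfig must appear in the mapping;
# otherwise the service rejects the request with HTTP 400.
assert synthesis_request["synthesisConfig"]["voice"] in synthesis_request["customVoices"]
```

Keeping the voice name in one variable, as above, is an easy way to avoid the mismatch between `synthesisConfig` and `customVoices` that triggers the 400 error.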
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/overview.md | You can discover the following insights by using Bing Visual Search: | Insight | Description | |--|-| | Visually similar images | A list of images that are visually similar to the input image. |-| Visually similar products | Products that are visually similar to the product shown. | | Shopping sources | Places where you can buy the item shown in the input image. | | Related searches | Related searches made by others or that are based on the contents of the image. | | Webpages that include the image | Webpages that include the input image. | |
cognitive-services | Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/disaster-recovery.md | + + Title: Disaster recovery with cross-region support ++description: This article provides instructions on how to use the cross-region feature to recover your Cognitive Service resources in the event of a network outage. +++++ Last updated : 01/27/2023++++# Cross-region disaster recovery ++One of the first decisions every Cognitive Service customer makes is which region to create their resource in. The choice of region provides customers with the benefits of regional compliance by enforcing data residency requirements. Cognitive Services is available in [multiple geographies](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=cognitive-services) to ensure customers across the world are supported. ++It's rare, but possible, to encounter a network issue that affects an entire region. If your solution needs to always be available, then you should design it to either fail over into another region or split the workload between two or more regions. Both approaches require at least two resources in different regions and the ability to sync data between them. ++## Feature overview ++The cross-region disaster recovery feature, also known as Single Resource Multiple Region (SRMR), enables this scenario by allowing you to distribute traffic or copy custom models to multiple resources, which can exist in any supported geography. ++## SRMR business scenarios ++* To ensure high availability of your application, each Cognitive Service supports a flexible recovery region option that allows you to choose from a list of supported regions. +* Customers with globally distributed end users can deploy resources in multiple regions to optimize the latency of their applications. ++## Routing profiles ++Azure Traffic Manager routes requests among the selected regions.
The SRMR currently supports [Priority](/azure/traffic-manager/traffic-manager-routing-methods#priority-traffic-routing-method), [Performance](/azure/traffic-manager/traffic-manager-routing-methods#performance-traffic-routing-method), and [Weighted](/azure/traffic-manager/traffic-manager-routing-methods#weighted-traffic-routing-method) profiles, and is available for the following services: ++* [Computer Vision](/azure/cognitive-services/computer-vision/overview) +* [Immersive Reader](/azure/applied-ai-services/immersive-reader/overview) +* [Univariate Anomaly Detector](/azure/cognitive-services/anomaly-detector/overview) ++> [!NOTE] +> SRMR is not supported for multi-service resources or free tier resources. ++If you use Priority or Weighted Traffic Manager profiles, your configuration will behave according to the [Traffic Manager documentation](/azure/traffic-manager/traffic-manager-routing-methods). ++## Enable SRMR ++1. Sign in to the [Azure portal](https://portal.azure.com). +1. Navigate to your resource's page. +1. Under the **Resource Management** section on the left pane, select the **Regions** tab and choose a routing method. + :::image type="content" source="media/disaster-recovery/routing-method.png" alt-text="Screenshot of the routing method select menu in the Azure portal." lightbox="media/disaster-recovery/routing-method.png"::: +1. Select the **Add Region** link. +1. On the **Add Region** pop-up screen, set up additional regions for your resources. + :::image type="content" source="media/disaster-recovery/add-regions.png" alt-text="Screenshot of the Add Region popup in the Azure portal." lightbox="media/disaster-recovery/add-regions.png"::: +1. Save your changes.
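The fail-over design the article recommends can also be approximated client-side when Priority routing is in use: try the regional endpoints in order and fall back when a region is unavailable. A minimal sketch (the endpoint URLs and the `call_region` callable are hypothetical placeholders, not a Cognitive Services API):

```python
# Minimal client-side failover sketch: try regional endpoints in priority
# order and fall back when a region is unavailable. The endpoint URLs and
# call_region callable are hypothetical placeholders.
REGIONS = [
    "https://myresource-eastus.cognitiveservices.azure.com",
    "https://myresource-westeurope.cognitiveservices.azure.com",
]

def call_with_failover(call_region, regions=REGIONS):
    """Invoke call_region(endpoint) against each region until one succeeds."""
    last_error = None
    for endpoint in regions:
        try:
            return call_region(endpoint)
        except ConnectionError as err:  # treat as "region unavailable"
            last_error = err
    raise RuntimeError("all regions failed") from last_error

def fake_call(endpoint):
    # Simulated outage in the primary region; the secondary answers.
    if "eastus" in endpoint:
        raise ConnectionError("regional outage")
    return f"handled by {endpoint}"
```

With Traffic Manager-based SRMR this retry happens on the service side; the sketch only illustrates the same priority-ordered fallback idea for clients that address regional endpoints directly.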
++## See also +* [Create a new resource using the Azure portal](cognitive-services-apis-create-account.md) +* [Create a new resource using the Azure CLI](cognitive-services-apis-create-account-cli.md) +* [Create a new resource using the client library](cognitive-services-apis-create-account-client-library.md) +* [Create a new resource using an ARM template](create-account-resource-manager-template.md) |
cognitive-services | Health Entity Categories | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/concepts/health-entity-categories.md | Text Analytics for health detects medical concepts that fall under the following **AGE** - All age terms and phrases, including ones for patients, family members, and others. For example, 40-year-old, 51 yo, 3 months old, adult, infant, elderly, young, minor, middle-aged. +**ETHNICITY** - Phrases that indicate the ethnicity of the subject. For example, African American or Asian. +++ **GENDER** - Terms that disclose the gender of the subject. For example, male, female, woman, gentleman, lady. :::image type="content" source="../media/entities/age-entity.png" alt-text="An example of an age entity." lightbox="../media/entities/age-entity.png"::: Text Analytics for health detects medical concepts that fall under the following :::image type="content" source="../media/entities/family-relation.png" alt-text="Example of a family relation entity." lightbox="../media/entities/family-relation.png"::: +**EMPLOYMENT** - Mentions of employment status including specific profession, such as unemployed, retired, firefighter, student. +++**LIVING_STATUS** - Mentions of the housing situation, including homeless, living with parents, living alone, living with others. +++**SUBSTANCE_USE** - Mentions of use of legal or illegal drugs, tobacco or alcohol. For example, smoking, drinking, or heroin use. +++**SUBSTANCE_USE_AMOUNT** - Mentions of specific amounts of substance use. For example, a pack (of cigarettes) or a few glasses (of wine). +++ ## Treatment ### Entities |
cognitive-services | Call Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/how-to/call-api.md | +> [!TIP] +> If you want to test out the feature without writing any code, use the [Language Studio](../../language-studio.md). + There are two ways to call the service: * A [Docker container](use-containers.md) (synchronous) There are two ways to call the service: ## Specify the Text Analytics for health model -By default, Text Analytics for health will use the latest available AI model on your text. You can also configure your API requests to use a specific model version. The model you specify will be used to perform operations provided by the Text Analytics for health. +By default, Text Analytics for health will use the latest available AI model on your text. You can also configure your API requests to use a specific model version. The model you specify will be used to perform operations provided by Text Analytics for health. Extraction of social determinants of health entities is supported with the new preview model version "2023-01-01-preview". | Supported Versions | latest version | |--|--|+| `2023-01-01-preview` | `2023-01-01-preview` | | `2022-08-15-preview` | `2022-08-15-preview` | | `2022-03-01` | `2022-03-01` |-| `2021-05-15` | `2021-05-15` | + ### Text Analytics for health container Analysis is performed upon receipt of the request. If you send a request using t ## Submitting a Fast Healthcare Interoperability Resources (FHIR) request -To receive your result using the **FHIR** structure, you must send the FHIR version in the API request body. +Fast Healthcare Interoperability Resources (FHIR) is the health industry communication standard developed by the Health Level Seven International (HL7) organization. The standard defines the data formats (resources) and API structure for exchanging electronic healthcare data.
To receive your result using the **FHIR** structure, you must send the FHIR version in the API request body. | Parameter Name | Type | Value | |--|--|--| |
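Both the model version and the FHIR version travel as task parameters in the request body. The following sketch shows where they fit in an asynchronous Text Analytics for health request; the document text is illustrative, and the exact property casing and surrounding job-submission details are assumptions to verify against the REST reference for your API version.

```python
# Sketch of an async Text Analytics for health request body showing where a
# specific model version and the FHIR version are supplied. Property casing
# and job-submission details are assumptions -- check the REST reference.
analyze_request = {
    "analysisInput": {
        "documents": [
            {
                "id": "1",
                "language": "en",
                "text": "54-year-old patient, smokes, lives alone.",
            }
        ]
    },
    "tasks": [
        {
            "kind": "Healthcare",
            "parameters": {
                "modelVersion": "2023-01-01-preview",  # preview model version
                "fhirVersion": "4.0.1",  # requests FHIR-structured results
            },
        }
    ],
}
```

Omitting `modelVersion` falls back to the latest available model, per the default behavior described above.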
cognitive-services | Completions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/completions.md | keywords: -# Learn how to generate or manipulate text, including code +# Learn how to generate or manipulate text The completions endpoint can be used for a wide variety of tasks. It provides a simple but powerful text-in, text-out interface to any of our [models](../concepts/models.md). You input some text as a prompt, and the model will generate a text completion that attempts to match whatever context or pattern you gave it. For example, if you give the API the prompt, "As Descartes said, I think, therefore", it will return the completion " I am" with high probability. A: Two, Phobos and Deimos. Q: ```- ## Working with code The Codex model series is a descendant of OpenAI's base GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell. -You can use Codex for a variety of tasks including: -- * Turn comments into code - * Complete your next line or function in context - * Bring knowledge to you, such as finding a useful library or API call for an application - * Add comments - * Rewrite code for efficiency ---### Codex examples --Here are a few examples of using Codex --**Saying "Hello" (Python)** --``` -""" -Ask the user for their name and say "Hello" -""" -``` --**Create random names (Python)** --``` -""" -1. Create a list of first names -2. Create a list of last names -3. 
Combine them randomly into a list of 100 full names -""" -``` --**Create a MySQL query (Python)** --``` -""" -Table customers, columns = [CustomerId, FirstName, LastName, Company, Address, City, State, Country, PostalCode, Phone, Fax, Email, SupportRepId] -Create a MySQL query for all customers in Texas named Jane -""" -query = -``` --**Explaining code (JavaScript)** --``` -// Function 1 -var fullNames = []; -for (var i = 0; i < 50; i++) { - fullNames.push(names[Math.floor(Math.random() * names.length)] - + " " + lastNames[Math.floor(Math.random() * lastNames.length)]); -} --// What does Function 1 do? -``` --### Best practices --**Start with a comment, data or code.** To get Codex to create a useful completion it's helpful to think about what information a programmer would need to perform a task. This could just be a clear comment or the data needed to write a useful function, like the names of variables or what class a function handles. --``` -# Create a function called 'nameImporter' to add a first and last name to the database -``` --In this example we tell Codex what to call the function and what task it's going to perform. --This approach scales even to the point where you can provide Codex with a comment and an example of a database schema to get it to write useful query requests for various databases. --``` -# Table albums, columns = [AlbumId, Title, ArtistId] -# Table artists, columns = [ArtistId, Name] -# Table media_types, columns = [MediaTypeId, Name] -# Table playlists, columns = [PlaylistId, Name] -# Table playlist_track, columns = [PlaylistId, TrackId] -# Table tracks, columns = [TrackId, Name, AlbumId, MediaTypeId, GenreId, Composer, Milliseconds, Bytes, UnitPrice] --# Create a query for all albums by Adele -``` --When you show Codex the database schema it's able to make an informed guess about how to format a query. --**Specify the language.** Codex understands dozens of different programming languages. 
Many share similar conventions for comments, functions and other programming syntax. By specifying the language and what version in a comment, Codex is better able to provide a completion for what you want. That said, Codex is fairly flexible with style and syntax. --``` -# R language -# Calculate the mean distance between an array of points -``` --``` -# Python 3 -# Calculate the mean distance between an array of points -``` --*Prompt Codex with what you want it to do.* If you want Codex to create a webpage, placing the first line of code in an HTML document (`<!DOCTYPE html>`) after your comment tells Codex what it should do next. The same method works for creating a function from a comment (following the comment with a new line starting with `func` or `def`). --``` -<!-- Create a web page with the title 'Kat Katman attorney at paw' --> -<!DOCTYPE html> -``` --Placing `<!DOCTYPE html>` after our comment makes it very clear to Codex what we want it to do. --``` -# Create a function to count to 100 --def counter -``` --If we start writing the function Codex will understand what it needs to do next. --**Specifying libraries will help Codex understand what you want.** Codex is aware of a large number of libraries, APIs and modules. By telling Codex which ones to use, either from a comment or importing them into your code, Codex will make suggestions based upon them instead of alternatives. --``` -<!-- Use A-Frame version 1.2.0 to create a 3D website --> -<!-- https://aframe.io/releases/1.2.0/aframe.min.js --> -``` --By specifying the version, you can make sure Codex uses the most current library. --> [!NOTE] -> Codex can suggest helpful libraries and APIs, but always be sure to do your own research to make sure that they're safe for your application. --**Comment style can affect code quality.** With some languages, the style of comments can improve the quality of the output. 
For example, when working with Python, in some cases using doc strings (comments wrapped in triple quotes) can give higher quality results than using the pound (#) symbol. --``` -""" -Create an array of users and email addresses -""" -``` --**Putting comments inside of functions can be helpful.** Recommended coding standards suggest placing the description of a function inside the function. Using this format helps Codex more clearly understand what you want the function to do. --``` -def getUserBalance(id): - """ - Look up the user in the database 'UserData' and return their current account balance. - """ -``` --**Provide examples for more precise results.** If you have a particular style or format you need Codex to use, providing examples or demonstrating it in the first part of the request will help Codex more accurately match what you need. --``` -""" -Create a list of random animals and species -""" -animals = [ {"name": "Chomper", "species": "Hamster"}, {"name": -``` --**Lower temperatures give more precise results.** Setting the API temperature to 0, or close to zero (such as 0.1 or 0.2) tends to give better results in most cases. Unlike GPT-3 models, where a higher temperature can provide useful creative and random results, higher temperatures with Codex models may give you really random or erratic responses. --In cases where you need Codex to provide different potential results, start at zero and then increment upwards by .1 until you find suitable variation. --**Organize tasks into functions.** We can get Codex to write functions by specifying what the function should do in as precise terms as possible in a comment. 
By writing the following comment, Codex creates a JavaScript timer function that's triggered when a user presses a button: --A simple JavaScript timer --``` -// Create a timer that creates an alert in 10 seconds -``` --We can use Codex to perform common tasks with well known libraries like creating a customer with the Stripe API: --Create a Stripe customer in Python --``` -# Create a Stripe customer from an email address -``` --**Creating example data.** Testing applications often requires using example data. Because Codex is also a language model that understands how to comprehend and write natural language, you can ask Codex to create data like arrays of made up names, products and other variables. --``` -/* Create an array of weather temperatures for San Francisco */ -``` --Asking Codex to perform this task will produce a table like this: --``` -var weather = [ - { month: 'January', high: 58, low: 48 }, - { month: 'February', high: 61, low: 50 }, - { month: 'March', high: 64, low: 53 }, - { month: 'April', high: 67, low: 55 }, - { month: 'May', high: 70, low: 58 }, - { month: 'June', high: 73, low: 61 }, - { month: 'July', high: 76, low: 63 }, - { month: 'August', high: 77, low: 64 }, - { month: 'September', high: 76, low: 63 }, - { month: 'October', high: 73, low: 61 }, - { month: 'November', high: 68, low: 57 }, - { month: 'December', high: 64, low: 54 } -]; -``` --**Compound functions and small applications.** We can provide Codex with a comment consisting of a complex request like creating a random name generator or performing tasks with user input and Codex can generate the rest provided there are enough tokens. --``` -/* -Create a list animals -Create a list of cities -Use the lists to generate stories about what I saw at the zoo in each city -*/ -``` --**Use Codex to explain code.** Codex's ability to create and understand code allows us to use it to perform tasks like explaining what the code in a file does. 
One way to accomplish this is by putting a comment after a function that starts with "This function" or "This application is." Codex typically interprets this comment as the start of an explanation and completes the rest of the text. --``` -/* Explain what the previous function is doing: It -``` --**Explaining an SQL query.** In this example we use Codex to explain in a human readable format what an SQL query is doing. --``` -SELECT DISTINCT department.name -FROM department -JOIN employee ON department.id = employee.department_id -JOIN salary_payments ON employee.id = salary_payments.employee_id -WHERE salary_payments.date BETWEEN '2020-06-01' AND '2020-06-30' -GROUP BY department.name -HAVING COUNT(employee.id) > 10; Explanation of the above query in human readable format-``` --**Writing unit tests.** Creating a unit test can be accomplished in Python simply by adding the comment "Unit test" and starting a function. --``` -# Python 3 -def sum_numbers(a, b): - return a + b --# Unit test -def -``` --**Checking code for errors.** By using examples, you can show Codex how to identify errors in code. In some cases no examples are required, however demonstrating the level and detail to provide a description can help Codex understand what to look for and how to explain it. (A check by Codex for errors shouldn't replace careful review by the user. ) --``` -/* Explain why the previous function doesn't work. */ -``` --**Using source data to write database functions.** Just as a human programmer would benefit from understanding the database structure and the column names, Codex can use this data to help you write accurate query requests. In this example we insert the schema for a database and tell Codex what to query the database for. 
--``` -# Table albums, columns = [AlbumId, Title, ArtistId] -# Table artists, columns = [ArtistId, Name] -# Table media_types, columns = [MediaTypeId, Name] -# Table playlists, columns = [PlaylistId, Name] -# Table playlist_track, columns = [PlaylistId, TrackId] -# Table tracks, columns = [TrackId, Name, AlbumId, MediaTypeId, GenreId, Composer, Milliseconds, Bytes, UnitPrice] --# Create a query for all albums by Adele -``` --**Converting between languages.** You can get Codex to convert from one language to another by following a simple format where you list the language of the code you want to convert in a comment, followed by the code and then a comment with the language you want it translated into. --``` -# Convert this from Python to R -# Python version --[ Python code ] --# End --# R version -``` --**Rewriting code for a library or framework.** If you want Codex to make a function more efficient, you can provide it with the code to rewrite followed by an instruction on what format to use. --``` -// Rewrite this as a React component -var input = document.createElement('input'); -input.setAttribute('type', 'text'); -document.body.appendChild(input); -var button = document.createElement('button'); -button.innerHTML = 'Say Hello'; -document.body.appendChild(button); -button.onclick = function() { - var name = input.value; - var hello = document.createElement('div'); - hello.innerHTML = 'Hello ' + name; - document.body.appendChild(hello); -}; --// React version: -``` +Learn more about generating code completions with the [working with code guide](./work-with-code.md). ## Next steps +Learn [how to work with code (Codex)](./work-with-code.md). Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md). |
communication-services | Custom Teams Endpoint Authentication Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/custom-teams-endpoint-authentication-overview.md | The Fabrikam company has built a custom, Teams calling application for internal The following sequence diagram details single-tenant authentication. Before we begin: - Alice or her Azure AD administrator needs to give the custom Teams application consent, prior to the first attempt to sign in. Learn more about [consent](../../../active-directory/develop/consent-framework.md). - The Azure Communication Services resource admin needs to grant Alice permission to perform her role. Learn more about [Azure RBAC role assignment](../../../role-based-access-control/role-assignments-portal.md). Steps:-1. Authenticate Alice using Azure Active Directory: Alice is authenticated using a standard OAuth flow with *Microsoft Authentication Library (MSAL)*. If authentication is successful, the client application receives an Azure AD access token, with a value of 'A1' and an Object ID of an Azure AD user. Tokens are outlined later in this article. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md). -1. Get an access token for Alice: The application for Teams users performs control plane logic, using artifacts 'A1', 'A2' and 'A3'. This produces Azure Communication Services access token 'D' and gives Alice access. This access token can also be used for data plane actions in Azure Communication Services, like Calling. The 'A2' and 'A3' artifacts are expected to be passed along with the artifact 'A1' for validation that the Azure AD Token was issued to the expected user and application and will prevent attackers from using the Azure AD access tokens issued to other applications or other users. 
For more information on how to get 'A' artifacts, see [Receive the Azure AD user token and object ID via the MSAL library](../../quickstarts/manage-teams-identity.md?pivots=programming-language-csharp#step-1-receive-the-azure-ad-user-token-and-object-id-via-the-msal-library) and [Getting Application ID](../troubleshooting-info.md#getting-application-id). +1. Authenticate Alice using Azure Active Directory: Alice is authenticated using a standard OAuth flow with *Microsoft Authentication Library (MSAL)*. If authentication is successful, the client application receives an Azure AD access token, with a value of 'A1' and an Object ID of an Azure AD user with a value of 'A2'. Tokens are outlined later in this article. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md). +1. Get an access token for Alice: The Fabrikam application, using a custom authentication artifact with value 'B', performs authorization logic to decide whether Alice has permission to exchange the Azure AD access token for an Azure Communication Services access token. After successful authorization, the Fabrikam application performs control plane logic, using artifacts 'A1', 'A2', and 'A3'. Azure Communication Services access token 'D' is generated for Alice within the Fabrikam application. This access token can be used for data plane actions in Azure Communication Services, like Calling. The 'A2' and 'A3' artifacts are passed along with the artifact 'A1' for validation. The validation assures that the Azure AD token was issued to the expected user and application, and prevents attackers from using Azure AD access tokens issued to other applications or other users. 
For more information on how to get 'A' artifacts, see [Receive the Azure AD user token and object ID via the MSAL library](../../quickstarts/manage-teams-identity.md?pivots=programming-language-csharp#step-1-receive-the-azure-ad-user-token-and-object-id-via-the-msal-library) and [Getting Application ID](../troubleshooting-info.md#getting-application-id). 1. Call Bob: Alice makes a call to Teams user Bob, with Fabrikam's app. The call takes place via the Calling SDK with an Azure Communication Services access token. Learn more about [developing custom Teams clients](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md). Artifacts: - Artifact A2 - Type: Object ID of an Azure AD user - Source: Fabrikam's Azure AD tenant+ - Authority: `https://login.microsoftonline.com/<tenant>/` - Artifact A3 - Type: Azure AD application ID - Source: Fabrikam's Azure AD tenant+- Artifact B + - Type: Custom Fabrikam authorization artifact (issued either by Azure AD or a different authorization service) +- Artifact C + - Type: Azure Communication Services resource authorization artifact. + - Source: "Authorization" HTTP header with either a bearer token for [Azure AD authentication](../authentication.md#azure-ad-authentication) or a Hash-based Message Authentication Code (HMAC) payload and a signature for [access key-based authentication](../authentication.md#access-key). - Artifact D - Type: Azure Communication Services access token - Audience: _`Azure Communication Services`_ — data plane Before we begin: - Alice or her Azure AD administrator needs to give Contoso's Azure Active Directory application consent before the first attempt to sign in. Learn more about [consent](../../../active-directory/develop/consent-framework.md). Steps:-1. Authenticate Alice using the Fabrikam application: Alice is authenticated through Fabrikam's application. A standard OAuth flow with Microsoft Authentication Library (MSAL) is used.
Make sure you configure MSAL with a correct [authority](../../../active-directory/develop/msal-client-application-configuration.md#authority). If authentication is successful, the client application, the Contoso app in this case, receives an Azure AD access token with a value of 'A1' and an Object ID of an Azure AD user with a value of 'A2'. Token details are outlined below. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md). -1. Get an access token for Alice: The Contoso application by using a custom authentication artifact with value 'B' performs authorization logic to decide whether Alice has permission to exchange the Azure AD access token for an Azure Communication Services access token. After successful authorization, the Contoso application performs control plane logic, using artifacts 'A1', 'A2', and 'A3'. This generates Azure Communication Services access token 'D' for Alice within the Contoso application. This access token can be used for data plane actions in Azure Communication Services, like Calling. The 'A2' and 'A3' artifacts are expected to be passed along with the artifact 'A1' for validation that the Azure AD Token was issued to the expected user and application and will prevent attackers from using the Azure AD access tokens issued to other applications or other users. For more information on how to get 'A' artifacts, see [Receive the Azure AD user token and object ID via the MSAL library](../../quickstarts/manage-teams-identity.md?pivots=programming-language-csharp#step-1-receive-the-azure-ad-user-token-and-object-id-via-the-msal-library) and [Getting Application ID](../troubleshooting-info.md#getting-application-id). +1. Authenticate Alice using the Fabrikam application: Alice is authenticated through Fabrikam's application. A standard OAuth flow with Microsoft Authentication Library (MSAL) is used. 
Make sure you configure MSAL with a correct [authority](../../../active-directory/develop/msal-client-application-configuration.md#authority). If authentication is successful, the Contoso client application receives an Azure AD access token with a value of 'A1' and an Object ID of an Azure AD user with a value of 'A2'. Token details are outlined below. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md). +1. Get an access token for Alice: The Contoso application, using a custom authentication artifact with value 'B', performs authorization logic to decide whether Alice has permission to exchange the Azure AD access token for an Azure Communication Services access token. After successful authorization, the Contoso application performs control plane logic, using artifacts 'A1', 'A2', and 'A3'. An Azure Communication Services access token 'D' is generated for Alice within the Contoso application. This access token can be used for data plane actions in Azure Communication Services, like Calling. The 'A2' and 'A3' artifacts are passed along with the artifact 'A1' for validation. The validation assures that the Azure AD token was issued to the expected user and application, and prevents attackers from using Azure AD access tokens issued to other applications or users. For more information on how to get 'A' artifacts, see [Receive the Azure AD user token and object ID via the MSAL library](../../quickstarts/manage-teams-identity.md?pivots=programming-language-csharp#step-1-receive-the-azure-ad-user-token-and-object-id-via-the-msal-library) and [Getting Application ID](../troubleshooting-info.md#getting-application-id). +1. Authenticate Alice using the Fabrikam application: Alice is authenticated through Fabrikam's application. A standard OAuth flow with Microsoft Authentication Library (MSAL) is used.
Learn more about developing custom Teams apps [in this quickstart](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md). Artifacts: - Artifact B - Type: Custom Contoso authorization artifact (issued either by Azure AD or a different authorization service) - Artifact C- - Type: Hash-based Message Authentication Code (HMAC) (based on Contoso's _`connection string`_) + - Type: Azure Communication Services resource authorization artifact. + - Source: "Authorization" HTTP header with either a bearer token for [Azure AD authentication](../authentication.md#azure-ad-authentication) or a Hash-based Message Authentication Code (HMAC) payload and a signature for [access key-based authentication](../authentication.md#access-key) - Artifact D - Type: Azure Communication Services access token - Audience: _`Azure Communication Services`_ — data plane The following sample apps may be interesting to you: - To see how the Azure Communication Services access tokens for Teams users are acquired in a single-page application, check out a [SPA sample app](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/manage-teams-identity-spa). -- To learn more about a server implementation of an authentication service for Azure Communication Services check out the [Authentication service hero sample](../../samples/trusted-auth-sample.md).+- To learn more about a server implementation of an authentication service for Azure Communication Services, check out the [Authentication service hero sample](../../samples/trusted-auth-sample.md). |
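The A1/A2/A3 check described in this entry can be sketched as follows. This is an illustrative sketch only: it decodes the token payload to compare claims against the expected object ID (A2) and application ID (A3), and does not verify the token's signature, issuer, or expiry, which a real token-exchange service must do with a proper JWT validation library. The claim names `oid` and `appid` are standard Azure AD access-token claims.

```javascript
// Hedged sketch of the artifact validation described above: confirm the Azure
// AD access token (A1) was issued to the expected user (object ID, A2) and
// the expected client application (application ID, A3).
// WARNING: decoding without signature verification is for illustration only.
const decodeJwtPayload = (token) =>
  JSON.parse(Buffer.from(token.split('.')[1], 'base64url').toString('utf8'));

function tokenMatchesCaller(aadAccessToken, expectedObjectId, expectedAppId) {
  const claims = decodeJwtPayload(aadAccessToken);
  // 'oid' carries the user's object ID; 'appid' the client application ID.
  return claims.oid === expectedObjectId && claims.appid === expectedAppId;
}
```

A mismatch on either claim indicates the token was issued to a different user or application, and the exchange for an Azure Communication Services access token should be refused.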
communication-services | Video Effects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/video-effects.md | -> This library cannot be used standalone and can only work when used with the Azure Communication Calling client library for WebJS (https://www.npmjs.com/package/@azure/communication-calling). +> This library cannot be used standalone and can only work when used with the Azure Communication Calling client library for WebJS (https://www.npmjs.com/package/@azure/communication-calling). ++> [!NOTE] +> Currently browser support for creating video background effects is only supported on Chrome and Edge Desktop Browser (Windows and Mac) and Mac Safari Desktop. The Azure Communication Calling SDK allows you to create video effects that other users on a call will be able to see. For example, for a user doing ACS calling using the WebJS SDK you can now enable that the user can turn on background blur. When background blur enabled a user can feel more comfortable in doing a video call that the output video will just show a user and all other content will be blurred. Currently the video effects support the following ability: - Background blur - Replace the background with a custom image -## Browser support: --Currently creating video effects is only supported on Chrome and Edge Desktop Browser and Mac Safari Desktop. 
- ## Class model: | Name | Description | if (backgroundReplacementSupported) { await videoEffectsFeatureApi.startEffects(backgroundReplacementEffect); } -You can change the image used for this effect by passing a new image into the configure method: +//You can change the image used for this effect by passing a new image into the configure method: const newBackgroundImage = 'https://linkToNewImageFile'; await backgroundReplacementEffect.configure({ }); -You can switch the effects using the same method on the video effects feature api: +//You can switch the effects using the same method on the video effects feature api: // Switch to background blur await videoEffectsFeatureApi.startEffects(backgroundBlurEffect); |
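The snippets quoted in this entry check for support before starting an effect. A hedged sketch of that pattern as one helper is below; the `effectsApi` shape (`isSupported`/`startEffects`) mirrors the WebJS Calling SDK fragments quoted above but should be verified against the SDK reference before use.

```javascript
// Hedged sketch: start a video effect only when the current browser supports
// it (see the browser-support note in this entry). `effectsApi` is assumed to
// expose async isSupported(effect) and startEffects(effect) methods, as in
// the SDK snippets quoted above.
async function startEffectIfSupported(effectsApi, effect) {
  const supported = await effectsApi.isSupported(effect);
  if (!supported) {
    // e.g. an unsupported browser: fall back to plain video
    return false;
  }
  await effectsApi.startEffects(effect);
  return true;
}
```

The same helper works for background blur and background replacement, since both are started through `startEffects`.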
communications-gateway | Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md | + + Title: Deploy Azure Communications Gateway +description: This article guides you through how to deploy an Azure Communications Gateway. ++++ Last updated : 12/16/2022+++# Deploy Azure Communications Gateway ++This article will guide you through creating an Azure Communications Gateway resource in Azure. You must configure this resource before you can deploy Azure Communications Gateway. ++## Prerequisites ++Carry out the steps detailed in [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md). ++## 1. Start creating the Azure Communications Gateway resource ++In this step, you'll create the Azure Communications Gateway resource. ++1. Sign in to the [Azure portal](https://azure.microsoft.com/). +1. In the search bar at the top of the page, search for Communications Gateway and select **Communications Gateways**. ++ :::image type="content" source="media/deploy/search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for Azure Communications Gateway."::: ++1. Select the **Create** option. ++ :::image type="content" source="media/deploy/create.png" alt-text="Screenshot of the Azure portal. Shows the existing Azure Communications Gateway. A Create button allows you to create more Azure Communications Gateways."::: ++1. Use the information you collected in [Collect Azure Communications Gateway resource values](prepare-to-deploy.md#6-collect-basic-information-for-deploying-an-azure-communications-gateway) to fill out the fields in the **Basics** configuration section and then select **Next: Service Regions**. ++ :::image type="content" source="media/deploy/basics.png" alt-text="Screenshot of the Create an Azure Communications Gateway portal, showing the Basics section."::: ++1. 
Use the information you collected in [Collect Service Regions configuration values](prepare-to-deploy.md#7-collect-service-regions-configuration-values) to fill out the fields in the **Service Regions** section and then select **Next: Tags**. +1. (Optional) Configure tags for your Azure Communications Gateway resource: enter a **Name** and **Value** for each tag you want to create. +1. Select **Review + create**. ++If you've entered your configuration correctly, you'll see a **Validation Passed** message at the top of your screen. Navigate to the **Review + create** section. ++If you haven't filled in the configuration correctly, you'll see an error message in the configuration section(s) containing the invalid configuration. Select the flagged section(s) and use the information in the error messages to correct the invalid configuration before returning to the **Review + create** section. +++## 2. Submit your Azure Communications Gateway configuration ++Check your configuration and ensure it matches your requirements. If the configuration is correct, select **Create**. ++You now need to wait for your resource to be provisioned and connected to the Teams environment. Upon completion, your onboarding team will reach out to you and the Provisioning Status field on the resource overview will show as "Complete". We recommend you check in periodically to see if your resource has been provisioned. This process can take up to two weeks because updating ACLs in the Azure and Teams environments is done on a periodic basis. ++Once your resource has been provisioned, a message will appear saying **Your deployment is complete**. Select **Go to resource group**, and then check that your resource group contains the correct Azure Communications Gateway resource. +++## 3. Complete the JSON onboarding file ++Your onboarding team will require additional information to complete your Operator Connect onboarding.
If you're being onboarded to Operator Connect/Teams Phone Mobile by Microsoft, the onboarding team will reach out to you. +Wait for your onboarding team to confirm that the process is complete before testing your portal access. ++## 4. Test your portal access ++Navigate to the [Operator Connect homepage](https://operatorconnect.microsoft.com/) and ensure you're able to sign in. ++## 5. Register your Fully Qualified Domain Name (FQDN) ++Your Azure Communications Gateway will require a custom domain name inside your Active Directory tenant. Follow this step to set up the custom domain name that Teams will use to recognize an Azure Communications Gateway that belongs to you. ++1. Navigate to your Azure Communications Gateway resource and select **Properties**. You'll see a field named **Domain name**. This name is your custom domain name. +1. Complete the following procedure: [Add your custom domain name to Azure AD](/azure/active-directory/fundamentals/add-custom-domain). +1. Share your DNS TXT record information with your onboarding team. Wait for your onboarding team to confirm that the DNS TXT record has been configured correctly. +1. Complete the following procedure: [Verify your custom domain name](/azure/active-directory/fundamentals/add-custom-domain). ++## Next steps ++- [Prepare for live traffic with Azure Communications Gateway](prepare-for-live-traffic.md) |
communications-gateway | Emergency Calling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/emergency-calling.md | + + Title: Emergency Calling with Azure Communications Gateway +description: Understand Azure Communications Gateway's support for emergency calling ++++ Last updated : 01/09/2023++++# Emergency calling with Azure Communications Gateway ++Azure Communications Gateway supports Operator Connect and Teams Phone Mobile subscribers making emergency calls from Microsoft Teams clients. This article describes how Azure Communications Gateway routes these calls to your network and the key facts you'll need to consider. ++## Overview of emergency calling with Azure Communications Gateway ++If a subscriber uses a Microsoft Teams client to make an emergency call and the subscriber's number is associated with Azure Communications Gateway, Microsoft Phone System routes the call to Azure Communications Gateway. The call has location information encoded in a PIDF-LO (Presence Information Data Format Location Object) SIP body. ++Unless you choose to route emergency calls directly to an Emergency Routing Service Provider (US only), Azure Communications Gateway routes emergency calls to your network with this PIDF-LO location information unaltered. It is your responsibility to ensure that these emergency calls are properly routed to an appropriate Public Safety Answering Point (PSAP). For more information on how Microsoft Teams handles emergency calls, see [the Microsoft Teams documentation on managing emergency calling](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing#considerations-for-operator-connect). ++Microsoft Teams always sends location information on SIP INVITEs for emergency calls. 
This information can come from several sources, all supported by Azure Communications Gateway: ++- [Dynamic locations](/microsoftteams/configure-dynamic-emergency-calling), based on the location of the client used to make the call. + - Enterprise administrators must add physical locations associated with network connectivity into the Location Information Server (LIS) in Microsoft Teams. + - When Microsoft Teams clients make an emergency call, they obtain their physical location based on their network location. +- Static locations that you assign to numbers. + - The Operator Connect API allows you to associate numbers with locations that enterprise administrators have already configured in the Microsoft Teams Admin Center as part of uploading numbers. + - Azure Communications Gateway's API Bridge Number Management Portal also allows you to associate numbers with locations during upload. You can also manage the locations associated with numbers after the numbers have been uploaded. +- Static locations that your enterprise customers assign. When you upload numbers, you can choose whether enterprise administrators can modify the location information associated with each number. ++> [!NOTE] +> If you are taking responsibility for assigning static locations to numbers, note that enterprise administrators must have created the locations within the Microsoft Teams Admin Center first. ++Azure Communications Gateway identifies emergency calls based on the dialing strings configured when you [deploy the Azure Communications Gateway resource](deploy.md). These strings will also be used by Microsoft Teams to identify emergency calls. ++## Emergency calling in the United States ++Within the United States, Microsoft Teams supports the Emergency Routing Service Providers (ERSPs) listed in the ["911 service providers" section of the list of Session Border Controllers certified for Direct Routing)](/microsoftteams/direct-routing-border-controllers). 
Azure Communications Gateway has been certified to interoperate with these ERSPs. ++You must route emergency calls to one of these ERSPs. If your network doesn't support PIDF-LO SIP bodies, Azure Communications Gateway can route emergency calls directly to your chosen ERSP. You must arrange this routing with your onboarding team. ++## Emergency calling with Teams Phone Mobile ++For Teams Phone Mobile subscribers, Azure Communications Gateway routes emergency calls from Microsoft Teams clients to your network in the same way as other originating calls. The call includes location information in accordance with the [emergency call considerations for Teams Phone Mobile](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing#considerations-for-teams-phone-mobile). ++Your network must not route emergency calls from native dialers to Azure Communications Gateway or Microsoft Teams. ++## Next steps ++- Learn about [the key concepts in Microsoft Teams emergency calling](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing). +- Learn about [dynamic emergency calling in Microsoft Teams](/microsoftteams/configure-dynamic-emergency-calling). |
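This entry notes that Azure Communications Gateway identifies emergency calls by matching the dialed string against the dialing strings configured at deployment. A minimal sketch of that classification is below; the dialing strings shown are illustrative examples only, since the real set is configured per deployment.

```javascript
// Illustrative sketch of the dialing-string check described above: a call is
// classified as an emergency call when the dialed string matches one of the
// configured emergency dialing strings. These example strings are
// placeholders, not the configured values of any real deployment.
const emergencyDialStrings = new Set(['911', '933', '112']);

const isEmergencyCall = (dialedString) =>
  emergencyDialStrings.has(dialedString.trim());
```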
communications-gateway | Interoperability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability.md | + + Title: Interoperability of Azure Communications Gateway with Microsoft Teams +description: Understand how Azure Communications Gateway fits into your existing fixed and mobile networks and into Microsoft Teams ++++ Last updated : 12/07/2022++++# Interoperability of Azure Communications Gateway with Microsoft Teams ++Azure Communications Gateway sits at the edge of your network. This position allows it to manipulate signaling and media to meet the requirements of your networks and the Microsoft Phone System. Azure Communications Gateway includes many interoperability settings by default, and you can arrange further interoperability configuration. ++## Role and position in the network ++Azure Communications Gateway sits at the edge of your fixed line and mobile networks. It connects these networks to the Microsoft Phone System, allowing you to support Operator Connect (for fixed line networks) and Teams Phone Mobile (for mobile networks). The following diagram shows where Azure Communications Gateway sits in your network. ++ Architecture diagram showing Azure Communications Gateway connecting to the Microsoft Phone System, a softswitch in a fixed line deployment and a mobile IMS core. The mobile network also contains an application server for anchoring calls in the Microsoft Phone System. ++Calls flow from endpoints in your networks through Azure Communications Gateway and the Microsoft Phone System into Microsoft Teams clients. ++Azure Communications Gateway provides all the features of a traditional session border controller (SBC). 
These features include: ++- Signaling interworking features to solve interoperability problems +- Advanced media manipulation and interworking +- Defending against Denial of Service attacks and other malicious traffic +- Ensuring Quality of Service ++Azure Communications Gateway also offers dashboards that you can use to monitor key metrics of your deployment. ++You must provide the networking connection between Azure Communications Gateway and your core networks. For Teams Phone Mobile, you must also provide a network element that can route calls into the Microsoft Phone System for call anchoring. ++### Compliance with Certified SBC specifications ++Azure Communications Gateway supports the Microsoft specifications for Certified SBCs for Operator Connect and Teams Phone Mobile. For more information about certification and these specifications, see [Session Border Controllers certified for Direct Routing](/microsoftteams/direct-routing-border-controllers) and + the Operator Connect or Teams Phone Mobile documentation provided by your Microsoft representative. ++### Call control integration for Teams Phone Mobile +[Teams Phone Mobile](/microsoftteams/operator-connect-mobile-plan) allows you to offer Microsoft Teams call services for calls made from the native dialer on mobile handsets, for example presence and call history. These features require anchoring the calls in Microsoft's Intelligent Conversation and Communications Cloud (IC3), part of the Microsoft Phone System. ++The Microsoft Phone System relies on information in SIP signaling to determine whether a call is: ++- To a Teams Mobile Phone subscriber. +- From a Teams Mobile Phone subscriber or between two Teams Phone Mobile subscribers. ++Your core mobile network must supply this information to Azure Communications Gateway, by using unique trunks or by correctly populating an `X-MS-FMC` header as defined by the Teams Phone Mobile SIP specifications. 
++Your core mobile network must also be able to anchor and divert calls into the Microsoft Phone System. You can choose from the following options. ++- Deploying Metaswitch Mobile Control Point (MCP). MCP is an IMS Application Server that queries the Teams Phone Mobile Consultation API to determine whether the call involves a Teams Phone Mobile Subscriber. MCP then adds X-MS-FMC headers and updates the signaling to divert the call into the Microsoft Phone System through Azure Communications Gateway. +- Using other routing capabilities in your core network to detect Teams Phone Mobile subscribers and route INVITEs to or from these subscribers into the Microsoft Phone System through Azure Communications Gateway. ++> [!IMPORTANT] +> If an INVITE has an X-MS-FMC header, the core must not route the call to Microsoft Teams. The call has already been anchored in the Microsoft Phone System. ++## SIP signaling ++Azure Communications Gateway includes SIP trunks to your own network and can interwork between your existing core networks and the requirements of the Microsoft Phone System. For example, Azure Communications Gateway automatically interworks calls to support the following requirements from Operator Connect and Teams Phone Mobile: ++- SIP over TLS +- X-MS-SBC header (describing the SBC function) +- Strict rules on a= attribute lines in SDP bodies +- Strict rules on call transfer handling ++SIP trunks between your network and Azure Communications Gateway are multi-tenant, meaning that traffic from all your customers shares the same trunk. By default, traffic sent from the Azure Communications Gateway contains an X-MSTenantID header, which uniquely identifies which enterprise the traffic originates from and can be used by your billing systems. ++You can arrange more interworking functionality as part of your initial network design or at any time by raising a support request for Azure Communications Gateway.
For example, you might need extra interworking configuration for: ++- Advanced SIP header or SDP message manipulation +- Support for reliable provisional messages (100rel) +- Interworking between early and late media +- Interworking away from inband DTMF tones +- Placing the unique tenant ID elsewhere in SIP messages to make it easier for your network to consume, for example in tgrp parameters ++## RTP and SRTP media ++The Microsoft Phone System typically requires SRTP for media. Azure Communications Gateway supports both RTP and SRTP, and can interwork between them. Azure Communications Gateway offers further media manipulation features to allow your networks to interoperate with the Microsoft Phone System. ++### Media handling for calls ++You must select the codecs that you want to support when you deploy Azure Communications Gateway. If the Microsoft Phone System doesn't support these codecs, Azure Communications Gateway can perform transcoding (converting between codecs) on your behalf. ++Operator Connect and Teams Phone Mobile require core networks to support ringback tones (ringing tones) during call transfer. Core networks must also support comfort noise. If your core networks can't meet these requirements, Azure Communications Gateway can inject media into calls. ++### Media interworking options ++Azure Communications Gateway offers multiple media interworking options. For example, you might need to: ++- Change handling of RTCP +- Control bandwidth allocation +- Prioritize specific media traffic for Quality of Service ++For full details of the media interworking features available in Azure Communications Gateway, raise a support request. ++## Compatibility with monitoring requirements ++The Azure Communications Gateway service includes continuous monitoring for potential faults in your deployment. 
The metrics we monitor cover all metrics required to be monitored by Operators as part of the Operator Connect program and include: ++- Call quality +- Call errors and unusual behavior (for example, call setup failures, short calls, or unusual disconnections) +- Other errors in Azure Communications Gateway ++We'll investigate the potential fault, and determine whether the fault relates to Azure Communications Gateway or the Microsoft Phone System. We may require you to carry out some troubleshooting steps in your networks to help isolate the fault. ++Azure Communications Gateway provides metrics that you can use to monitor the overall health of your Azure Communications Gateway deployment. If you notice any concerning metrics, you can raise an Azure Communications Gateway support ticket. ++## Next steps ++- Learn about [monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md). +- Learn about [requesting changes to Azure Communications Gateway](request-changes.md). |
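The anchoring rule called out in this entry's important note can be sketched as a small routing predicate: an INVITE that already carries an `X-MS-FMC` header has been anchored in the Microsoft Phone System, so the core must not divert it to Microsoft Teams again. Representing SIP headers as a simple name-to-value object is an assumption for illustration.

```javascript
// Hedged sketch of the core-network routing rule described above. SIP header
// names are case-insensitive, so the check normalizes to lowercase.
function shouldDivertToTeams(inviteHeaders, isTeamsPhoneMobileSubscriber) {
  const alreadyAnchored = Object.keys(inviteHeaders).some(
    (name) => name.toLowerCase() === 'x-ms-fmc'
  );
  return isTeamsPhoneMobileSubscriber && !alreadyAnchored;
}
```

A real core network implements this decision in its routing logic (for example via MCP, as described above), not in application code, but the predicate captures the rule.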
communications-gateway | Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/limits.md | + + Title: Azure Communications Gateway limits, quotas and restrictions +description: Understand the limits and quotas associated with the Azure Communications Gateway ++++ Last updated : 01/11/2023 +++# Azure Communications Gateway limits, quotas and restrictions ++This article contains the usage limits and quotas that apply to Azure Communications Gateway. If you're looking for the full set of Microsoft Azure service limits, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md). ++## General restrictions +++## SIP message restrictions ++Azure Communications Gateway applies restrictions to individual fields in SIP messages. These restrictions are applied for: ++* Performance - Having to process oversized message elements decreases system performance. +* Resilience - Some oversized message elements are commonly used in denial of service attacks to consume resources. +* Security - Some network devices may fail to process messages that exceed these limits. ++### SIP size limits +++### SIP behavior restrictions +++## Next steps ++Some default limits and quotas can be increased. To request a change to a limit, raise a [change request](request-changes.md) stating the limit you want to change. |
communications-gateway | Monitor Azure Communications Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/monitor-azure-communications-gateway.md | + + Title: Monitoring your Azure Communications Gateway +description: Start here to learn how to monitor Azure Communications Gateway. +++++ Last updated : 01/25/2023+++# Monitoring Azure Communications Gateway ++When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by Azure Communications Gateway and how you can use the features of Azure Monitor to analyze and alert on this data. ++Azure Communications Gateway uses [Azure Monitor](/azure/azure-monitor/overview). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource). +++## What is Azure Monitor? ++Azure Communications Gateway creates monitoring data using [Azure Monitor](../azure-monitor/overview.md), which is a full stack monitoring service in Azure. Azure Monitor provides a complete set of features to monitor your Azure resources. It can also monitor resources in other clouds and on-premises. ++Start with the article [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md), which describes the following concepts: ++- What is Azure Monitor? +- Costs associated with monitoring +- Monitoring data collected in Azure +- Configuring data collection +- Standard tools in Azure for analyzing and alerting on monitoring data ++The following sections build on this article by describing the specific data gathered for Azure Communications Gateway.
These sections also provide examples for configuring data collection and analyzing this data with Azure tools. ++> [!TIP] +> To understand costs associated with Azure Monitor, see [Usage and estimated costs](../azure-monitor/usage-estimated-costs.md). +++## Monitoring data ++Azure Communications Gateway collects metrics. See [Monitoring Azure Communications Gateway data reference](monitoring-azure-communications-gateway-data-reference.md) for detailed information on the metrics created by Azure Communications Gateway. Azure Communications Gateway doesn't collect logs. ++ For clarification on the different types of metrics available in Azure Monitor, see [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-azure-resources). ++## Analyzing metrics ++You can analyze metrics for Azure Communications Gateway, along with metrics from other Azure services, by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool. ++For a list of the metrics collected, see [Monitoring Azure Communications Gateway data reference](monitoring-azure-communications-gateway-data-reference.md). ++For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md). ++## Filtering and splitting ++All Azure Communications Gateway metrics support the **Region** dimension, allowing you to filter any metric by the Service Locations defined in your Azure Communications Gateway resource. ++You can also split a metric by the **Region** dimension to visualize how different segments of the metric compare with each other. ++For more information on filtering and splitting, see [Advanced features of Azure Monitor](../azure-monitor/essentials/metrics-charts.md).
++## Alerts ++Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. ++Azure Communications Gateway doesn't currently support alerts. ++## Next steps ++- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources. |
communications-gateway | Monitoring Azure Communications Gateway Data Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/monitoring-azure-communications-gateway-data-reference.md | + + Title: Monitoring Azure Communications Gateway data reference +description: Important reference material needed when you monitor Azure Communications Gateway +++++ Last updated : 01/25/2023++++# Monitoring Azure Communications Gateway data reference ++Learn about the data and resources collected by Azure Monitor from your Azure Communications Gateway deployment. See [Monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md) for details on collecting and analyzing monitoring data for Azure Communications Gateway. ++## Metrics ++This section lists all the metrics automatically collected for Azure Communications Gateway. The resource provider for these metrics is `Microsoft.VoiceServices/communicationsGateways`. ++### Error metrics ++| Metric | Unit | Description | +|:-|:-|:| +| Active Call Failures | Percentage | Percentage of active calls that fail. This metric includes, for example, calls where the media is dropped and calls that are torn down unexpectedly.| +++### Traffic metrics ++| Metric | Unit | Description | +|:-|:-|:| +| Active Calls | Count | Count of the total number of active calls. | +| Active Emergency Calls | Count | Count of the total number of active emergency calls.| ++For more information, see a list of [all metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported). ++## Metric Dimensions ++For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics). ++Azure Communications Gateway has the following dimensions associated with its metrics.
++| Dimension Name | Description | +| - | -- | +| **Region** | The Service Locations defined in your Azure Communications Gateway resource. | +++## See also +- See [Monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md) for a description of monitoring Azure Communications Gateway. +- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources. |
communications-gateway | Onboarding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/onboarding.md | + + Title: Onboarding to Microsoft Teams Phone with Azure Communications Gateway +description: Understand the Azure Communications Gateway Basic Integration Included Benefit for onboarding to Operator Connect and your other options for onboarding ++++ Last updated : 01/18/2023++++# Onboarding to Microsoft Teams with Azure Communications Gateway ++To launch Operator Connect and/or Teams Phone Mobile, you'll need an onboarding partner. Launching requires changes to the Operator Connect or Teams Phone Mobile environments and your onboarding partner manages the integration process and coordinates with Microsoft Teams on your behalf. They can also help you design and set up your network for success. ++If you're launching Operator Connect, Azure Communications Gateway includes an off-the-shelf onboarding service called the Basic Integration Included Benefit. It's suitable for simple Operator Connect use cases. ++If you're launching Teams Phone Mobile, you're not eligible for the Basic Integration Included Benefit. See [Alternatives to the Basic Integration Included Benefit](#alternatives-to-the-basic-integration-included-benefit). ++## Onboarding with the Basic Integration Included Benefit for Operator Connect ++The Basic Integration Included Benefit (BIIB) helps you to onboard customers to your Microsoft Teams Operator Connect offering as quickly as possible. You'll need to meet the [eligibility requirements](#eligibility-for-the-basic-integration-included-benefit). ++If you're eligible, we'll assign the following people as your onboarding team. ++- A remote **Project Manager** as a single point of contact. The Project Manager is responsible for communicating the schedule and keeping you up to date with your onboarding status. 
+- Microsoft **Delivery Consultants** and other technical personnel, led by a **Technical Delivery Manager**. These people guide and support you through the onboarding process for Microsoft Teams Operator Connect. The process includes providing and certifying the Operator Connect SBC functionality and launching your Operator Connect service in the Teams Admin Center. ++### Eligibility for the Basic Integration Included Benefit ++To be eligible for the BIIB, you must first deploy an Azure Communications Gateway resource. In addition: ++- You must be launching Microsoft Teams Operator Connect for fixed-line calls (not Teams Phone Mobile). +- Your network must be capable of meeting the [reliability requirements for Azure Communications Gateway](reliability-communications-gateway.md). +- You must not have more than two Azure service regions (the regions containing the voice and API infrastructure for traffic). +- You must not require any interworking options that aren't listed in the [interoperability description](interoperability.md). +- You must not require any API customization as part of the API Bridge feature (if you choose to deploy the API Bridge). ++If you don't meet these requirements, see [Alternatives to the Basic Integration Included Benefit](#alternatives-to-the-basic-integration-included-benefit). ++If we (Microsoft) determine at our sole discretion that your integration needs are unusually complex, we might: ++- Decline to provide the BIIB. +- Stop providing the BIIB, even if we've already started providing it. ++This limitation applies even if you're otherwise eligible. ++We might also stop providing the BIIB if you don't meet [your obligations with the Basic Integration Included Benefit](#your-obligations-with-the-basic-integration-included-benefit), including making timely responses to questions and fulfilling dependencies. 
++### Phases of the Basic Integration Included Benefit ++When you've deployed your Azure Communications Gateway resource, your onboarding team will help you to ensure that Azure Communications Gateway and your network are properly configured for Operator Connect. Your onboarding team will then help you through the Operator Connect onboarding process, so that your service is launched in the Teams Admin Center. ++The BIIB has three phases. During these phases, you'll be responsible for some steps. See [Your obligations with the Basic Integration Included Benefit](#your-obligations-with-the-basic-integration-included-benefit). ++#### Phase 1: gathering information ++We'll share the Teams Operator Connect specification documents (for example, for network connectivity) if you don't already have access to them. We'll also provide an Operator Connect onboarding form and a proposed test plan. When you've given us the information listed in the onboarding form, your onboarding team will work with you to create a project timeline describing your path to launching in the Teams Admin Center. ++#### Phase 2: preparing Azure Communications Gateway and your networks ++We'll use the information you provided with the onboarding form to set up Azure Communications Gateway. We'll also provide guidance on preparing your own environment for Azure Communications Gateway. ++#### Phase 3: preparing for live traffic ++Your onboarding team will work through the steps described in [Prepare for live traffic with Azure Communications Gateway](prepare-for-live-traffic.md) with you. As part of these steps, we'll: ++ - Work through the test plan we agreed, with your help. + - Provide training on the Azure Communications Gateway resource for you and your support staff. + - Help you to prepare for launch. ++### Your obligations with the Basic Integration Included Benefit ++You're responsible for: ++- Arranging Microsoft Azure Peering Service (MAPS) connectivity. 
If you haven't finished rolling out MAPS yet, you must have started the rollout and have a known delivery date. +- Signing the Operator Connect agreement. +- Providing someone as a single point of contact to assist us in collecting information and coordinating your resources. This person must have the authority to review and approve deliverables, and otherwise ensure that these responsibilities are carried out. +- Completing the onboarding form after we've supplied it. +- Providing test numbers and working with your onboarding team to run the test plan, including testing from your network to find call flow integration issues. +- Providing timely responses to questions, issues, and dependencies to ensure the project finishes on time. +- Configuring your Operator Connect and Azure environments as described in [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md), [Deploy Azure Communications Gateway](deploy.md), and [Prepare for live traffic with Azure Communications Gateway](prepare-for-live-traffic.md). +- Ensuring that your network is compliant with the Microsoft Teams _Network Connectivity Specification_ and _Operational Excellence Specification_, and any other specifications provided by Microsoft Teams. +- Ensuring that your network engineers watch the training that your onboarding team provides. ++## Alternatives to the Basic Integration Included Benefit ++If you're not eligible for the Basic Integration Included Benefit (because you're deploying Teams Phone Mobile or you don't meet the [eligibility requirements](#eligibility-for-the-basic-integration-included-benefit)), you must arrange onboarding separately. You can: ++- Contact your Microsoft sales representative to arrange onboarding through Microsoft. +- Find your own onboarding partner. ++## Next steps ++- [Review the reliability requirements for Azure Communications Gateway](reliability-communications-gateway.md).
+- [Review the interoperability function of Azure Communications Gateway](interoperability.md). +- [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md). |
communications-gateway | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/overview.md | + + Title: What is Azure Communications Gateway? +description: Azure Communications Gateway provides telecoms operators with the capabilities and network functions required to connect their network to Microsoft Teams through the Operator Connect program. ++++ Last updated : 12/14/2022++++# What is Azure Communications Gateway? ++Azure Communications Gateway enables Microsoft Teams calling through the Operator Connect and Teams Phone Mobile programs for your telecommunications network. Azure Communications Gateway is certified as part of the Operator Connect Accelerator program. It provides Voice and IT integration with Microsoft Teams across both fixed and mobile networks. ++> [!IMPORTANT] +> You must sign an Operator Connect or Teams Phone Mobile agreement with Microsoft to use this service. ++_Diagram that shows how Azure Communications Gateway connects to the Microsoft Phone System and to your fixed and mobile networks. Microsoft Teams clients connect to the Microsoft Phone System. Your fixed network connects to PSTN endpoints. Your mobile network connects to Teams Phone Mobile users._ ++Azure Communications Gateway provides advanced SIP, RTP, and HTTP interoperability functions (including Teams Certified SBC function) so that you can integrate with Operator Connect and Teams Phone Mobile quickly, reliably, and securely. As part of Microsoft Azure, the network elements in Azure Communications Gateway are fully managed and include an availability SLA. This full management simplifies network operations integration and accelerates the timeline for adding new network functions into production. ++## Architecture ++Azure Communications Gateway acts as the edge of your network, ensuring compliance with the requirements of the Operator Connect and Teams Phone Mobile programs.
+++To ensure availability, Azure Communications Gateway is deployed into two Azure Regions within a given Geography. It supports both active-active and primary-backup geographic redundancy models to fit with your network design. ++Connectivity between your network and Azure Communications Gateway must meet the Microsoft Teams _Network Connectivity Specification_. Azure Communications Gateway supports Microsoft Azure Peering Service (MAPS) for connectivity to on-premises environments, in line with this specification. ++The sites in your network must have cross-connects between them. You must also set up your routing so that each site in your deployment can route to both Azure Regions. ++Traffic from all enterprises shares a single SIP trunk, using a multi-tenant format. This multi-tenant format ensures the solution is suitable for both the SMB and Enterprise markets. ++> [!IMPORTANT] +> Azure Communications Gateway doesn't store/process any data outside of the Azure Regions where you deploy it. ++## Voice features ++Azure Communications Gateway supports the SIP and RTP requirements for Teams Certified SBCs. It can transform call flows to suit your network with minimal disruption to existing infrastructure. Its voice features include: ++- **Optional direct peering to Emergency Routing Service Providers (US only)** - If your network can't transmit Emergency location information in PIDF-LO (Presence Information Data Format Location Object) SIP bodies, Azure Communications Gateway can connect directly to your chosen Teams-certified Emergency Routing Service Provider (ERSP) instead. See [Emergency calling with Azure Communications Gateway](emergency-calling.md). +- **Voice interworking** - Azure Communications Gateway can resolve interoperability issues between your network and Microsoft Teams. 
Its position on the edge of your network reduces disruption to your networks, especially in complex scenarios like Teams Phone Mobile where Teams Phone System is the call control element. Azure Communications Gateway includes powerful interworking features, for example: ++ - 100rel and early media inter-working + - Downstream call forking with codec changes + - Custom SIP header and SDP manipulation + - DTMF (Dual-Tone Multi-Frequency tones) interworking between inband tones, RFC2833 telephone event and SIP INFO/NOTIFY signaling + - Payload type interworking + - Media transcoding + - Ringback injection ++## API features ++Azure Communications Gateway includes optional API integration features. These features can help you to: ++- Adapt your existing systems to meet the requirements of the Operator Connect and Teams Phone Mobile programs with minimal disruption. +- Provide a consistent look and feel across your Operator Connect and Teams Phone Mobile offerings and the rest of your portfolio. +- Speed up your rollout and monetization of Teams Calling support. ++### CallDuration upload ++The Operator Connect specifications require the Call Duration Records (CDRs) produced by Microsoft Teams to match billing information from your network. You must therefore push call duration data into the Microsoft Teams environment. Azure Communications Gateway pushes this data for you and supports customizable rounding of call duration figures to match your billing systems. ++### API Bridge Number Management Portal ++Operator Connect and Teams Phone Mobile require API integration between your IT systems and Microsoft Teams for flow-through provisioning and automation. After your deployment has been certified and launched, you must not use the Operator Connect portal for provisioning. You can use Azure Communications Gateway's Number Management Portal instead. 
This portal enables you to pass the certification process and sell Operator Connect or Teams Phone Mobile services while you carry out a custom API integration project. ++The Number Management Portal is available as part of the optional API Bridge feature. ++> [!TIP] +> The API Bridge Number Management Portal does not allow your enterprise customers to manage Teams Calling. For example, it does not provide self-service portals. ++### API mediation ++Azure Communications Gateway's API Bridge feature includes a flexible custom interface to the Operator Connect APIs. Microsoft Professional Services can create REST or SOAP APIs that adapt the Teams Operator Connect API to your network's API requirements. These custom APIs can reduce the size of an IT integration project by reducing the changes required in your existing infrastructure. ++The API mediation function is designed to map between CRM and BSS systems in your network and the Teams Operator Connect API. Your CRM and BSS systems must be able to handle the information required by Teams Operator Connect. You must work with Microsoft to determine whether you can use the API mediation feature and to scope the project. ++## Next steps ++- [Learn how Azure Communications Gateway fits into your network](interoperability.md). +- [Learn about onboarding to Microsoft Teams and Azure Communications Gateway's Basic Integration Included Benefit](onboarding.md). +- [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md). |
communications-gateway | Plan And Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/plan-and-manage-costs.md | + + Title: Plan and manage costs for Azure Communications Gateway +description: Learn how to plan for and manage costs for Azure Communications Gateway by using cost analysis in the Azure portal. +++++ Last updated : 12/06/2022+++# Plan and manage costs for Azure Communications Gateway ++This article describes how you plan for and manage costs for Azure Communications Gateway. ++After you've started using Azure Communications Gateway, you can use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. ++Costs for Azure Communications Gateway are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure Communications Gateway, you're billed for all Azure services and resources used in your Azure subscription. This billing includes third-party services. ++## Prerequisites ++Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account. For information about assigning access to Azure Cost Management data, see [Assign access to data](../cost-management/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). ++## Understand the full billing model for Azure Communications Gateway ++Azure Communications Gateway runs on Azure infrastructure that accrues costs when you deploy new resources. 
It's important to understand that other infrastructure costs might also accrue. ++### How you're charged for Azure Communications Gateway ++When you deploy or use Azure Communications Gateway, you'll be charged for your use of the voice features of the product. Charges are calculated by a series of SBC User meters, based on the number of users assigned to the platform. The meters include: ++- A "service availability" meter that is charged hourly and includes the use of 999 users for testing and early adoption. +- A per-user meter that charges based on the number of users that are assigned to the deployment. This per-user fee is calculated from the maximum number of users during your billing cycle, excluding the initial 999 users included in the service availability fee. ++If you choose to deploy the API Bridge (for API mediation or the API Bridge Number Management Portal), you'll also be charged for your API Bridge usage. Fees for API Bridge work in the same way as the SBC User meters: a service availability meter and a per-user meter. The number of users charged for the API Bridge is always the same as the number of users charged on the SBC User meters. ++> [!NOTE] +> A user is any telephone number that meets all the following criteria. +> +> - You have provisioned the number within your Operator Connect or Teams Phone Mobile environment. +> - The number is configured for connectivity through Azure Communications Gateway. +> - The number's status is "assigned" in the Operator Connect environment. This includes (but is not limited to) assignment to users, Conferencing bridges, Voice Applications and Third Party applications. +> +> Azure Communications Gateway does not charge for Telephone Numbers (TNs) that are not "assigned" in the Operator Connect environment. ++At the end of your billing cycle, the charges for each meter are summed. Your bill or invoice shows a section for all Azure Communications Gateway costs.
There's a separate line item for each meter. ++If you've arranged any custom work with Microsoft, you might be charged an extra fee for that work. That fee isn't included in these meters. ++If your Azure subscription has a spending limit, Azure prevents you from spending over your credit amount. As you create and use Azure resources, your credits are used. When you reach your credit limit, the resources that you deployed are disabled for the rest of that billing period. You can't change your credit limit, but you can remove it. For more information about spending limits, see [Azure spending limit](../cost-management-billing/manage/spending-limit.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). ++### Other costs that might accrue with Azure Communications Gateway ++You'll need to pay for Azure networking costs, because these costs aren't included in the Azure Communications Gateway meters. ++- If you're connecting to the public internet with Microsoft Azure Peering Service (MAPS), you might need to pay a third party for the cross-connect at the exchange location. +- If you're connecting into Azure as a next hop, you might need to pay vNet peering costs. ++### Costs if you cancel or change your deployment ++If you cancel Azure Communications Gateway, your final bill or invoice will only include charges on service-availability meters for the part of the billing cycle before you cancel. Per-user meters charge for the entire billing cycle. ++You'll need to remove any networking resources that you set up for Azure Communications Gateway. For example, if you're connecting into Azure as a next hop, you'll need to remove the vNet peering. Otherwise, you'll still be charged for those networking resources. ++If you have multiple Azure Communications Gateway deployments and you move users between deployments, these users will count towards meters in both deployments. 
This double counting only applies to the billing cycle in which you move the subscribers; in the next billing cycle, the users will only count towards meters in their new deployment. ++### Using Azure Prepayment with Azure Communications Gateway ++You can pay for Azure Communications Gateway charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those from the Azure Marketplace. ++## Monitor costs ++When you deploy and use Azure Communications Gateway, you incur costs. You can see these costs in [cost analysis](../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). ++When you use cost analysis, you view Azure Communications Gateway costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded. ++To view Azure Communications Gateway costs in cost analysis: ++1. Sign in to the Azure portal. +2. Open the scope in the Azure portal and select **Cost analysis** in the menu. For example, go to **Subscriptions**, select a subscription from the list, and then select **Cost analysis** in the menu. Select **Scope** to switch to a different scope in cost analysis. +3. By default, cost for services are shown in the first donut chart. Select the area in the chart labeled Azure Communications Gateway. ++Actual monthly costs are shown when you initially open cost analysis. ++To narrow costs for a single service, like Azure Communications Gateway, select **Add filter** and then select **Service name**. Then, select **Azure Communications Gateway**. From here, you can explore costs on your own. 
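++As an illustration of the meter behavior described under "How you're charged for Azure Communications Gateway", the following Python sketch computes the per-user meter for one billing cycle: the fee is based on the maximum number of assigned users during the cycle, minus the 999 users covered by the service availability meter. The daily counts below are invented for illustration, and `included_users` is a parameter name chosen here, not an official setting: ++

```python
def billable_users(daily_assigned_counts, included_users=999):
    """Users charged on the per-user meter for one billing cycle.

    The per-user fee is calculated from the maximum number of assigned
    users during the cycle, excluding the users covered by the hourly
    service availability meter.
    """
    peak = max(daily_assigned_counts)
    return max(0, peak - included_users)

# Hypothetical daily counts of assigned numbers during one cycle.
print(billable_users([800, 950, 1200, 1100]))  # peak 1200 -> 201 billable
print(billable_users([500, 700]))              # under 999 -> 0 billable
```

++This is only a sketch of the stated rule, not a reproduction of Azure billing logic; use cost analysis and your invoice for actual charges.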
++## Create budgets ++You can create [budgets](../cost-management/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy. ++Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more information about the filter options available when you create a budget, see [Group and filter options](../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). ++## Export cost data ++You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. Exporting cost data is helpful when you or others need to do further data analysis. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets. ++## Next steps ++- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). 
+- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). +- Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). +- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course. |
communications-gateway | Prepare For Live Traffic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-for-live-traffic.md | + + Title: Prepare for live traffic with Azure Communications Gateway +description: After deploying Azure Communications Gateway, you and your onboarding team must carry out further integration work before you can launch your service. ++++ Last updated : 12/14/2022+++# Prepare for live traffic with Azure Communications Gateway ++Before you can launch your Operator Connect or Teams Phone Mobile service, you and your onboarding team must: ++- Integrate Azure Communications Gateway with your network. +- Test your service. +- Prepare for launch. ++In this article, you learn about the steps you and your onboarding team must take. ++> [!TIP] +> In many cases, your onboarding team is from Microsoft, provided through the [Basic Integration Included Benefit](onboarding.md) or through a separate arrangement. ++## Prerequisites ++- You must have [deployed Azure Communications Gateway](deploy.md) using the Microsoft Azure portal. +- You must have [chosen some test numbers](prepare-to-deploy.md#prerequisites). +- You must have a tenant you can use for testing (representing an enterprise customer), and some users in that tenant to whom you can assign the test numbers. +- You must have access to the: + - [Operator Connect portal](https://operatorconnect.microsoft.com/). + - [Teams Admin Center](https://admin.teams.microsoft.com/) for your test tenant. +- You must be able to manage users in your test tenant. ++## Methods ++In some parts of this article, the steps you must take depend on whether your deployment includes the API Bridge. This article provides instructions for both types of deployment. Choose the appropriate instructions. ++## 1. Connect Azure Communications Gateway to your networks ++1. 
Configure your infrastructure to meet the call routing requirements described in [Reliability in Azure Communications Gateway](reliability-communications-gateway.md). +1. Configure your network devices to send and receive traffic from Azure Communications Gateway. You might need to configure SBCs, softswitches and access control lists (ACLs). +1. Configure your routers and peering connection to ensure all traffic to Azure Communications Gateway is through Azure Internet Peering for Communications Services (also known as MAPS for Voice). +1. Enable Bidirectional Forwarding Detection (BFD) on your on-premises edge routers to speed up link failure detection. + - The interval must be 150 ms (or 300 ms if you can't use 150 ms). + - With MAPS, BFD must bring up the BGP peer for each Private Network Interface (PNI). +1. Meet any other requirements in the _Network Connectivity Specification_ for Operator Connect or Teams Phone Mobile. ++## 2. Ask your onboarding team to register your test enterprise tenant ++Your onboarding team must register the test enterprise tenant that you chose in [Prerequisites](#prerequisites) with Microsoft Teams. ++1. Provide your onboarding contact with: + - Your company's name. + - Your company's ID ("Operator ID"). + - The ID of the tenant to use for testing. +2. Wait for your onboarding team to confirm that your test tenant has been registered. ++## 3. Assign numbers to test users in your tenant ++1. Ask your onboarding team for the name of the Calling Profile that you must use for these test numbers. The name has the suffix `azcog`. This Calling Profile has been created for you during the Azure Communications Gateway deployment process. +1. In your test tenant, request service from your company. + 1. Sign in to the [Teams Admin Center](https://admin.teams.microsoft.com/) for your test tenant. + 1. Select **Voice** > **Operators**. + 1. Select your company in the list of operators, fill in the form and select **Add as my operator**. +1. 
In your test tenant, create some test users (if you don't already have suitable users). These users must be licensed for Teams Phone System and in Teams Only mode. +1. Configure emergency locations in your test tenant. +1. Upload numbers in the API Bridge Number Management Portal (if you deployed the API Bridge) or the Operator Connect Operator Portal. Use the Calling Profile that you obtained from your onboarding team. ++ # [API Bridge Number Management Portal](#tab/api-bridge) ++ 1. Open the API Bridge Number Management Portal from your list of Azure resources. + 1. Select **Go to Consents**. + 1. Select your test tenant. + 1. From the menu, select **Update Relationship Status**. Set the status to **Agreement signed**. + 1. From the menu, select **Manage Numbers**. + 1. Select **Upload numbers**. + 1. Fill in the fields as required, and then select **Review + upload** and **Upload**. ++ # [Operator Portal](#tab/no-api-bridge) ++ 1. Open the Operator Portal. + 1. Select **Customer Consents**. + 1. Select your test tenant. + 1. Select **Update Relationship**. Set the status to **Agreement signed**. + 1. Select the link for your test tenant. The link opens **Number Management** > **Manage by Tenant**. + 1. Select **Upload Numbers**. + 1. Fill in the fields as required, and then select **Submit**. ++ +1. In your test tenant, assign these numbers to your test users. + 1. Sign in to the Teams Admin Center for your test tenant. + 1. Select **Voice** > **Phone numbers**. + 1. Select a number, then select **Edit**. + 1. Assign the number to a user. + 1. Repeat for all your test users. ++## 4. Carry out integration testing and request changes ++Network integration includes identifying SIP interoperability requirements and configuring devices to meet these requirements. For example, this process often includes interworking header formats and/or the signaling & media flows used for call hold and session refresh. ++You must test typical call flows for your network. 
Your onboarding team will provide an example test plan that we recommend you follow. Your test plan should include call flow, failover, and connectivity testing. ++- If you decide that you need changes to Azure Communications Gateway, ask your onboarding team. Microsoft will make the changes for you. +- If you need changes to the configuration of devices in your core network, you must make those changes. ++## 5. Run a connectivity test and upload proof ++Before you can launch, Microsoft Teams requires proof that your network is properly connected to Microsoft's network. ++1. Provide your onboarding team with proof that BFD is enabled. You enabled BFD in [1. Connect Azure Communications Gateway to your networks](#1-connect-azure-communications-gateway-to-your-networks). For example, if you have a Cisco router, you can provide configuration similar to the following. ++ ```text + interface TenGigabitEthernet2/0/0.150 + description private peering to Azure + encapsulation dot1Q 15 second-dot1q 150 + ip vrf forwarding 15 + ip address 192.168.15.17 255.255.255.252 + bfd interval 150 min_rx 150 multiplier 3 ++ router bgp 65020 + address-family ipv4 vrf 15 + network 10.1.15.0 mask 255.255.255.128 + neighbor 192.168.15.18 remote-as 12076 + neighbor 192.168.15.18 fall-over bfd + neighbor 192.168.15.18 activate + neighbor 192.168.15.18 soft-reconfiguration inbound + exit-address-family + ``` ++1. Test failover of the MAPS connections to your network. Your onboarding team will work with you to plan this testing and gather the required evidence. +1. Work with your onboarding team to validate emergency call handling. ++## 6. Get your go-to-market resources approved ++Before you can go live, you must get your customer-facing materials approved by Microsoft Teams. Provide the following to your onboarding team for review. 
++- Press releases and other marketing material +- Content for your landing page +- Logo for the Microsoft Teams Operator Directory (200 px by 200 px) +- Logo for the Microsoft Teams Admin Center (170 px by 90 px) ++## 7. Test raising a ticket ++You must test that you can raise tickets in the Azure portal to report problems with Azure Communications Gateway. See [Get support or request changes for Azure Communications Gateway](request-changes.md). ++## 8. Learn about monitoring Azure Communications Gateway ++Your staff can use a selection of key metrics to monitor Azure Communications Gateway. These metrics are available to anyone with the Reader role on the subscription for Azure Communications Gateway. See [Monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md). ++## 9. Verify API integration ++Your onboarding team must provide Microsoft with proof that you have integrated with the Microsoft Teams Operator Connect API for provisioning. ++# [API Bridge](#tab/api-bridge) ++If you have the API Bridge, your onboarding team can obtain proof automatically. You don't need to do anything. ++# [Without the API Bridge](#tab/no-api-bridge) ++If you don't have the API Bridge, you must provide your onboarding team with proof that you have made successful API calls for: ++- Partner consent +- TN Upload to Account +- Unassign TN +- Release TN ++++## 10. Arrange synthetic testing ++Your onboarding team must arrange synthetic testing of your deployment. This synthetic testing is a series of automated tests lasting at least seven days. It verifies the most important metrics for quality of service and availability. ++## 11. Schedule launch ++Your launch date is the date that you'll appear to enterprises in the Teams Admin Center. Your onboarding team must arrange this date by making a request to Microsoft Teams. ++Your service can be launched on specific dates each month. 
Your onboarding team must submit the request at least two weeks before your preferred launch date. ++## Next steps ++- Wait for your launch date. +- Learn about [getting support and requesting changes for Azure Communications Gateway](request-changes.md). +- Learn about [monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md). +- Learn about [planning and managing costs for Azure Communications Gateway](plan-and-manage-costs.md). |
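The BFD settings required in step 1 of this article translate directly into failure detection time: BFD declares a session down after `multiplier` consecutive missed control packets, so the worst-case detection time is interval × multiplier. A minimal sketch (the function name is illustrative, not part of any Azure interface):

```python
# Sketch: worst-case BFD failure detection time. BFD declares a session
# down after `multiplier` consecutive missed control packets, so the
# detection time is interval * multiplier.
def bfd_detection_time_ms(interval_ms, multiplier=3):
    return interval_ms * multiplier

# The required 150 ms interval with the multiplier of 3 shown in the
# Cisco example gives 450 ms detection; the 300 ms fallback gives 900 ms.
```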
communications-gateway | Prepare To Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md | + + Title: Prepare to deploy Azure Communications Gateway +description: Learn how to complete the prerequisite tasks required to deploy Azure Communications Gateway in Azure. ++++ Last updated : 01/10/2022+++# Prepare to deploy Azure Communications Gateway ++This article will guide you through each of the tasks you need to complete before you can deploy Azure Communications Gateway. To be deployed successfully, Azure Communications Gateway depends on the state of your Operator Connect or Teams Phone Mobile environments. +The following sections describe the information you'll need to collect and the decisions you'll need to make prior to deploying Azure Communications Gateway. ++## Prerequisites ++You must have signed an Operator Connect agreement with Microsoft. For more information, see [Operator Connect](https://cloudpartners.transform.microsoft.com/practices/microsoft-365-for-operators/connect). ++You'll need an onboarding partner for integrating with Microsoft Phone System. If you're not eligible for onboarding to Microsoft Teams through Azure Communications Gateway's [Basic Integration Included Benefit](onboarding.md) or haven't arranged alternative onboarding with Microsoft separately, you'll need to arrange an onboarding partner yourself. ++You must have two or more globally routable numbers that you own. Your onboarding team requires these numbers to configure test lines. ++We strongly recommend that all operators have a support plan that includes technical support, such as a **Microsoft Unified** or **Premier** support plan. For more information, see [Compare support plans](https://azure.microsoft.com/support/plans/). ++## 1.
Configure Azure Active Directory in Operator Azure tenancy ++> [!NOTE] +> This step is required to set you up as an Operator in the Teams Phone Mobile (TPM) and Operator Connect (OC) environments. Skip steps 1 and 2 if you have already onboarded to TPM or OC. ++Operator Connect and Teams Phone Mobile inherit permissions and identities from the Azure Active Directory within the Azure tenant where the Project Synergy app is configured. As such, performing this step within an existing Azure tenant uses your existing identities for fully integrated authentication and is recommended. However, if you need to manage identities for Operator Connect separately from the rest of your organization, complete the following steps in a new dedicated tenant. ++1. Sign in to the [Azure portal](https://ms.portal.azure.com/) as an Azure Active Directory Global Admin. +1. Select **Azure Active Directory**. +1. Select **Properties**. +1. Scroll down to the Tenant ID field. Your tenant ID will be in the box. Make a note of your tenant ID. +1. Open PowerShell. +1. If you don't have the Azure Active Directory PowerShell module installed, install it by running the cmdlet: + ```azurepowershell + Install-Module AzureAD + ``` +1. Run the following cmdlet, replacing *`<AADTenantID>`* with the tenant ID you noted down in step 4. + ```azurepowershell + Connect-AzureAD -TenantId "<AADTenantID>" + New-AzureADServicePrincipal -AppId eb63d611-525e-4a31-abd7-0cb33f679599 -DisplayName "Operator Connect" + ``` ++## 2. Allow the Project Synergy application ++Project Synergy allows Operator Connect to access your Azure Active Directory. It's required to allow configuration of Operator Connect or Teams Phone Mobile and to assign users and groups to app-roles for your application. ++1. In your Azure portal, navigate to **Enterprise applications** using the left-hand menu. Alternatively, you can search for it in the search bar; it will appear under the **Services** subheading. +1.
Set the **Application type** filter to **All applications** using the drop-down menu. +1. Select **Apply**. +1. Search for **Project Synergy** using the search bar. The application should appear. +1. Select your **Project Synergy** application. +1. Select **Users and groups** from the left hand side menu. +1. Select **Add user/group**. +1. Specify the user you want to use for setting up Azure Communications Gateway and assign them the **Admin** role. ++## 3. Create an App registration to provide Azure Communications Gateway access to the Operator Connect API ++You must create an App registration to enable Azure Communications Gateway to function correctly. The App registration provides Azure Communications Gateway with access to the Operator Connect API on your behalf. The App registration **must** be created in **your** tenant. ++### 3.1 Create an App registration ++Use the following steps to create an App registration for Azure Communications Gateway: ++1. Navigate to **App registrations** in the Azure portal (select **Azure Active Directory** and then in the left-hand menu, select **App registrations**). Alternatively, you can search for it with the search bar: it will appear under the **Services** subheading. +1. Select **New registration**. +1. Enter an appropriate **Name**. For example: **Azure Communications Gateway service**. +1. Don't change any settings (leaving everything as default). This means: + - **Supported account types** should be set as **Accounts in this organizational directory only**. + - Leave the **Redirect URI** and **Service Tree ID** empty. +1. Select **Register**. ++### 3.2 Configure permissions ++For the App registration that you created in [3.1 Create an App registration](#31-create-an-app-registration): ++1. Navigate to **App registrations** in the Azure portal (select **Azure Active Directory** and then in the left-hand menu, select **App registrations**). 
Alternatively, you can search for it with the search bar: it will appear under the **Services** subheading. +1. Select the App registration. +1. Select **API permissions**. +1. Select **Add a permission**. +1. Select **APIs my organization uses**. +1. Enter **Project Synergy** in the filter box. +1. Select **Project Synergy**. +1. Select/deselect checkboxes until only the required permissions are selected. The required permissions are: + - Data.Write + - Data.Read + - NumberManagement.Read + - TrunkManagement.Read +1. Select **Add permissions**. +1. Select **Grant admin consent** for ***\<YourTenantName\>***. +1. Select **Yes** to confirm. +++### 3.3 Add the application ID to the Operator Connect Portal ++You must add the application ID to your Operator Connect environment. This step allows Azure Communications Gateway to use the Operator Connect API. ++1. Navigate to **App registrations** in the Azure portal (select **Azure Active Directory** and then in the left-hand menu, select **App registrations**). Alternatively, you can search for it with the search bar: it will appear under the **Services** subheading. +1. Copy the **Application (client) ID** from the Overview page of your new App registration. +1. Log into the [Operator Connect Number Management Portal](https://operatorconnect.microsoft.com/operator/configuration) and add a new **Application Id**, pasting in the value you copied. ++## 4. Create and store secrets ++You must create an Azure secret and allow the App registration to access this secret. This integration allows Azure Communications Gateway to access the Operator Connect API. ++This step guides you through creating a Key Vault to store a secret for the App registration, creating the secret and allowing the App registration to use the secret. ++### 4.1 Create a Key Vault ++The App registration you created in [3. 
Create an App registration to provide Azure Communications Gateway access to the Operator Connect API](#3-create-an-app-registration-to-provide-azure-communications-gateway-access-to-the-operator-connect-api) requires a dedicated Key Vault. The Key Vault is used to store the secret name and secret value (created in the next steps) for the App registration. ++1. Create a Key Vault. Follow the steps in [Create a Vault](/azure/key-vault/general/quick-create-portal). +1. Provide your onboarding team with the ResourceID and the Vault URI of your Key Vault. +1. Your onboarding team will use the ResourceID to request a private endpoint. That request triggers two approval requests to appear in the Key Vault. +1. Approve these requests. ++### 4.2 Create a secret ++You must create a secret for the App registration while preparing to deploy Azure Communications Gateway and then regularly rotate this secret. ++We recommend you rotate your secrets at least every 70 days for security. For instructions on how to rotate secrets, see [Rotate your Azure Communications Gateway secrets](rotate-secrets.md). ++1. Navigate to **App registrations** in the Azure portal (select **Azure Active Directory** and then in the left-hand menu, select **App registrations**). Alternatively, you can search for it with the search bar: it will appear under the **Services** subheading. +1. Select **Certificates & secrets**. +1. Select **New client secret**. +1. Enter a name for the secret (we suggest that the name should include the date on which the secret is created). +1. Copy or note down the value of the new secret (you won't be able to retrieve it later). +++### 4.3 Grant Admin Consent to Azure Communications Gateway ++To enable the Azure Communications Gateway service to access the Key Vault, you must grant Admin Consent to the App registration. ++1. Request the Admin Consent URL from your onboarding team. +1. Follow the link.
A pop-up window appears that contains the **Application Name** of the Registered Application. Note down this name. ++### 4.4 Grant your application Key Vault Access ++This step must be performed in your tenant. It gives Azure Communications Gateway the ability to read the Operator Connect secrets from your tenant. ++1. Navigate to the Key Vault in the Azure portal. If you can't locate it, search for Key Vault in the search bar, select **Key vaults** from the results, and select your Key Vault. +1. Select **Access Policies** on the left-hand menu. +1. Select **Create**. +1. Select **Get** from the secret permissions column. +1. Select **Next**. +1. Search for the Application Name of the Registered Application created by the Admin Consent process (which you noted down in the previous step), and select the name. +1. Select **Next**. +1. Select **Next** again to skip the **Application** tab. +1. Select **Create**. ++## 5. Create a network design ++Ensure your network is set up as shown in the following diagram and has been configured in accordance with the *Network Connectivity Specification* you've been issued. You must have two Azure Regions with cross-connect functionality. For more details on the reliability design for Azure Communications Gateway, see [Reliability in Azure Communications Gateway](reliability-communications-gateway.md). ++To configure MAPS, follow the instructions in [Azure Internet peering for Communications Services walkthrough](/azure/internet-peering/walkthrough-communications-services-partner). + :::image type="content" source="media/azure-communications-gateway-redundancy.png" alt-text="Network diagram of an Azure Communications Gateway that uses MAPS as its peering service between Azure and an operator's network."::: ++## 6. Collect basic information for deploying an Azure Communications Gateway ++ Collect all of the values in the following table for the Azure Communications Gateway resource.
++|**Value**|**Field name(s) in Azure portal**| + ||| + |The Azure subscription to use to create an Azure Communications Gateway resource. You must use the same subscription for all resources in your Azure Communications Gateway deployment. |**Project details: Subscription**| + |The Azure resource group in which to create the Azure Communications Gateway resource. |**Project details: Resource group**| + |The name for the deployment. |**Instance details: Name**| + |The management Azure region: the region in which your monitoring and billing data is processed. We recommend that you select a region near or co-located with the two regions that will be used for handling call traffic. |**Instance details: Region** + |The voice codecs that Azure Communications Gateway will be able to support when communicating with your network. |**Instance details: Supported Codecs**| + |The Unified Communications as a Service (UCaaS) platform(s) Azure Communications Gateway will support. These platforms are Teams Phone Mobile and Operator Connect Mobile. |**Instance details: Supported Voice Platforms**| + |Whether your Azure Communications Gateway resource should handle emergency calls as standard calls or directly route them to the Emergency Services Routing Proxy (US only). |**Instance details: Emergency call handling**| + |The scope at which the auto-generated domain name label is unique. Communications Gateway resources get assigned an auto-generated label which depends on the name of the resource. Selecting **Tenant** will give a resource with the same name in the same tenant but a different subscription the same auto-generated label. Selecting **Subscription** will give a resource with the same name in the same subscription but a different resource group the same auto-generated label. Selecting **Resource Group** will give a resource with the same name in the same resource group the same auto-generated label. 
Selecting **No Re-use** means the auto-generated label does not depend on the name, resource group, subscription or tenant. |**Instance details: Auto-generated Domain Name Scope**| + |The number used in Teams Phone Mobile to access the Voicemail Interactive Voice Response (IVR) from native dialers.|**Instance details: Teams Voicemail Pilot Number**| + |A list of dial strings used for emergency calling.|**Instance details: Emergency Dial Strings**| + |Whether an on-premises Mobile Control Point is in use.|**Instance details: Enable on-premises MCP functionality**| ++++## 7. Collect Service Regions configuration values ++Collect all of the values in the following table for both service regions in which Azure Communications Gateway will run. ++ |**Value**|**Field name(s) in Azure portal**| + ||| + |The Azure regions that will handle call traffic. |**Service Region One/Two: Region**| + |The IPv4 address used by Microsoft Teams to contact your network from this region. |**Service Region One/Two**| + |The set of IP addresses/ranges that are permitted as sources for signaling traffic from your network. Provide an IPv4 address range using CIDR notation (for example, 192.0.2.0/24) or an IPv4 address (for example, 192.0.2.0). You can also provide a comma-separated list of IPv4 addresses and/or address ranges|**Service Region One/Two: Allowed Signaling Source IP Addresses/CIDR Ranges**| + |The set of IP addresses/ranges that are permitted as sources for media traffic from your network. Provide an IPv4 address range using CIDR notation (for example, 192.0.2.0/24) or an IPv4 address (for example, 192.0.2.0). You can also provide a comma-separated list of IPv4 addresses and/or address ranges|**Service Region One/Two: Allowed Media Source IP Address/CIDR Ranges**| ++## 8. Collect Test Lines configuration values ++Collect all of the values in the following table for all test lines you want to configure for Azure Communications Gateway. You must configure at least one test line. 
++ |**Value**|**Field name(s) in Azure portal**| + ||| + |The name of the test line. |**Name**| + |The phone number of the test line. |**Phone Number**| + |Whether the test line is manual or automated: **Manual** test lines will be used by you and Microsoft staff to make test calls during integration testing. **Automated** test lines will be assigned to Microsoft Teams test suites for validation testing. |**Testing purpose**| ++## 9. Decide if you want tags ++Resource naming and tagging is useful for resource management. It enables your organization to locate and keep track of resources associated with specific teams or workloads and also enables you to more accurately track the consumption of cloud resources by business area and team. ++If you believe tagging would be useful for your organization, design your naming and tagging conventions following the information in the [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/). ++## 10. Get access to Azure Communications Gateway for your Azure subscription ++Access to Azure Communications Gateway is restricted. When you've completed the other steps in this article, contact your onboarding team and ask them to enable your subscription. If you don't already have an onboarding team, contact azcog-enablement@microsoft.com with your Azure subscription ID and contact details. ++## Next steps ++- [Create an Azure Communications Gateway resource](deploy.md) |
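Before entering the allowed signaling and media source addresses collected in section 7, it can help to validate the comma-separated mix of IPv4 addresses and CIDR ranges. A minimal Python sketch using only the standard library (the helper name is illustrative, not part of the portal or any Azure SDK):

```python
import ipaddress

def parse_source_ranges(value):
    """Validate a comma-separated list of IPv4 addresses and/or CIDR
    ranges, as accepted by the allowed source IP fields.

    Raises ValueError on anything that isn't valid IPv4."""
    networks = []
    for item in value.split(","):
        network = ipaddress.ip_network(item.strip(), strict=False)
        if network.version != 4:
            raise ValueError(f"IPv4 required, got: {item.strip()}")
        networks.append(network)
    return networks
```

A bare address such as `192.0.2.0` is treated as a single-host `/32` range.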
communications-gateway | Provision User Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/provision-user-roles.md | + + Title: Set up user roles for Azure Communications Gateway +description: Learn how to configure the user roles required to deploy, manage and monitor your Azure Communications Gateway ++++ Last updated : 12/15/2022 +++# Set up user roles for Azure Communications Gateway ++This article will guide you through how to configure the permissions required for operators in your organization to: ++- Deploy Azure Communications Gateway through the portal +- Raise customer support requests (support tickets) +- Monitor Azure Communications Gateway +- Rotate secrets for Azure Communications Gateway +- Use the API Bridge Number Management Portal for provisioning ++## Prerequisites ++Familiarize yourself with the Azure user roles relevant to Azure Communications Gateway by reading [Classic subscription administrator roles, Azure roles, and Azure AD roles](../role-based-access-control/rbac-and-directory-admin-roles.md). ++A list of all available defined Azure roles is available in [Azure built-in roles](../role-based-access-control/built-in-roles.md). ++## 1. Understand the user roles required for Azure Communications Gateway ++Your staff will need different user roles, depending on the tasks they need to carry out. 
++|Task | Required user roles or access | +||| +| Deploying Azure Communications Gateway |**Contributor** access to your subscription| +| Raising support requests |**Owner**, **Contributor** or **Support Request Contributor** access to your subscription or a custom role with `Microsoft.Support/*` access at the subscription level| +|Monitoring logs and metrics | **Reader** access to your subscription| +|Rotating secrets |**Storage Account Key Operator**, **Contributor** or **Owner** access to your subscription| +|Using the API Bridge Number Management Portal|**Reader** and **Writer** permissions for the Project Synergy enterprise application and permissions to the Azure portal for your subscription| ++## 2. Configure user roles ++You need to use the Azure portal to configure user roles. ++### 2.1 Prepare to assign a user role ++1. Read through [Steps to assign an Azure role](../role-based-access-control/role-assignments-steps.md) and ensure that you: + - Know who needs access. + - Know the appropriate user role or roles to assign them. + - Are signed in with a user that is assigned a role that has role assignments write permission, such as **Owner** or **User Access Administrator** for the subscription. +1. If you're managing access to the API Bridge Number Management Portal, ensure that you're signed in with a user that can change permissions for enterprise applications. For example, you could be a Global Administrator, Cloud Application Administrator or Application Administrator. For more information, see [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md). ++### 2.2 Assign a user role ++1. Follow the steps in [Assign a user role using the Azure portal](../role-based-access-control/role-assignments-portal.md) to assign the permissions you determined in [1. Understand the user roles required for Azure Communications Gateway](#1-understand-the-user-roles-required-for-azure-communications-gateway). +1. 
If you're managing access to the API Bridge Number Management Portal, follow [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md) to assign **Reader** and **Writer** permissions for the Project Synergy application. ++## Next steps ++- Learn how to remove access to the Azure Communications Gateway subscription by [removing Azure role assignments](../role-based-access-control/role-assignments-remove.md). |
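The task-to-role table above can be captured as a simple lookup, which is handy when auditing whether staff have the access they need. This is an illustrative sketch only, not an Azure API; custom roles and the Number Management Portal permissions are omitted:

```python
# Illustrative mapping of the tasks in the table above to acceptable
# built-in roles at subscription scope.
REQUIRED_ROLES = {
    "deploy": {"Contributor"},
    "raise-support-request": {"Owner", "Contributor",
                              "Support Request Contributor"},
    "monitor": {"Reader"},
    "rotate-secrets": {"Storage Account Key Operator", "Contributor",
                       "Owner"},
}

def can_perform(task, assigned_roles):
    """True if any assigned role satisfies the task's requirement."""
    return bool(REQUIRED_ROLES[task] & set(assigned_roles))
```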
communications-gateway | Reliability Communications Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/reliability-communications-gateway.md | + + Title: Reliability in Azure Communications Gateway +description: Find out about reliability in Azure Communications Gateway ++++++ - subject-reliability + - references_regions Last updated : 01/12/2023+++# What is reliability in Azure Communications Gateway? ++Azure Communications Gateway ensures your service is reliable by using Azure redundancy mechanisms and SIP-specific retry behavior. Your network must meet specific requirements to ensure service availability. ++## Azure Communications Gateway's redundancy model ++Each Azure Communications Gateway deployment consists of three separate regions: a Management Region and two Service Regions. This article describes the two different region types and their distinct redundancy models. It covers both regional reliability with availability zones and cross-region reliability with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview). ++ Diagram showing two operator sites and the Azure regions for Azure Communications Gateway. Azure Communications Gateway has two service regions and one management region. The service regions connect to the management region and to the operator sites. The management region can be co-located with a service region. ++## Service regions ++Service regions contain the voice and API infrastructure used for handling traffic between Microsoft Teams Phone System and your network. Each instance of Azure Communications Gateway consists of two service regions that are deployed in an active-active mode. This geo-redundancy is mandated by the Operator Connect and Teams Phone Mobile programs. Fast failover between the service regions is provided at the infrastructure/IP level and at the application (SIP/RTP/HTTP) level. 
++> [!TIP] +> You must always have two service regions, even if one of the service regions chosen is in a single-region Azure Geography (for example, Qatar). If you choose a single-region Azure Geography, choose a second Azure region in a different Azure Geography. ++These service regions are identical in operation and provide resiliency to both Zone and Regional failures. Each service region can carry 100% of the traffic using the Azure Communications Gateway instance. As such, end-users should still be able to make and receive calls successfully during any Zone or Regional downtime. ++### Call routing requirements ++Azure Communications Gateway offers a 'successful redial' redundancy model: calls handled by failing peers are terminated, but new calls are routed to healthy peers. This model mirrors the redundancy model provided by Microsoft Teams itself. ++We expect your network to have two geographically redundant sites. Each site should be paired with an Azure Communications Gateway region. The redundancy model relies on cross-connectivity between your network and Azure Communications Gateway service regions. ++ Diagram of two operator sites (operator site A and operator site B) and two service regions (service region A and service region B). Operator site A has a primary route to service region A and a secondary route to service region B. Operator site B has a primary route to service region B and a secondary route to service region A. ++Each Azure Communications Gateway service region provides an SRV record containing all SIP peers within the region. ++Each site in your network must: ++> [!div class="checklist"] +> - Send traffic to its local Azure Communications Gateway service region by default. +> - Locate Azure Communications Gateway peers within a region using DNS-SRV, as outlined in RFC 3263. +> - Make a DNS SRV lookup on the domain name for the service region, for example pstn-region1.xyz.commsgw.azure.example.com. 
+> - If the SRV lookup returns multiple targets, use the weight and priority of each target to select a single target. +> - Use SIP OPTIONS (or a combination of OPTIONS and SIP traffic) to monitor the availability of the Azure Communications Gateway peers. +> - Send new calls to available Azure Communications Gateway peers. +> - Retry INVITEs that received 408, 503, or 504 responses, or that received no response, by rerouting them to other available peers in the local site. Hunt to the second service region only if all peers in the local service region have failed. ++Your network must not retry calls that receive error responses other than 408, 503 and 504. ++The details of this routing behavior will be specific to your network. You must agree on them with your onboarding team during your integration project. +++## Management regions ++Management regions contain the infrastructure used for the ordering, monitoring and billing of Azure Communications Gateway. All infrastructure within these regions is deployed in a zonally redundant manner, meaning that all data is automatically replicated across each Availability Zone within the region. All critical configuration data is also replicated to each of the Service Regions to ensure the proper functioning of the service during an Azure region failure. ++## Availability zone support ++Azure availability zones have a minimum of three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. If a local zone fails, regional services, capacity, and high availability are supported by the other zones in the region. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services.
For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview). ++### Zone down experience for service regions ++During a zone-wide outage, calls handled by the affected zone are terminated, with a brief loss of capacity within the region until the service's self-healing rebalances underlying resources to healthy zones. This self-healing isn't dependent on zone restoration; it's expected that the Microsoft-managed service self-healing state will compensate for a lost zone, using capacity from other zones. Traffic-carrying resources are deployed in a zone-redundant manner, but at the lowest scale, traffic might be handled by a single resource. In this case, the failover mechanisms described in this article rebalance all traffic to the other service region while the resources that carry traffic are redeployed in a healthy zone. ++### Zone down experience for the management region ++ During a zone-wide outage, no action is required during zone recovery. The management region self-heals and rebalances itself to take advantage of the healthy zone automatically. ++## Disaster recovery: fallback to other regions ++This section describes the behavior of Azure Communications Gateway during a region-wide outage. ++### Disaster recovery: cross-region failover for service regions ++During a region-wide outage, the failover mechanisms described in this article (OPTIONS polling and SIP retry on failure) will rebalance all traffic to the other service region, maintaining availability. Microsoft will start restoring regional redundancy. Restoring regional redundancy during extended downtime might require using other Azure regions. If we need to migrate a failed region to another region, we'll consult you before starting any migrations.
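This failover relies on the retry rules from the call routing requirements: reroute INVITEs only on 408, 503, or 504 responses (or no response at all), exhaust peers in the local service region first, and hunt to the second region only when the local region is exhausted. A minimal sketch of that decision logic (names are illustrative, not part of any Azure interface):

```python
# Illustrative decision logic for the "successful redial" model.
RETRYABLE_RESPONSES = {408, 503, 504}

def next_peer(status, local_peers, remote_peers):
    """Return the peer to reroute an INVITE to, or None to fail the call.

    `status` is the SIP response code received, or None if the INVITE
    got no response. `local_peers` and `remote_peers` are the peers not
    yet tried in the local and second service regions.
    """
    if status is not None and status not in RETRYABLE_RESPONSES:
        return None  # must not retry other error responses
    if local_peers:
        return local_peers[0]  # exhaust the local region first
    if remote_peers:
        return remote_peers[0]  # then hunt to the second region
    return None
```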
++### Disaster recovery: cross-region failover for management regions ++Voice traffic and the API Bridge are unaffected by failures in the management region, because the corresponding Azure resources are hosted in service regions. Users of the API Bridge Number Management Portal might need to sign in again. ++Monitoring services might be temporarily unavailable until service has been restored. If the management region experiences extended downtime, Microsoft will migrate the impacted resources to another available region. ++## Choosing management and service regions ++A single deployment of Azure Communications Gateway is designed to handle your Operator Connect and Teams Phone Mobile traffic within a geographic area. Both service regions should be deployed within the same geographic area (for example, North America) to ensure that latency on voice calls remains within the limits required by the Operator Connect and Teams Phone Mobile programs. Consider the following points when you choose your service region locations: ++- Select from the list of available Azure regions. You can see the Azure regions that can be selected as service regions on the [Products by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/) page. +- Choose regions near your own premises and the peering locations between your network and Microsoft to reduce call latency. +- Prefer [regional pairs](/azure/reliability/cross-region-replication-azure#azure-cross-region-replication-pairings-for-all-geographies) to minimize the recovery time if a multi-region outage occurs. ++Choose a management region from the following list: ++- East US +- West Central US +- West Europe +- UK South +- India Central +- Southeast Asia +- Australia East ++Management regions can be co-located with service regions. We recommend choosing the management region nearest to your service regions.
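As a rough sketch of the co-location guidance above, the following picks a management region that coincides with one of your service regions, falling back to the first entry in the documented list. The selection logic is purely illustrative; in practice the choice should be based on measured latency to your service regions:

```python
# Management regions documented for Azure Communications Gateway.
MANAGEMENT_REGIONS = [
    "East US", "West Central US", "West Europe", "UK South",
    "India Central", "Southeast Asia", "Australia East",
]

def pick_management_region(service_regions, candidates=MANAGEMENT_REGIONS):
    """Prefer co-locating the management region with a service region;
    otherwise fall back to the first candidate (an illustrative tie-break)."""
    for region in service_regions:
        if region in candidates:
            return region
    return candidates[0]
```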
++## Service-level agreements ++The reliability design described in this document is implemented by Microsoft and isn't configurable. For more information on the Azure Communications Gateway service-level agreements (SLAs), see the Azure Communications Gateway SLA. ++## Next steps ++> [!div class="nextstepaction"] +> [Prepare to deploy an Azure Communications Gateway resource](prepare-to-deploy.md) |
communications-gateway | Request Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/request-changes.md | + + Title: Get support or request changes for Azure Communications Gateway +description: This article guides you through how to submit support requests if you have a problem with your service or require changes to it. ++++ Last updated : 01/08/2023+++# Get support or request changes to your Azure Communications Gateway ++If you notice problems with Azure Communications Gateway or you need Microsoft to make changes, you can raise a support request (also known as a support ticket). This article provides an overview of how to raise support requests for Azure Communications Gateway. For more detailed information on raising support requests, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). ++Azure provides unlimited support for subscription management, which includes billing, quota adjustments, and account transfers. For technical support, you need a support plan. We recommend you have at least a **Microsoft Unified** or **Premier** support plan. For more information, see [Compare support plans](https://azure.microsoft.com/support/plans/). ++## Prerequisites ++You must have an **Owner**, **Contributor**, or **Support Request Contributor** role in your Azure Communications Gateway subscription, or a custom role with [Microsoft.Support/*](../role-based-access-control/resource-provider-operations.md#microsoftsupport) at the subscription level. ++## 1. Generate a support request in the Azure portal ++1. Sign in to the [Azure portal](https://ms.portal.azure.com/). +1. Select the question mark icon in the top menu bar. +1. Select the **Help + support** button. +1. Select **Create a support request**. ++## 2. Enter a description of the problem or the change ++1. Concisely describe your problem or the change you need in the **Summary** box. +1.
Select an **Issue type** from the drop-down menu. +1. Select your **Subscription** from the drop-down menu. Choose the subscription where you're noticing the problem or need a change. The support engineer assigned to your case will only be able to access resources in the subscription you specify. If the issue applies to multiple subscriptions, you can mention other subscriptions in your description or in a message later. However, the support engineer will only be able to work on subscriptions to which you have access. +1. A new **Service** option will appear, giving you the option to select either **My services** or **All services**. Select **My services**. +1. In **Service type**, select **Azure Communications Gateway** from the drop-down menu. +1. A new **Problem type** option will appear. Select the problem type that most accurately describes your issue from the drop-down menu. +1. A new **Problem subtype** option will appear. Select the problem subtype that most accurately describes your issue from the drop-down menu. +1. Select **Next**. ++## 3. Assess the recommended solutions ++Based on the information you provided, we might show you recommended solutions you can use to try to resolve the problem. In some cases, we might even run a quick diagnostic. Solutions are written by Azure engineers and will solve most common problems. ++If you're still unable to resolve the issue, continue creating your support request by selecting **Next**. ++## 4. Enter additional details ++In this section, we collect more details about the problem or the change and how to contact you. Providing thorough and detailed information in this step helps us route your support request to the right engineer. For more information, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). ++## 5. Review and create your support request ++Before creating your request, review the details and diagnostics that you'll send to support.
If you want to change your request or the files you've uploaded, select **Previous** to return to any tab. When you're happy with your request, select **Create**. ++## Next steps ++Learn how to [Manage an Azure support request](../azure-portal/supportability/how-to-manage-azure-support-request.md). + |
communications-gateway | Rotate Secrets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/rotate-secrets.md | + + Title: Rotate your Azure Communications Gateway secrets +description: Learn how to rotate your secrets to keep your Azure Communications Gateway secure. ++++ Last updated : 01/12/2023+++# Rotate your Azure Communications Gateway secrets ++This article guides you through how to rotate secrets for your Azure Communications Gateway. It's important to rotate secrets regularly and to be familiar with the mechanism for rotating them, because you may sometimes need to perform an immediate rotation, for example, if a secret is leaked. Our recommendation is that these secrets are rotated at least **every 70 days**. ++Azure Communications Gateway uses an App registration to manage access to the Operator Connect API. This App registration uses secrets stored and managed in your subscription. For more information, see [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md). ++## Prerequisites ++You must know the name of the App registration and the Key Vault you created in [Prepare to deploy Azure Communications Gateway](deploy.md). We recommend using **Azure Communications Gateway service** as the name of the App registration. ++## 1. Rotate your secret for the App registration ++We store both the secret and its associated identity, but only the secret needs to be rotated. ++1. Sign in to the [Azure portal](https://ms.portal.azure.com/) as a **Storage Account Key Operator**, **Contributor**, or **Owner**. +1. Navigate to **App registrations** in the Azure portal (select **Azure Active Directory** and then in the left-hand menu, select **App registrations**). Alternatively, you can search for it with the search bar: it will appear under the **Services** subheading. +1.
In the App registrations search box, type **Azure Communications Gateway service** (or the name of the App registration if you chose a different name). +1. Select the application. +1. In the left-hand menu, select **Certificates and secrets**. +1. You should see the secret you created in [Prepare to deploy your Azure Communications Gateway](prepare-to-deploy.md). + > [!NOTE] + >If you need to immediately deactivate a secret and make it unusable, select the bin icon to the right of the secret. +1. Select **New client secret**. +1. Enter a name for the secret (we suggest that the name should include the date at which the secret is being created). +1. Enter an expiry date. The expiry date should sync with your rotation schedule. +1. Select **Add**. +1. Copy or note down the value of the new secret (you won't be able to retrieve it later). If you navigate away from the page or refresh without collecting the value of the secret, you'll need to create a new one. ++## 2. Update your Key Vault with the new secret value ++Azure Key Vault is a cloud service for securely storing and accessing secrets. When you create a new secret for your App registration, you must add the value to your corresponding Key Vault. Add the value as a new version of the existing secret in the Key Vault. Azure Communications Gateway starts using the new value as soon as it makes a request for the value of the secret. ++1. Sign in to the [Azure portal](https://ms.portal.azure.com/) as a **Storage Account Key Operator**, **Contributor**, or **Owner**. +1. Navigate to **Key Vaults** in the Azure portal (select **Azure Active Directory** and then in the left-hand menu, select **Key Vaults**). Alternatively, you can search for it with the search bar: it will appear under the **Services** subheading. +1. Select the relevant Key Vault. +1. In the left-hand menu, select **Secrets**. +1. Select the secret you're updating from the list. +1. In the top navigation menu, select **New version**. +1.
In the **Secret value** textbox, enter the secret value you noted down in the previous procedure. +1. (Optional) Enter an expiry date for your secret. The expiry date should sync with your rotation schedule. +1. Select **Create**. +++## Next steps ++- Learn how [Azure Communications Gateway keeps your data secure](security.md). |
communications-gateway | Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/security.md | + + Title: Security and Azure Communications Gateway +description: Understand how Microsoft keeps your Azure Communications Gateway and user data secure ++++ Last updated : 01/27/2023++++# Security and Azure Communications Gateway ++The customer data Azure Communications Gateway handles can be split into: ++- Content data, such as media for voice calls. +- Customer data present in call metadata. ++## Encryption between Microsoft Teams and Azure Communications Gateway ++All traffic between Azure Communications Gateway and Microsoft Teams is encrypted. SIP traffic is encrypted using TLS. Media traffic is encrypted using SRTP. ++## Data retention, data security, and encryption at rest ++Azure Communications Gateway doesn't store content data, but it does store customer data and provide statistics based on it. This data is stored for a maximum of 30 days. After this period, it's no longer accessible to perform diagnostics or analysis of individual calls. Anonymized statistics and logs produced based on customer data will continue to be available beyond the 30-day limit. ++Azure Communications Gateway doesn't support [Customer Lockbox for Microsoft Azure](/azure/security/fundamentals/customer-lockbox-overview). However, Microsoft engineers can only access data on a just-in-time basis, and only for diagnostic purposes. ++Azure Communications Gateway stores all data at rest securely, including any customer data that has to be temporarily stored, such as call records. It uses standard Azure infrastructure, with platform-managed encryption keys, to provide server-side encryption compliant with a range of security standards including FedRAMP. For more information, see [encryption of data at rest](../security/fundamentals/encryption-overview.md).
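As an illustration of a TLS-only policy like the one described for SIP traffic, the snippet below builds a client-side context that refuses anything below TLS 1.2. The 1.2 floor is an assumption for the example; this article doesn't state which TLS versions the service negotiates:

```python
import ssl

# Client context with certificate verification on (the default for
# create_default_context) and a TLS 1.2 minimum protocol version.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Any handshake attempted with `ctx` will then fail if the peer offers only older protocol versions.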
++## Next steps ++- Learn about [how Azure Communications Gateway communicates with Microsoft Teams and your network](interoperability.md). + |
container-instances | Container Instances Region Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-region-availability.md | The following regions and maximum resources are available to container groups wi | Region | Max CPU | Max Memory (GB) | VNET Max CPU | VNET Max Memory (GB) | Storage (GB) | GPU SKUs (preview) | Availability Zone support | | -- | :: | :: | :-: | :--: | :-: | :-: | :-: | | Australia East | 4 | 16 | 4 | 16 | 50 | N/A | Y |-| Australia Southeast | 4 | 14 | 16 | 50 | 50 | N/A | N | -| Brazil South | 4 | 16 | 2 | 16 | 50 | N/A | Y | -| Brazil South | 4 | 16 | 2 | 8 | 50 | N/A | Y | +| Australia Southeast | 4 | 16 | 4 | 16 | 50 | N/A | N | +| Brazil South | 4 | 16 | 4 | 16 | 50 | N/A | Y | | Canada Central | 4 | 16 | 4 | 16 | 50 | N/A | N | | Canada East | 4 | 16 | 4 | 16 | 50 | N/A | N |-| Central India | 4 | 16 | 4 | 4 | 50 | V100 | N | +| Central India | 4 | 16 | 4 | 16 | 50 | V100 | N | | Central US | 4 | 16 | 4 | 16 | 50 | N/A | Y | | East Asia | 4 | 16 | 4 | 16 | 50 | N/A | N | | East US | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | Y | | East US 2 | 4 | 16 | 4 | 16 | 50 | N/A | Y | | France Central | 4 | 16 | 4 | 16 | 50 | N/A | Y|-| Germany West Central | 4 | 16 | 16 | 50 | 50 | N/A | Y | +| Germany West Central | 4 | 16 | 4 | 16 | 50 | N/A | Y | | Japan East | 4 | 16 | 4 | 16 | 50 | N/A | Y |-| Japan West | 4 | 16 | 16 | 50 | 50 | N/A | N | -| Jio India West | 4 | 16 | 16 | 50 | 50 | N/A | N | -| Korea Central | 4 | 16 | 16 | 50 | 50 | N/A | N | +| Japan West | 4 | 16 | 4 | 16 | 50 | N/A | N | +| Jio India West | 4 | 16 | 4 | 16 | 50 | N/A | N | +| Korea Central | 4 | 16 | 4 | 16 | 50 | N/A | N | | North Central US | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | N | | North Europe | 4 | 16 | 4 | 16 | 50 | K80 | Y | | Norway East | 4 | 16 | 4 | 16 | 50 | N/A | N | The following regions and maximum resources are available to container groups wi | West US | 4 | 16 | 4 | 16 | 50 | 
N/A | N | | West US 2 | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | Y | | West US 3 | 4 | 16 | 4 | 16 | 50 | N/A | N |-| West US 3 | 4 | 16 | 4 | 16 | 50 | N/A | N | The following maximum resources are available to a container group deployed with [GPU resources](container-instances-gpu.md) (preview). |
cosmos-db | Concepts Burstable Compute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-burstable-compute.md | -# Burstable compute +# Burstable compute in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Concepts Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-cluster.md | -# Clusters +# Clusters in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Concepts Columnar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-columnar.md | -# Compress data with columnar tables +# Compress data with columnar tables in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Concepts Connection Pool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-connection-pool.md | -# Azure Cosmos DB for PostgreSQL connection pooling +# Connection pooling in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Concepts Performance Tuning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-performance-tuning.md | -# Performance tuning +# Performance tuning in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Concepts Row Level Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-row-level-security.md | -# Row-level security +# Row-level security in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Concepts Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-upgrade.md | -# Cluster upgrades +# Cluster upgrades in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Howto App Type | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-app-type.md | -# Determining Application Type +# Determining application type for Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Howto Compute Quota | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-compute-quota.md | -# Change compute quotas from the Azure portal +# Change compute quotas in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Howto Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-connect.md | -# Connect to a cluster +# Connect to a cluster in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Howto High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-high-availability.md | -# Configure high availability +# Configure high availability in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Howto Ingest Azure Blob Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-ingest-azure-blob-storage.md | -# How to ingest data using pg_azure_storage +# How to ingest data using pg_azure_storage in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Howto Ingest Azure Data Factory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-ingest-azure-data-factory.md | -# How to ingest data by using Azure Data Factory +# How to ingest data by using Azure Data Factory in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Howto Ingest Azure Stream Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-ingest-azure-stream-analytics.md | -# How to ingest data by using Azure Stream Analytics +# How to ingest data by using Azure Stream Analytics in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Howto Modify Distributed Tables | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-modify-distributed-tables.md | -# Distribute and modify tables +# Distribute and modify tables in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Howto Read Replicas Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-read-replicas-portal.md | -# Create and manage read replicas +# Create and manage read replicas in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Howto Restore Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-restore-portal.md | -# Point-in-time restore of a cluster +# Point-in-time restore of a cluster in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Howto Scale Grow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-scale-grow.md | -# Scale a cluster +# Scale a cluster in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Howto Scale Initial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-scale-initial.md | -# Pick initial size for cluster +# Pick initial size for cluster in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Howto Scale Rebalance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-scale-rebalance.md | -# Rebalance shards in cluster +# Rebalance shards in cluster in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Howto Shard Count | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-shard-count.md | -# Choose shard count +# Choose shard count in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Howto Table Size | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-table-size.md | -# Determine table and relation size +# Determine table and relation size in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Howto Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-upgrade.md | -# Upgrade cluster +# Upgrade cluster in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Howto Useful Diagnostic Queries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-useful-diagnostic-queries.md | -# Useful Diagnostic Queries +# Useful diagnostic queries in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Quickstart Build Scalable Apps Classify | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-build-scalable-apps-classify.md | -# Classify application workload +# Classify application workload in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Quickstart Build Scalable Apps Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-build-scalable-apps-concepts.md | -# Fundamental concepts for scaling +# Fundamental concepts for scaling in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Quickstart Build Scalable Apps Model High Throughput | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-build-scalable-apps-model-high-throughput.md | -# Model high-throughput transactional apps +# Model high-throughput transactional apps in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Quickstart Build Scalable Apps Model Multi Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-build-scalable-apps-model-multi-tenant.md | -# Model multi-tenant SaaS apps +# Model multi-tenant SaaS apps in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Quickstart Build Scalable Apps Model Real Time | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-build-scalable-apps-model-real-time.md | -# Model real-time analytics apps +# Model real-time analytics apps in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Quickstart Build Scalable Apps Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-build-scalable-apps-overview.md | -# Build scalable apps +# Build scalable apps in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Quickstart Connect Psql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-connect-psql.md | -# Connect to a cluster with psql +# Connect to a cluster with psql - Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Quickstart Distribute Tables | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-distribute-tables.md | -# Create and distribute tables +# Create and distribute tables in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
cosmos-db | Quickstart Run Queries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-run-queries.md | -# Run queries +# Run queries in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] |
databox-online | Azure Stack Edge Gpu 2210 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2210-release-notes.md | If you have questions or concerns, [open a support case through the Azure portal | No. | Feature | Issue | Workaround/comments | | | | | | |**1.**|Preview features |For this release, the following features are available in preview: <br> - Clustering and Multi-Access Edge Computing (MEC) for Azure Stack Edge Pro GPU devices only. <br> - VPN for Azure Stack Edge Pro R and Azure Stack Edge Mini R only. <br> - Local Azure Resource Manager, VMs, Cloud management of VMs, Kubernetes cloud management, and Multi-process service (MPS) for Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R. |These features will be generally available in later releases. |+|**2.**|Azure Kubernetes service on Azure Stack Edge |When updating your device from 2209 to 2210, after the update is complete, the Kubernetes worker node may go down. |Updating to 2301 will resolve this issue. | ## Known issues from previous releases |
databox-online | Azure Stack Edge Gpu 2301 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2301-release-notes.md | + + Title: Azure Stack Edge 2301 release notes +description: Describes critical open issues and resolutions for the Azure Stack Edge running 2301 release. +++ +++ Last updated : 01/31/2023++++# Azure Stack Edge 2301 release notes +++The following release notes identify the critical open issues and the resolved issues for the 2301 release for your Azure Stack Edge devices. Features and issues that correspond to a specific model of Azure Stack Edge are called out wherever applicable. ++The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes. ++This article applies to the **Azure Stack Edge 2301** release, which maps to software version **2.2.2162.730**. ++## Supported update paths ++This software can be applied to your device if you're running **Azure Stack Edge 2207 or later** (2.2.2026.5318). ++You can update to the latest version using the following update paths: ++| Current version | Update to | Then apply | +| --| --| --| +|2205 and earlier |2207 |2301 | +|2207 and later |2301 | ++## What's new ++The 2301 release has the following new features and enhancements: ++- Starting March 2023, Azure Stack Edge devices will be required to be on the 2301 release or later to create a Kubernetes cluster. In preparation for this requirement, it is highly recommended that you update to the latest version as soon as possible. +- Beginning with this release, you can deploy Azure Kubernetes service (AKS) on an Azure Stack Edge cluster. This feature is supported only for SAP and PMEC customers. ++## Known issues from previous releases ++The following table provides a summary of known issues carried over from the previous releases. ++| No.
| Feature | Issue | Workaround/comments | +| | | | | +| **1.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <br> 1. In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**<br> 2. Download `sqlcmd` on your client machine from [SQL command utility](/sql/tools/sqlcmd-utility). <br> 3. Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.<br> 4. The final command will look like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd". After this, steps 3-4 from the current documentation should be identical. | +| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** are NOT supported |For Blob endpoints, partial updates of blobs after a Refresh may result in the updates not getting uploaded to the cloud. For example, a sequence of actions such as:<br> 1. Create blob in cloud. Or delete a previously uploaded blob from the device.<br> 2. Refresh blob from the cloud into the appliance using the refresh functionality.<br> 3. Update only a portion of the blob using Azure SDK REST APIs. These actions can result in the updated sections of the blob not getting updated in the cloud.
<br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.| +|**3.**|Throttling|During throttling, if new writes to the device aren't allowed, writes by the NFS client fail with a "Permission Denied" error.| The error will show as below:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>mkdir: can't create directory 'test': Permission denied| +|**4.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits aren't provided for AzCopy, it could potentially send a large number of requests to the device, resulting in issues with the service.| +|**5.**|Tiered storage accounts|The following apply when using tiered storage accounts:<br> - Only block blobs are supported. Page blobs aren't supported.<br> - There's no snapshot or copy API support.<br> - Hadoop workload ingestion through `distcp` isn't supported as it uses the copy operation heavily.|| +|**6.**|NFS share connection|If multiple processes are copying to the same share, and the `nolock` attribute isn't used, you may see errors during the copy.|The `nolock` attribute must be passed to the mount command to copy files to the NFS share. For example: `C:\Users\aseuser mount -o anon \\10.1.1.211\mnt\vms Z:`.| +|**7.**|Kubernetes cluster|When applying an update on your device that is running a Kubernetes cluster, the Kubernetes virtual machines will restart and reboot. In this instance, only pods that are deployed with replicas specified are automatically restored after an update. |If you have created individual pods outside a replication controller without specifying a replica set, these pods won't be restored automatically after the device update. You'll need to restore these pods.<br>A replica set replaces pods that are deleted or terminated for any reason, such as node failure or disruptive node upgrade.
For this reason, we recommend that you use a replica set even if your application requires only a single pod.| +|**8.**|Kubernetes cluster|Kubernetes on Azure Stack Edge Pro is supported only with Helm v3 or later. For more information, go to [Frequently asked questions: Removal of Tiller](https://v3.helm.sh/docs/faq/).| +|**9.**|Kubernetes |Port 31000 is reserved for Kubernetes Dashboard. Port 31001 is reserved for Edge container registry. Similarly, in the default configuration, the IP addresses 172.28.0.1 and 172.28.0.10, are reserved for Kubernetes service and Core DNS service respectively.|Don't use reserved IPs.| +|**10.**|Kubernetes |Kubernetes doesn't currently allow multi-protocol LoadBalancer services. For example, a DNS service that would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. <br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing).| +|**11.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules may require modifications to run on IoT Edge on Azure Stack Edge device.|For more information, see [Run existing IoT Edge modules from Azure Stack Edge Pro FPGA devices on Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-modify-fpga-modules-gpu.md).| +|**12.**|Kubernetes |File-based bind mounts aren't supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts can't be bound to paths in IoT Edge containers. 
If possible, map the parent directory.| +|**13.**|Kubernetes |If you bring your own certificates for IoT Edge and add those certificates on your Azure Stack Edge device after the compute is configured on the device, the new certificates aren't picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.| +|**14.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected. <br> - **Status** column in **Certificates** page. <br> - **Security** tile in **Get started** page. <br> - **Configuration** tile in **Overview** page.<br> | +|**15.**|Certificates|Alerts related to signing chain certificates aren't removed from the portal even after uploading new signing chain certificates.| | +|**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. || +|**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| +|**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. 
For more information, see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)| +|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | +|**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| | +|**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the one remaining available GPU. | +|**23.**|Custom script VM extension |There's a known issue in the Windows VMs that were created in an earlier release and the device was updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update, causing the extension deployment to time out. | To work around this issue: <br> 1. Connect to the Windows VM using remote desktop protocol (RDP). <br> 2. Make sure that the `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. <br> 3.
If the `waappagent.exe` isn't running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.<br> 4. While the `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. <br> 5. After you kill the process, the process starts running again with the newer version. <br> 6. Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.<br> 7. [Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). | +|**24.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting isn't retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. | +|**25.**|Wi-Fi |Wi-Fi doesn't work on Azure Stack Edge Pro 2 in this release. | +|**26.**|Azure IoT Edge |The managed Azure IoT Edge solution on Azure Stack Edge is running on an older, obsolete IoT Edge runtime that is at end of life. For more information, see [IoT Edge v1.1 EoL: What does that mean for me?](https://techcommunity.microsoft.com/t5/internet-of-things-blog/iot-edge-v1-1-eol-what-does-that-mean-for-me/ba-p/3662137). Although the solution does not stop working past end of life, there are no plans to update it. |To run the latest version of Azure IoT Edge [LTSs](../iot-edge/version-history.md#version-history) with the latest updates and features on their Azure Stack Edge, we **recommend** that you deploy a [customer self-managed IoT Edge solution](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md) that runs on a Linux VM. For more information, see [Move workloads from managed IoT Edge on Azure Stack Edge to an IoT Edge solution on a Linux VM](azure-stack-edge-move-to-self-service-iot-edge.md). 
| ++## Next steps ++- [Update your device](azure-stack-edge-gpu-install-update.md) |
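The two-service MetalLB workaround described in item 10 of the table above can be sketched as a pair of Kubernetes manifests. The service names, selector, and IP address below are illustrative assumptions, not values from this article; only the `metallb.universe.tf/allow-shared-ip` sharing key and the matching `spec.loadBalancerIP` come from the MetalLB usage documentation linked in the table:

```yaml
# Sketch only: two LoadBalancer Services (one TCP, one UDP) on the same pod
# selector, sharing one IP via the same sharing key and loadBalancerIP.
# Names, selector, and 10.128.0.50 are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: dns-tcp
  annotations:
    metallb.universe.tf/allow-shared-ip: "dns-shared"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.128.0.50
  selector:
    app: dns
  ports:
    - name: dns-tcp
      protocol: TCP
      port: 53
      targetPort: 53
---
apiVersion: v1
kind: Service
metadata:
  name: dns-udp
  annotations:
    metallb.universe.tf/allow-shared-ip: "dns-shared"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.128.0.50
  selector:
    app: dns
  ports:
    - name: dns-udp
      protocol: UDP
      port: 53
      targetPort: 53
```

Because both Services carry the same sharing key and request the same `loadBalancerIP`, MetalLB assigns them one address, giving clients a single endpoint that listens on TCP and UDP.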
databox-online | Azure Stack Edge Gpu Deploy Configure Network Compute Web Proxy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md | To configure the network for a 2-node device, follow these steps on the first node * On 25-Gbps interfaces, you can set the RDMA (Remote Direct Memory Access) mode to iWARP or RoCE (RDMA over Converged Ethernet). Where low latencies are the primary requirement and scalability is not a concern, use RoCE. When latency is a key requirement, but ease-of-use and scalability are also high priorities, iWARP is the best candidate. * Serial number for any port corresponds to the node serial number. + > [!IMPORTANT] + > For a two-node cluster, only the network interfaces with a set IP address are supported by the network topology. Once you apply the network settings, select **Next: Advanced networking >** to configure your network topology. |
databox-online | Azure Stack Edge Gpu Install Update | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md | The procedure described in this article was performed using a different version ## About latest updates -The current update is Update 2210. This update installs two updates, the device update followed by Kubernetes updates. The associated versions for this update are: +The current update is Update 2301. This update installs two updates, the device update followed by Kubernetes updates. The associated versions for this update are: -- Device software version: Azure Stack Edge 2210 (2.2.2111.1002)-- Device Kubernetes version: Azure Stack Kubernetes Edge 2210 (2.2.2111.1002)-- Kubernetes server version: v1.23.8+- Device software version: Azure Stack Edge 2301 (2.2.2162.730) +- Device Kubernetes version: Azure Stack Kubernetes Edge 2301 (2.2.2162.730) +- Kubernetes server version: v1.24.6 - IoT Edge version: 0.1.0-beta15-- Azure Arc version: 1.7.18+- Azure Arc version: 1.8.14 - GPU driver version: 515.65.01 - CUDA version: 11.7 For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2209-release-notes.md). -**To apply 2210 update, your device must be running version 2207 or later.** +**To apply 2301 update, your device must be running version 2207 or later.** - If you are not running the minimum required version, you'll see this error: *Update package cannot be installed as its dependencies are not met.* -- You can update to 2207 from 2106 or later, and then install 2210.+- You can update to 2207 from 2106 or later, and then install 2301. ++### Update Azure Kubernetes service on Azure Stack Edge ++> [!IMPORTANT] +> Use the following procedure only if you are an SAP or a PMEC customer. ++If you have Azure Kubernetes service deployed and your Azure Stack Edge device and Kubernetes versions are either 2207 or 2209, you must update in multiple steps to apply 2301.
++Use the following steps to update your Azure Stack Edge version and Kubernetes version to 2301: ++1. Update your device version to 2301. +1. Update your Kubernetes version to 2210. +1. Update your Kubernetes version to 2301. ++If you are running 2210, you can update both your device version and Kubernetes version directly to 2301. ++In Azure portal, the process will require two clicks, the first update gets your device version to 2301 and your Kubernetes version to 2210, and the second update gets your Kubernetes version upgraded to 2301. ++From the local UI, you will have to run each update separately: update the device version to 2301, then update Kubernetes version to 2210, and then update Kubernetes version to 2301. ### Updates for a single-node vs two-node Do the following steps to download the update from the Microsoft Update Catalog. 2. In the search box of the Microsoft Update Catalog, enter the Knowledge Base (KB) number of the hotfix or terms for the update you want to download. For example, enter **Azure Stack Edge**, and then click **Search**. - The update listing appears as **Azure Stack Edge Update 2210**. + The update listing appears as **Azure Stack Edge Update 2301**. <!----> This procedure takes around 20 minutes to complete. Perform the following steps 5. The update starts. After the device is successfully updated, it restarts. The local UI is not accessible in this duration. -6. After the restart is complete, you are taken to the **Sign in** page. To verify that the device software has been updated, in the local web UI, go to **Maintenance** > **Software update**. For the current release, the displayed software version should be **Azure Stack Edge 2210**. +6. After the restart is complete, you are taken to the **Sign in** page. To verify that the device software has been updated, in the local web UI, go to **Maintenance** > **Software update**. For the current release, the displayed software version should be **Azure Stack Edge 2301**. 7. 
You will now update the Kubernetes software version. Select the remaining three Kubernetes files together (files with the *Kubernetes_Package.0.exe*, *Kubernetes_Package.1.exe*, and *Kubernetes_Package.2.exe* suffixes) and repeat the above steps to apply the update. |
databox-online | Azure Stack Edge Move To Self Service Iot Edge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-move-to-self-service-iot-edge.md | Title: Move workloads from Azure Stack Edge's managed IoT Edge to an IoT Edge solution on a Linux VM + Title: Move workloads from managed IoT Edge on Azure Stack Edge to an IoT Edge solution on a Linux VM description: Describes steps to move workloads from Azure Stack Edge to a self-service IoT Edge solution on a Linux VM. -# Move workloads from Azure Stack Edge's managed IoT Edge to an IoT Edge solution on a Linux VM +# Move workloads from managed IoT Edge on Azure Stack Edge to an IoT Edge solution on a Linux VM [!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)] |
databox | Data Box Disk Troubleshoot Data Upload | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-troubleshoot-data-upload.md | + + Title: Review upload errors from Azure Data Box Disk +description: Describes review and follow-up for errors during uploads from an Azure Data Box Disk device to the Azure cloud. ++++++ Last updated : 01/24/2023++++# Review copy errors in uploads from Azure Data Box Disk ++This article describes review and follow-up for errors that occasionally prevent files from uploading to the Azure cloud from an Azure Data Box Disk device. ++The error notification and options vary depending on whether you can fix the error in the current upload: ++- **Retryable errors** - You can fix many types of copy errors and resume the upload. The data is then successfully uploaded in your current order. + + + An example of a retryable error is when Large File Shares are not enabled for a storage account that requires shares larger than 5 TiB. To resolve this error, enable the setting and then confirm to resume the data copy. This type of error is referred to as a *retryable error* in the discussion that follows. ++- **Non-retryable errors** - These are errors that can't be fixed. For those errors, the upload pauses to give you a chance to review the errors. But the order completes without the data that failed to upload, and the data is securely erased from the device. You'll need to create a new order after you resolve the issues in your data. ++ An example of a non-retryable error is if a blob storage container is configured as Write Once, Read Many (WORM). Upload of any blobs that are already stored in the container will fail. This type of error is referred to as a *non-retryable error* in the discussion that follows. ++## Upload errors notification ++When a file upload fails because of an error, you'll receive a notification in the Azure portal.
You can tell whether the error can be fixed by the status and options in the order overview. ++**Retryable errors**: If you can fix the error in the current order, the notification looks similar to the following one. The current order status is **Data copy halted**. You can either choose to resolve the error or proceed with data erasure without making any change. If you select **Resolve error**, a **Resolve error** screen will tell you how to resolve each error. For step-by-step instructions, see [Review errors and proceed](#review-errors-and-proceed). ++ ++ +**Non-retryable errors:** If the error can't be fixed in the current order, the notification looks similar to the following one. The current order status is **Data copy halted for disks in this order. Provide your input by selecting Resolve in Data copy details**. The errors are listed in the data copy log, which you can open using the **Copy Log Path**. For guidance on resolving the errors, see [Summary of upload errors](#summary-of-upload-errors). ++ +++You can't fix these errors. The upload has completed with errors. The notification lets you know about any configuration issues you need to fix before you try another upload via network transfer or a new import order. ++After you review the errors and confirm you're ready to proceed, the data is securely erased from the device. If you don't respond to the notification, the order is completed automatically after 14 days. +++## Review errors and proceed ++How you proceed with an upload depends on whether the errors can be fixed and the current upload resumed (see **Retryable errors** tab), or the errors can't be fixed in the current order (see the **Non-retryable errors** tab). ++# [Retryable errors](#tab/retryable-errors) ++When a retryable error occurs during an upload, you receive a notification with instructions for fixing the error. If you can't fix the error, or prefer not to, you can proceed with the order without fixing the errors.
++To resolve retryable copy errors during an upload, do these steps: ++1. Open your order in the Azure portal. ++ If any retryable copy errors prevented files from uploading, you'll see the following notification. The current order status will be **Data copy halted for disks in this order**. ++  ++1. Select **Resolve** to view help for the errors. ++  ++ Your screen will look similar to the one below. In the example, the **Enable large file share** error can be resolved by toggling **Not enabled** for each storage account. ++  +++ You can then select **Proceed with data copy** or **Skip and proceed with data erasure**. If you opt for **Proceed with data copy**, you can select if you want to address the error for **Selected disk** or **All disks**. + +  ++1. After you resolve the errors, select the check box by **I confirm that the errors have been resolved**. Then select **Proceed**. ++ The order status changes to **Data copy error resolved**. The data copy will proceed within 24 hours. ++ > [!NOTE] + > If you don't resolve all of the retryable errors, this process will repeat after the data copy proceeds. To proceed without resolving any of the retryable errors, select **Skip and proceed with data erasure** on the **Overview** screen. +++# [Non-retryable errors](#tab/non-retryable-errors) ++The following errors can't be resolved in the current order. The order will be completed automatically after 14 days. By acting on the notification, you can move things along more quickly. ++If the error can't be fixed in the current order, the notification looks similar to the following one. The current order status is **Data copy halted for disks in this order. Provide your input by selecting Resolve in Data copy details**. The errors are listed in the data copy log, which you can open using the **Copy Log Path**. For guidance on resolving the errors, see [Summary of upload errors](#summary-of-upload-errors). 
++ ++++## Summary of upload errors ++Review the summary tables on the **Retryable errors** tab or the **Non-retryable errors** tab to find out how to resolve or follow up on data copy errors that occurred during your upload. ++# [Retryable errors](#tab/retryable-errors) ++When the following errors occur, you can resolve the errors and include the files in the current data upload. +++|Error message |Error description |Error resolution | +|||--| +|Large file share not enabled on account |Large file shares aren’t enabled on one or more storage accounts. Resolve the error and resume data copy, or skip to data erasure and complete the order. | Large file shares are not enabled on the indicated storage accounts. Select the option highlighted to enable quota up to 100 TiB per share.| +|Unknown user error |An error has halted the data copy. Contact Support for details on how to resolve the error. Alternatively, you may skip to data erasure and review copy and error logs for the order for the list of files that weren’t copied. |**Error during data copy**<br>Data copy is halted due to an error. [Contact Support](data-box-disk-contact-microsoft-support.md) for details on how to resolve the error. After the error is resolved, confirm to resume data copy. | ++For more information about the data copy log's contents, see [Use logs to troubleshoot upload issues in Azure Data Box Disk](data-box-disk-troubleshoot-upload.md). ++Other REST API errors might occur during data uploads. For more information, see [Common REST API error codes](/rest/api/storageservices/common-rest-api-error-codes). +++# [Non-retryable errors](#tab/non-retryable-errors) ++The following non-retryable errors result in a notification: ++|Error category |Error code |Error message | +|-|--|| +|UploadErrorCloudHttp |400 |Bad Request (file name not valid) [Learn more](#bad-request-file-name-not-valid).| +|UploadErrorCloudHttp |400 |The value for one of the HTTP headers is not in the correct format. 
[Learn more](#the-value-for-one-of-the-http-headers-is-not-in-the-correct-format).| +|UploadErrorCloudHttp |409 |This operation is not permitted as the blob is immutable due to a policy. [Learn more](#this-operation-is-not-permitted-as-the-blob-is-immutable-due-to-policy).| +|UploadErrorCloudHttp |409 |The total provisioned capacity of the shares cannot exceed the account maximum size limit. [Learn more](#the-total-provisioned-capacity-of-the-shares-cannot-exceed-the-account-maximum-size-limit).| +|UploadErrorCloudHttp |409 |The blob type is invalid for this operation. [Learn more](#the-blob-type-is-invalid-for-this-operation).| +|UploadErrorCloudHttp |409 |There is currently a lease on the blob and no lease ID was specified in the request. [Learn more](#there-is-currently-a-lease-on-the-blob-and-no-lease-id-was-specified-in-the-request).| +|UploadErrorManagedConversionError |409 |The size of the blob being imported is invalid. The blob size is `<blob-size>` bytes. Supported sizes are between 20,971,520 Bytes and 8,192 GiB. [Learn more](#the-size-of-the-blob-being-imported-is-invalid-the-blob-size-is-blob-size-bytes-supported-sizes-are-between-20971520-bytes-and-8192-gib)| +++For more information about the data copy log's contents, see [Use logs to troubleshoot upload issues in Azure Data Box Disk](data-box-disk-troubleshoot-upload.md). ++Other REST API errors might occur during data uploads. For more information, see [Common REST API error codes](/rest/api/storageservices/common-rest-api-error-codes). ++> [!NOTE] +> The **Follow-up** sections in the error descriptions describe how to update your data configuration before you place a new import order or perform a network transfer. You can't fix these errors in the current upload. 
+++### Bad Request (file name not valid) ++**Error category:** UploadErrorCloudHttp ++**Error code:** 400 ++**Error description:** Most file naming issues are caught during the **Prepare to ship** phase or fixed automatically during the upload (resulting in a **Copy with warnings** status). When an invalid file name is not caught, the file fails to upload to Azure. ++**Follow-up:** You can't fix this error in the current upload. The upload has completed with errors. Before you do a network transfer or start a new order, rename the listed files to meet naming requirements for Azure Files. For naming requirements, see [Directory and File Names](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#directory-and-file-names). +++### The value for one of the HTTP headers is not in the correct format ++**Error category:** UploadErrorCloudHttp ++**Error code:** 400 ++**Error description:** The listed blobs couldn't be uploaded because they don't meet format or size requirements for blobs in Azure storage. ++**Follow-up:** You can't fix this error in the current upload. The upload has completed with errors. Before you do a network transfer or start a new import order, ensure that: ++- The listed page blobs align to the 512-byte page boundaries. ++- The listed block blobs do not exceed the 4.75-TiB maximum size. +++### This operation is not permitted as the blob is immutable due to policy ++**Error category:** UploadErrorCloudHttp ++**Error code:** 409 ++**Error description:** If a blob storage container is configured as Write Once, Read Many (WORM), upload of any blobs that are already stored in the container will fail. ++**Follow-up:** You can't fix this error in the current upload. The upload has completed with errors. Before you do a network transfer or start a new import order, make sure the listed blobs are not part of an immutable storage container. 
For more information, see [Store business-critical blob data with immutable storage](../storage/blobs/immutable-storage-overview.md). +++### The total provisioned capacity of the shares cannot exceed the account maximum size limit ++**Error category:** UploadErrorCloudHttp ++**Error code:** 409 ++**Error description:** The upload failed because the total size of the data exceeds the storage account size limit. For example, the maximum capacity of a FileStorage account is 100 TiB. If total data size exceeds 100 TiB, the upload will fail. ++**Follow-up:** You can't fix this error in the current upload. The upload has completed with errors. Before you do a network transfer or start a new import order, make sure the total capacity of all shares in the storage account will not exceed the size limit of the storage account. For more information, see [Azure storage account size limits](data-box-limits.md#azure-storage-account-size-limits). +++### The blob type is invalid for this operation ++**Error category:** UploadErrorCloudHttp ++**Error code:** 409 ++**Error description:** Data import to a blob in the cloud will fail if the destination blob's data or properties are being modified. ++**Follow-up:** You can't fix this error in the current upload. The upload has completed with errors. Before you do a network transfer or start a new import order, make sure there is no concurrent modification of the listed blobs or their properties during the upload. ++### There is currently a lease on the blob and no lease ID was specified in the request ++**Error category:** UploadErrorCloudHttp ++**Error code:** 409 ++**Error description:** Data import to a blob in the cloud will fail if the destination blob has an active lease. ++**Follow-up:** You can't fix this error in the current upload. The upload has completed with errors. Before you do a network transfer or start a new import order, ensure that the listed blobs do not have an active lease. 
For more information, see [Pessimistic concurrency for blobs](../storage/blobs/concurrency-manage.md?tabs=dotnet#pessimistic-concurrency-for-blobs). +++### The size of the blob being imported is invalid. The blob size is `<blob-size>` Bytes. Supported sizes are between 20,971,520 Bytes and 8,192 GiB. ++**Error category:** UploadErrorManagedConversionError ++**Error code:** 409 ++**Error description:** The listed page blobs failed to upload because they are not a size that can be converted to a Managed Disk. To be converted to a Managed Disk, a page blob must be from 20 MB (20,971,520 Bytes) to 8192 GiB in size. ++**Follow-up:** You can't fix this error in the current upload. The upload has completed with errors. Before you do a network transfer or start a new import order, make sure each listed blob is from 20 MB to 8192 GiB in size. ++++## Next steps ++- [Verify data upload to Azure](data-box-disk-deploy-upload-verify.md) |
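The managed-disk conversion bounds and the 512-byte page-blob alignment rule called out in the errors above can be checked before shipping data. The following Python sketch is a hypothetical pre-flight helper, not part of any Data Box tool; the function names and the inclusive reading of the bounds are assumptions:

```python
# Hypothetical pre-flight checks for the upload rules described above.
# Bounds follow the error text: 20,971,520 bytes (20 MB) to 8,192 GiB,
# read as inclusive on both ends (an assumption).

MIN_MANAGED_DISK_BYTES = 20_971_520        # 20 MB lower bound
MAX_MANAGED_DISK_BYTES = 8_192 * 1024**3   # 8,192 GiB upper bound
PAGE_BLOB_ALIGNMENT = 512                  # page blobs use 512-byte pages

def managed_disk_size_ok(size_bytes: int) -> bool:
    """True if a page blob of this size can convert to a Managed Disk."""
    return MIN_MANAGED_DISK_BYTES <= size_bytes <= MAX_MANAGED_DISK_BYTES

def page_blob_aligned(size_bytes: int) -> bool:
    """True if the blob size sits on a 512-byte page boundary."""
    return size_bytes % PAGE_BLOB_ALIGNMENT == 0
```

Running such checks over a manifest of files before placing an order would surface the `UploadErrorManagedConversionError` and header-format failures described above before the disks ship.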
ddos-protection | Ddos Diagnostic Alert Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-diagnostic-alert-templates.md | In this article, you'll learn how to configure diagnostic logging alerts through ## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Before you can complete the steps in this guide, you must first create a [Azure DDoS Protection plan](manage-ddos-protection.md). DDoS Network Protection must be enabled on a virtual network or DDoS IP Protection must be enabled on a public IP address. +- [DDoS Network Protection](manage-ddos-protection.md) must be enabled on a virtual network or [DDoS IP Protection (Preview)](manage-ddos-protection-powershell-ip.md) must be enabled on a public IP address. - In order to use diagnostic logging, you must first create a [Log Analytics workspace with diagnostic settings enabled](ddos-configure-log-analytics-workspace.md). - DDoS Protection monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments. To continue with this guide, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine. |
defender-for-cloud | Integration Defender For Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md | To deploy the MDE unified solution, you'll need to use the [REST API call](#enab > [!NOTE] > If the status is **Off**, use the instructions in [Users who've never enabled the integration with Microsoft Defender for Endpoint for Windows](#users-who-never-enabled-the-integration-with-microsoft-defender-for-endpoint-for-windows). -1. Select **Fix** to see the components that are not enabled. +1. Select **Fix** to see the components that aren't enabled. :::image type="content" source="./media/integration-defender-for-endpoint/fix-defender-for-endpoint.png" alt-text="Screenshot of Fix button that enables Microsoft Defender for Endpoint support."::: The MDE agent unified solution is deployed to all of the machines in the selecte #### Linux -You'll deploy Defender for Endpoint to your Linux machines in one of two ways - depending on whether you've already deployed it to your Windows machines: +You'll deploy Defender for Endpoint to your Linux machines in one of these ways, depending on whether you've already deployed it to your Windows machines: ++- Enable for a specific subscription in the Azure portal environment settings + - [Existing users with Defender for Cloud's enhanced security features enabled and Microsoft Defender for Endpoint for Windows](#existing-users-with-defender-for-clouds-enhanced-security-features-enabled-and-microsoft-defender-for-endpoint-for-windows) + - [New users who never enabled the integration with Microsoft Defender for Endpoint for Windows](#new-users-who-never-enabled-the-integration-with-microsoft-defender-for-endpoint-for-windows) +- [Enable for multiple subscriptions in the Azure portal dashboard](#enable-for-multiple-subscriptions-in-the-azure-portal-dashboard) +- Enable for multiple subscriptions with a PowerShell script -- [Existing users with Defender for 
Cloud's enhanced security features enabled and Microsoft Defender for Endpoint for Windows](#existing-users-with-defender-for-clouds-enhanced-security-features-enabled-and-microsoft-defender-for-endpoint-for-windows)-- [New users who never enabled the integration with Microsoft Defender for Endpoint for Windows](#new-users-who-never-enabled-the-integration-with-microsoft-defender-for-endpoint-for-windows) ##### Existing users with Defender for Cloud's enhanced security features enabled and Microsoft Defender for Endpoint for Windows If you've already enabled the integration with **Defender for Endpoint for Windo > [!NOTE] > If the status is **Off** isn't selected, use the instructions in [Users who've never enabled the integration with Microsoft Defender for Endpoint for Windows](#users-who-never-enabled-the-integration-with-microsoft-defender-for-endpoint-for-windows). -1. Select **Fix** to see the components that are not enabled. +1. Select **Fix** to see the components that aren't enabled. :::image type="content" source="./media/integration-defender-for-endpoint/fix-defender-for-endpoint.png" alt-text="Screenshot of Fix button that enables Microsoft Defender for Endpoint support."::: If you've never enabled the integration for Windows, endpoint protection enables In addition, in the Azure portal you'll see a new Azure extension on your machines called `MDE.Linux`. +##### Enable for multiple subscriptions in the Azure portal dashboard ++If one or more of your subscriptions don't have Endpoint protections enabled for Linux machines, you'll see an insight panel in the Defender for Cloud dashboard. The insight panel tells you about subscriptions that have Defender for Endpoint integration enabled for Windows machines, but not for Linux machines. You can use the insight panel to see the affected subscriptions with the number of affected resources in each subscription. Subscriptions that don't have Linux machines show no affected resources. 
You can then select the subscriptions to enable endpoint protection for Linux integration. ++After you select **Enable** in the insight panel, Defender for Cloud: ++- Automatically onboards your Linux machines to Defender for Endpoint in the selected subscriptions. +- Detects any previous installations of Defender for Endpoint and reconfigures them to integrate with Defender for Cloud. ++Use the [Defender for Endpoint status workbook](https://aka.ms/MDEStatus) to verify installation and deployment status of Defender for Endpoint on a Linux machine. ++##### Enable for multiple subscriptions with a PowerShell script ++Use our [PowerShell script](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Powershell%20scripts/Enable%20MDE%20Integration%20for%20Linux) from the Defender for Cloud GitHub repository to enable endpoint protection on Linux machines that are in multiple subscriptions. + ### Enable the MDE unified solution at scale You can also enable the MDE unified solution at scale through the supplied REST API version 2022-05-01. For full details, see the [API documentation](/rest/api/defenderforcloud/settings/update?tabs=HTTP). |
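The at-scale option mentioned above is a `PUT` against the subscription's `Microsoft.Security` settings resource. As a minimal sketch of constructing that request — where the `WDATP` setting name and the `DataExportSettings` body kind are assumptions taken from the linked API reference and should be verified against the current docs before use:

```python
import json

API_VERSION = "2022-05-01"  # REST API version cited in the section above

def build_mde_setting_request(subscription_id: str, enabled: bool = True):
    """Build the URL and JSON body for a PUT to the Defender for Cloud
    Settings - Update API. The setting name (WDATP) and body shape are
    assumptions based on the linked API reference, not verified here."""
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        "/providers/Microsoft.Security/settings/WDATP"
        f"?api-version={API_VERSION}"
    )
    body = json.dumps({"kind": "DataExportSettings",
                       "properties": {"enabled": enabled}})
    return url, body

url, body = build_mde_setting_request("00000000-0000-0000-0000-000000000000")
```

The actual call would send `body` with a bearer token from Azure AD; only the request shape is sketched here.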
defender-for-cloud | Upcoming Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md | If you're looking for the latest release notes, you'll find them in the [What's | Planned change | Estimated date for change | |--|--|-| [Recommendation to find vulnerabilities in running container images to be released for General Availability (GA)](#recommendation-to-find-vulnerabilities-in-running-container-images-to-be-released-for-general-availability-ga) | January 2023 | +| [Recommendation to find vulnerabilities in running container images to be released for General Availability (GA)](#recommendation-to-find-vulnerabilities-in-running-container-images-to-be-released-for-general-availability-ga) | February 2023 | +| [The built-in policy [Preview]: Private endpoint should be configured for Key Vault is set to be deprecated](#the-built-in-policy-preview-private-endpoint-should-be-configured-for-key-vault-is-set-to-be-deprecated) | January 2023 | | [Recommendation to enable diagnostic logs for Virtual Machine Scale Sets to be deprecated](#recommendation-to-enable-diagnostic-logs-for-virtual-machine-scale-sets-to-be-deprecated) | January 2023 | | [Deprecation and improvement of selected alerts for Windows and Linux Servers](#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers) | April 2023 | +### Recommendation to find vulnerabilities in running container images to be released for General Availability (GA) ++**Estimated date for change: February 2023** ++The [Running container images should have vulnerability findings resolved](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters) recommendation is currently in preview. While a recommendation is in preview, it doesn't render a resource unhealthy and isn't included in the calculations of your secure score. 
++We recommend that you use the recommendation to remediate vulnerabilities in your containers so that the recommendation won't affect your secure score when the recommendation is released as GA. Learn about [recommendation remediation](implement-security-recommendations.md). + ### Recommendation to enable diagnostic logs for Virtual Machine Scale Sets to be deprecated **Estimated date for change: January 2023** The related [policy definition](https://portal.azure.com/#view/Microsoft_Azure_P |--|--|--| | Diagnostic logs in Virtual Machine Scale Sets should be enabled | Enable logs and retain them for up to a year, enabling you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised. | Low | -### Recommendation to find vulnerabilities in running container images to be released for General Availability (GA) --**Estimated date for change: January 2023** --The [Running container images should have vulnerability findings resolved](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters) recommendation is currently in preview. While a recommendation is in preview, it doesn't render a resource unhealthy and isn't included in the calculations of your secure score. --We recommend that you use the recommendation to remediate vulnerabilities in your containers so that the recommendation won't affect your secure score when the recommendation is released as GA. Learn about [recommendation remediation](implement-security-recommendations.md). - ### The built-in policy \[Preview]: Private endpoint should be configured for Key Vault is set to be deprecated **Estimated date for change: January 2023** |
defender-for-iot | Faqs Ot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-ot.md | Last updated 07/07/2022 This article provides a list of frequently asked questions and answers about OT networks in Defender for IoT. -## Our organization uses proprietary non-standard industrial protocols. Are they supported? +## Our organization uses proprietary non-standard industrial protocols. Are they supported? Microsoft Defender for IoT provides comprehensive protocol support. In addition to embedded protocol support, you can secure IoT and OT devices running proprietary and custom protocols, or protocols that deviate from any standard. Use the Horizon Open Development Environment (ODE) SDK to create dissector plugins that decode network traffic based on defined protocols. Traffic is analyzed by services to provide complete monitoring, alerting, and reporting. Use Horizon to: - Expand visibility and control without the need to upgrade to new versions.-- Secure proprietary information by developing on-site as an external plugin. +- Secure proprietary information by developing on-site as an external plugin. - Localize text for alerts, events, and protocol parameters. -This unique solution for developing protocols as plugins, doesn't require dedicated developer teams or version releases in order to support a new protocol. Developers, partners, and customers can securely develop protocols and share insights and knowledge using Horizon. +This unique solution for developing protocols as plugins doesn't require dedicated developer teams or version releases in order to support a new protocol. Developers, partners, and customers can securely develop protocols and share insights and knowledge using Horizon. 
## Do I have to purchase hardware appliances from Microsoft partners?-Microsoft Defender for IoT sensor runs on specific hardware specs as described in the [Hardware Specifications Guide](./how-to-identify-required-appliances.md), customers can purchase certified hardware from Microsoft partners or use the supplied bill of materials (BOM) and purchase it on their own. ++The Microsoft Defender for IoT sensor runs on specific hardware specs as described in the [Hardware Specifications Guide](./how-to-identify-required-appliances.md). Customers can purchase certified hardware from Microsoft partners or use the supplied bill of materials (BOM) and purchase it on their own. Certified hardware has been tested in our labs for driver stability, packet drops and network sizing. Yes, you can! The Microsoft Defender for IoT platform on-premises solution is dep The Microsoft Defender for IoT sensor connects to a SPAN port or network TAP and immediately begins collecting ICS network traffic via passive (agentless) monitoring. It has zero effect on OT networks since it isn't placed in the data path and doesn't actively scan OT devices. For example:+ - A single appliance (virtual or physical) can be in the Shop Floor DMZ layer, having all Shop Floor cell traffic routed to this layer. - Alternatively, locate small mini-sensors in each Shop Floor cell with either cloud or local management that will reside in the Shop Floor DMZ layer. Another appliance (virtual or physical) can monitor the traffic in the Shop Floor DMZ layer (for SCADA, Historian, or MES). 
Change network configuration settings before or after you activate your sensor u - **From the sensor UI**: [Update the sensor network configuration](how-to-manage-individual-sensors.md#update-the-sensor-network-configuration) - **From the sensor CLI**: [Network configuration](cli-ot-sensor.md#network-configuration) -For more information, see [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md) and [Getting started with advanced CLI commands](references-work-with-defender-for-iot-cli-commands.md) -+For more information, see [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md), [Getting started with advanced CLI commands](references-work-with-defender-for-iot-cli-commands.md), and [CLI command reference from OT network sensors](cli-ot-sensor.md). ## How do I check the sanity of my deployment For more information, see [Troubleshoot the sensor and on-premises management co ## Next steps -- [Tutorial: Get started with Microsoft Defender for IoT for OT security](tutorial-onboarding.md)+- [Tutorial: Get started with Microsoft Defender for IoT for OT security](tutorial-onboarding.md) |
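The dissector plugins mentioned in the FAQ above decode raw traffic against a defined protocol layout. As a language-agnostic illustration of the parsing work such a plugin performs — using an entirely invented frame format, not the Horizon SDK's actual API — a sketch in Python:

```python
import struct
from dataclasses import dataclass

# Hypothetical frame layout for a custom OT protocol: a 2-byte magic value,
# a 1-byte function code, and a 2-byte big-endian payload length, followed
# by the payload. Every name here is invented for illustration only.
HEADER = struct.Struct(">2sBH")
MAGIC = b"\x5a\x5a"

@dataclass
class Frame:
    function_code: int
    payload: bytes

def dissect(packet: bytes) -> Frame:
    """Decode one frame, rejecting traffic that doesn't match the layout."""
    magic, func, length = HEADER.unpack_from(packet)
    if magic != MAGIC:
        raise ValueError("not this protocol")
    payload = packet[HEADER.size:HEADER.size + length]
    if len(payload) != length:
        raise ValueError("truncated frame")
    return Frame(func, payload)

frame = dissect(MAGIC + b"\x10" + b"\x00\x03" + b"abc")
```

A real Horizon plugin would additionally map decoded fields to monitoring, alerting, and reporting services, as the FAQ describes.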
defender-for-iot | How To Activate And Set Up Your Sensor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md | For more information about working with certificates, see [Manage certificates]( 1. Approve the terms and conditions. -1. Select **Activate**. The SSL/TLS certificate tab opens. Before defining certificates, see [About certificates](#about-certificates). +1. Select **Activate**. The SSL/TLS certificate tab opens. Before defining certificates, see [Deploy SSL/TLS certificates on OT appliances](how-to-deploy-certificates.md). It is **not recommended** to use a locally generated certificate in a production environment. |
defender-for-iot | How To Deploy Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-deploy-certificates.md | Title: Setting SSL/TLS appliance certificates -description: Learn how to set up and deploy certificates for Defender for IoT. Previously updated : 02/06/2022+ Title: Deploy SSL/TLS certificates on OT appliances - Microsoft Defender for IoT. +description: Learn how to deploy SSL/TLS certificates on Microsoft Defender for IoT OT network sensors and on-premises management consoles. Last updated : 01/05/2023 -# Certificates for appliance encryption and authentication (OT appliances) +# Deploy SSL/TLS certificates on OT appliances -This article provides information needed when creating and deploying certificates for Microsoft Defender for IoT. A security, PKI or other qualified certificate lead should handle certificate creation and deployment. +This article describes how to create and deploy SSL/TLS certificates on OT network sensors and on-premises management consoles. 
Defender for IoT uses SSL/TLS certificates to secure communication between the following system components: -Defender for IoT uses SSL/TLS certificates to secure communication between the following system components: +- Between users and the OT sensor or on-premises management console UI +- Between OT sensors and an on-premises management console, including [API communication](references-work-with-defender-for-iot-apis.md) +- Between an on-premises management console and a high availability (HA) server, if configured +- Between OT sensors or on-premises management consoles and partner servers defined in [alert forwarding rules](how-to-forward-alert-information-to-partners.md) -- Between users and the web console of the appliance.-- Between the sensors and an on-premises management console.-- Between a management console and a High Availability management console.-- To the REST API on the sensor and on-premises management console.+Some organizations also validate their certificates against a Certificate Revocation List (CRL), the certificate expiration date, and the certificate trust chain. Invalid certificates can't be uploaded to OT sensors or on-premises management consoles, and will block encrypted communication between Defender for IoT components. -Defender for IoT Admin users can upload a certificate to sensor consoles and their on-premises management console from the SSL/TLS Certificates dialog box. +Each certificate authority (CA)-signed certificate must have both a `.key` file and a `.crt` file, which are uploaded to OT network sensors and on-premises management consoles after the first sign-in. While some organizations may also require a `.pem` file, a `.pem` file isn't required for Defender for IoT. +Make sure to create a unique certificate for each OT sensor, on-premises management console, and HA server, where each certificate meets required parameter criteria. 
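One of the parameter criteria mentioned above is a valid expiration date. As a minimal, hypothetical pre-upload sanity check — not a Defender for IoT feature — the `notAfter` timestamp format used by Python's `ssl` module can be tested with only the standard library:

```python
import ssl
import time

def is_expired(not_after, now=None):
    """Return True if a certificate's notAfter timestamp -- in the
    'Jan  5 09:34:43 2025 GMT' form reported by Python's ssl module --
    is in the past. `now` (epoch seconds) defaults to the current time."""
    expiry = ssl.cert_time_to_seconds(not_after)
    if now is None:
        now = time.time()
    return now >= expiry

# A certificate that expired at the start of 2018 fails the check today.
result = is_expired("Jan  5 09:34:43 2018 GMT")
```

A production check would also cover the CRL and trust chain, which require a full TLS library rather than this stdlib sketch.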
-## About certificate generation methods +## Prerequisites -All certificate generation methods are supported using: +To perform the procedures described in this article, make sure that: -- Private and Enterprise Key Infrastructures (Private PKI). -- Public Key Infrastructures (Public PKI). -- Certificates locally generated on the appliance (locally self-signed). +- You have a security, PKI or certificate specialist available to oversee the certificate creation +- You can access the OT network sensor or on-premises management console as an **Admin** user. -> [!Important] -> It is not recommended to use locally self-signed certificates. This type of connection is not secure and should be used for test environments only. Since the owner of the certificate can't be validated and the security of your system can't be maintained, self-signed certificates should never be used for production networks. + For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). -## About certificate validation +## Deploy an SSL/TLS certificate -In addition to securing communication between system components, users can also carry out certificate validation. +Deploy your SSL/TLS certificate by importing it to your OT sensor or on-premises management console. -Validation is evaluate |