Updates from: 10/11/2023 01:11:21
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Howto Authentication Passwordless Security Key On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md
Run the following steps in each domain and forest in your organization that cont
You can view and verify the newly created Microsoft Entra Kerberos server by using the following command: + ```powershell
-Get-AzureADKerberosServer -Domain $domain -CloudCredential $cloudCred -DomainCredential $domainCred
+ # When prompted to provide domain credentials use the userprincipalname format for the username instead of domain\username
+Get-AzureADKerberosServer -Domain $domain -UserPrincipalName $userPrincipalName -DomainCredential (get-credential)
``` This command outputs the properties of the Microsoft Entra Kerberos server. You can review the properties to verify that everything is in good order. > [!NOTE]
-> Running against another domain by supplying the credential will connect over NTLM, and then it fails. If the users are in the Protected Users security group in Active Directory, complete these steps to resolve the issue: Sign in as another domain user in **ADConnect** and don’t supply "-domainCredential". The Kerberos ticket of the user that's currently signed in is used. You can confirm by executing `whoami /groups` to validate whether the user has the required permissions in Active Directory to execute the preceding command.
+> Running against another domain by supplying the credential in domain\username format connects over NTLM and then fails. However, using the userprincipalname format for the domain administrator ensures that the RPC bind to the DC is attempted using Kerberos. If the users are in the Protected Users security group in Active Directory, complete these steps to resolve the issue: Sign in as another domain user in **ADConnect** and don’t supply "-domainCredential". The Kerberos ticket of the user that's currently signed in is used. You can confirm by executing `whoami /groups` to validate whether the user has the required permissions in Active Directory to execute the preceding command.
| Property | Description | | | |
active-directory Howto Authentication Passwordless Security Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key.md
Previously updated : 09/13/2023 Last updated : 10/09/2023
This document focuses on enabling security key based passwordless authentication
- WebAuthN requires Windows 10 version 1903 or higher To use security keys for logging in to web apps and services, you must have a browser that supports the WebAuthN protocol.
-These include Microsoft Edge, Chrome, Firefox, and Safari. For more information about, see [Browser support of FIDO2 passwordless authentication](fido2-compatibility.md).
+These include Microsoft Edge, Chrome, Firefox, and Safari. For more information, see [Browser support of FIDO2 passwordless authentication](fido2-compatibility.md).
## Prepare devices
-For Microsoft Entra joined devices, the best experience is on Windows 10 version 1903 or higher.
+For devices that are joined to Microsoft Entra ID, the best experience is on Windows 10 version 1903 or higher.
-Microsoft Entra hybrid joined devices must run Windows 10 version 2004 or higher.
+Hybrid-joined devices must run Windows 10 version 2004 or higher.
## Enable passwordless authentication method
There are some optional settings on the **Configure** tab to help manage how sec
![Screenshot of FIDO2 security key options](media/howto-authentication-passwordless-security-key/optional-settings.png) - **Allow self-service set up** should remain set to **Yes**. If set to no, your users won't be able to register a FIDO key through MySecurityInfo, even if enabled by Authentication Methods policy. -- **Enforce attestation** setting to **Yes** requires the FIDO security key metadata to be published and verified with the FIDO Alliance Metadata Service, and also pass Microsoft’s additional set of validation testing. For more information, see [What is a Microsoft-compatible security key?](concept-authentication-passwordless.md#fido2-security-key-providers)
+- **Enforce attestation** setting to **Yes** requires the FIDO security key metadata to be published and verified with the FIDO Alliance Metadata Service, and also pass Microsoft's additional set of validation testing. For more information, see [What is a Microsoft-compatible security key?](concept-authentication-passwordless.md#fido2-security-key-providers)
**Key Restriction Policy** -- **Enforce key restrictions** should be set to **Yes** only if your organization wants to only allow or disallow certain FIDO security keys, which are identified by their AAGuids. You can work with your security key provider to determine the AAGuids of their devices. If the key is already registered, AAGUID can also be found by viewing the authentication method details of the key per user.
+- **Enforce key restrictions** should be set to **Yes** only if your organization wants to only allow or disallow certain FIDO security keys, which are identified by their Authenticator Attestation GUID (AAGUID). You can work with your security key provider to determine the AAGUID of a device. If the key is already registered, you can find the AAGUID by viewing the authentication method details of the key for the user.
+ >[!WARNING]
+ >Key restrictions set the usability of specific FIDO2 methods for both registration and authentication. If you change key restrictions and remove an AAGUID that you previously allowed, users who previously registered an allowed method can no longer use it for sign-in.
## Disable a key
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
User actions are tasks that can be performed by a user. Currently, Conditional A
- **Register or join devices**: This user action enables administrators to enforce Conditional Access policy when users [register](../devices/concept-device-registration.md) or [join](../devices/concept-directory-join.md) devices to Microsoft Entra ID. It provides granularity in configuring multifactor authentication for registering or joining devices instead of a tenant-wide policy that currently exists. There are three key considerations with this user action: - `Require multifactor authentication` is the only access control available with this user action and all others are disabled. This restriction prevents conflicts with access controls that are either dependent on Microsoft Entra device registration or not applicable to Microsoft Entra device registration. - `Client apps`, `Filters for devices` and `Device state` conditions aren't available with this user action since they're dependent on Microsoft Entra device registration to enforce Conditional Access policies.
- - When a Conditional Access policy is enabled with this user action, you must set **Identity** > **Devices** > **Overview** > **Device Settings** - `Devices to be Azure AD joined or Azure AD registered require Multifactor Authentication` to **No**. Otherwise, the Conditional Access policy with this user action isn't properly enforced. More information about this device setting can found in [Configure device settings](../devices/manage-device-identities.md#configure-device-settings).
+ - When a Conditional Access policy is enabled with this user action, you must set **Identity** > **Devices** > **Overview** > **Device Settings** - `Devices to be Microsoft Entra joined or Microsoft Entra registered require multifactor authentication` to **No**. Otherwise, the Conditional Access policy with this user action isn't properly enforced. More information about this device setting can be found in [Configure device settings](../devices/manage-device-identities.md#configure-device-settings).
## Traffic forwarding profiles
active-directory Howto Conditional Access Insights Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-insights-reporting.md
Users must have at least the Security Reader role assigned and Log Analytics wor
### Stream sign-in logs from Microsoft Entra ID to Azure Monitor logs
-If you haven't integrated Microsoft Entra ID logs with Azure Monitor logs, you need to take the following steps before the workbook loads:
+If you haven't integrated Microsoft Entra logs with Azure Monitor logs, you need to take the following steps before the workbook loads:
1. [Create a Log Analytics workspace in Azure Monitor](../../azure-monitor/logs/quick-create-workspace.md).
-1. [Integrate Microsoft Entra ID logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
+1. [Integrate Microsoft Entra logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
## How it works
The insights and reporting dashboard lets you see the impact of one or more Cond
**Conditional Access policy**: Select one or more Conditional Access policies to view their combined impact. Policies are separated into two groups: Enabled and Report-only policies. By default, all Enabled policies are selected. These enabled policies are the policies currently enforced in your tenant.
-**Time range**: Select a time range from 4 hours to as far back as 90 days. If you select a time range further back than when you integrated the Microsoft Entra ID logs with Azure Monitor, only sign-ins after the time of integration appear.
+**Time range**: Select a time range from 4 hours to as far back as 90 days. If you select a time range further back than when you integrated the Microsoft Entra logs with Azure Monitor, only sign-ins after the time of integration appear.
**User**: By default, the dashboard shows the impact of the selected policies for all users. To filter by an individual user, type the name of the user into the text field. To filter by all users, type “All users” into the text field or leave the parameter empty.
View the breakdown of users or sign-ins for each of the conditions. You can filt
You can also investigate the sign-ins of a specific user by searching for sign-ins at the bottom of the dashboard. The query displays the most frequent users. Selecting a user filters the query. > [!NOTE]
-> When downloading the Sign-ins logs, choose JSON format to include Conditional Access report-only result data.
+> When downloading the sign-in logs, choose JSON format to include Conditional Access report-only result data.
## Configure a Conditional Access policy in report-only mode
In order to access the workbook, you need the proper permissions in Microsoft En
![Screenshot showing how to troubleshoot failing queries.](./media/howto-conditional-access-insights-reporting/query-troubleshoot-sign-in-logs.png)
-For more information about how to stream Microsoft Entra sign-in logs to a Log Analytics workspace, see the article [Integrate Microsoft Entra ID logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
+For more information about how to stream Microsoft Entra sign-in logs to a Log Analytics workspace, see the article [Integrate Microsoft Entra logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
### Why are the queries in the workbook failing?
You can edit and customize the workbook by going to **Identity** > **Monitoring
- [Conditional Access report-only mode](concept-conditional-access-report-only.md) -- For more information about Microsoft Entra workbooks, see the article, [How to use Azure Monitor workbooks for Microsoft Entra ID reports](../reports-monitoring/howto-use-azure-monitor-workbooks.md).
+- For more information about Microsoft Entra workbooks, see the article, [How to use Azure Monitor workbooks for Microsoft Entra reports](../reports-monitoring/howto-use-azure-monitor-workbooks.md).
- [Conditional Access common policies](concept-conditional-access-policy-common.md)
active-directory Howto Continuous Access Evaluation Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-continuous-access-evaluation-troubleshoot.md
Administrators can monitor user sign-ins where continuous access evaluation (CAE
1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs**. 1. Apply the **Is CAE Token** filter.
-[ ![Screenshot showing how to add a filter to the Sign-ins log to see where CAE is being applied or not.](./media/howto-continuous-access-evaluation-troubleshoot/sign-ins-log-apply-filter.png) ](./media/howto-continuous-access-evaluation-troubleshoot/sign-ins-log-apply-filter.png#lightbox)
+[ ![Screenshot showing how to add a filter to the sign-in log to see where CAE is being applied or not.](./media/howto-continuous-access-evaluation-troubleshoot/sign-ins-log-apply-filter.png) ](./media/howto-continuous-access-evaluation-troubleshoot/sign-ins-log-apply-filter.png#lightbox)
From here, admins are presented with information about their user’s sign-in events. Select any sign-in to see details about the session, like which Conditional Access policies applied and if CAE is enabled.
The continuous access evaluation insights workbook allows administrators to view
### Accessing the CAE workbook template
-Log Analytics integration must be completed before workbooks are displayed. For more information about how to stream Microsoft Entra sign-in logs to a Log Analytics workspace, see the article [Integrate Microsoft Entra ID logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
+Log Analytics integration must be completed before workbooks are displayed. For more information about how to stream Microsoft Entra sign-in logs to a Log Analytics workspace, see the article [Integrate Microsoft Entra logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). 1. Browse to **Identity** > **Monitoring & health** > **Workbooks**.
For more information about named locations, see the article [Using the location
## Next steps -- [Integrate Microsoft Entra ID logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
+- [Integrate Microsoft Entra logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
- [Using the location condition](location-condition.md#named-locations) - [Continuous access evaluation](concept-continuous-access-evaluation.md)
active-directory Plan Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/plan-conditional-access.md
If the user received a message with a More details link, they can collect most o
Once you've collected the information, see the following resources:
-* [Sign-in problems with Conditional Access](troubleshoot-conditional-access.md) – Understand unexpected sign-in outcomes related to Conditional Access using error messages and Microsoft Entra sign-ins log.
+* [Sign-in problems with Conditional Access](troubleshoot-conditional-access.md) – Understand unexpected sign-in outcomes related to Conditional Access using error messages and Microsoft Entra sign-in log.
* [Using the What-If tool](troubleshoot-conditional-access-what-if.md) - Understand why a policy was or wasn't applied to a user in a specific circumstance or if a policy would apply in a known state. ## Next Steps
active-directory Service Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/service-dependencies.md
The below table lists some more service dependencies, where the client apps must
## Troubleshooting service dependencies
-The Microsoft Entra sign-ins log is a valuable source of information when troubleshooting why and how a Conditional Access policy applied in your environment. For more information about troubleshooting unexpected sign-in outcomes related to Conditional Access, see the article [Troubleshooting sign-in problems with Conditional Access](troubleshoot-conditional-access.md#service-dependencies).
+The Microsoft Entra sign-in log is a valuable source of information when troubleshooting why and how a Conditional Access policy applied in your environment. For more information about troubleshooting unexpected sign-in outcomes related to Conditional Access, see the article [Troubleshooting sign-in problems with Conditional Access](troubleshoot-conditional-access.md#service-dependencies).
## Next steps
active-directory Troubleshoot Conditional Access What If https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-conditional-access-what-if.md
The following additional information is optional but helps narrow the scope for
* Service principal risk (Preview) * Filter for devices
-This information can be gathered from the user, their device, or the Microsoft Entra sign-ins log.
+This information can be gathered from the user, their device, or the Microsoft Entra sign-in log.
## Generating results
active-directory Troubleshoot Policy Changes Audit Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-policy-changes-audit-log.md
For more information about programmatically updating your Conditional Access pol
## Next steps -- [What is Microsoft Entra ID monitoring?](../reports-monitoring/overview-monitoring.md)
+- [What is Microsoft Entra monitoring?](../reports-monitoring/overview-monitoring.md)
- [Install and use the log analytics views for Microsoft Entra ID](../../azure-monitor/visualize/workbooks-view-designer-conversion-overview.md) - [Conditional Access: Programmatic access](howto-conditional-access-apis.md)
active-directory Howto Authenticate Service Principal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-authenticate-service-principal-powershell.md
You may get the following errors when creating a service principal:
## Next steps
-* To set up a service principal with password, see [Create an Azure service principal with Azure PowerShell](/powershell/azure/create-azure-service-principal-azureps).
+* To set up a service principal with password, see [Create an Azure service principal with Azure PowerShell](/powershell/azure/create-azure-service-principal-azureps) or [Create an Azure service principal with Azure CLI](/cli/azure/azure-cli-sp-tutorial-2).
* For a more detailed explanation of applications and service principals, see [Application Objects and Service Principal Objects](app-objects-and-service-principals.md). * For more information about Microsoft Entra authentication, see [Authentication Scenarios for Microsoft Entra ID](./authentication-vs-authorization.md). * For information about working with app registrations by using **Microsoft Graph**, see the [Applications](/graph/api/resources/application) API reference.
active-directory Directory Delegated Administration Primer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delegated-administration-primer.md
When a Microsoft CSP creates a GDAP relationship request for your tenant a globa
* The roles that the partner needs to delegate to their technicians * The expiration date
-If you have GDAP relationships in your tenant, you will see a notification banner on the **Delegated Administration** page in the Microsoft Entra admin portal. Select the notification banner to see and manage GDAP relationships in the **Partners** page in Microsoft Admin Center.
+If you have GDAP relationships in your tenant, you will see a notification banner on the **Delegated Administration** page in the Microsoft Entra admin center. Select the notification banner to see and manage GDAP relationships in the **Partners** page in Microsoft Admin Center.
## Delegated admin permission
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md
Product state | Data | Access to data
## Delete a self-service sign-up product
-You can put a self-service sign-up product like Microsoft Power BI or Azure RMS into a **Delete** state to be immediately deleted in the Microsoft Entra admin portal:
+You can put a self-service sign-up product like Microsoft Power BI or Azure RMS into a **Delete** state to be immediately deleted in the Microsoft Entra admin center:
>[!NOTE] > If you're trying to delete the Contoso organization that has the initial default domain `contoso.onmicrosoft.com`, sign in with a UPN such as `admin@contoso.onmicrosoft.com`.
active-directory Domains Admin Takeover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-admin-takeover.md
# Take over an unmanaged directory as administrator in Microsoft Entra ID
-This article describes two ways to take over a DNS domain name in an unmanaged directory in Microsoft Entra ID formerly known as Azure AD. When a self-service user signs up for a cloud service that uses Microsoft Entra ID, they're added to an unmanaged Microsoft Entra directory based on their email domain. For more about self-service or "viral" sign-up for a service, see [What is self-service sign-up for Microsoft Entra ID?](directory-self-service-signup.md)
+This article describes two ways to take over a DNS domain name in an unmanaged directory in Microsoft Entra ID. When a self-service user signs up for a cloud service that uses Microsoft Entra ID, they're added to an unmanaged Microsoft Entra directory based on their email domain. For more about self-service or "viral" sign-up for a service, see [What is self-service sign-up for Microsoft Entra ID?](directory-self-service-signup.md)
> [!VIDEO https://www.youtube.com/embed/GOSpjHtrRsg]
active-directory Groups Self Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md
Groups created in | Security group default behavior | Microsoft 365 group defaul
2. Select **All groups** > **Groups**, and then select **General** settings. > [!NOTE]
- > This setting only restricts access of group information in **My Groups**. It does not restrict access to group information via other methods like Microsoft Graph API calls or the Entra Admin Center
+ > This setting only restricts access of group information in **My Groups**. It does not restrict access to group information via other methods like Microsoft Graph API calls or the Microsoft Entra admin center.
![Microsoft Entra groups general settings.](./media/groups-self-service-management/groups-settings-general.png) > [!NOTE]
You can also use **Owners who can assign members as group owners in the Azure po
When users can create groups, all users in your organization are allowed to create new groups and then can, as the default owner, add members to these groups. You can't specify individuals who can create their own groups. You can specify individuals only for making another group member a group owner. > [!NOTE]
-> A Microsoft Entra ID P1 or P2 (P1 or P2) license is required for users to request to join a security group or Microsoft 365 group and for owners to approve or deny membership requests. Without a Microsoft Entra ID P1 or P2 license, users can still manage their groups in the MyApp Groups Access panel, but they can't create a group that requires owner approval and they can't request to join a group.
+> A Microsoft Entra ID P1 or P2 license is required for users to request to join a security group or Microsoft 365 group and for owners to approve or deny membership requests. Without a Microsoft Entra ID P1 or P2 license, users can still manage their groups in the MyApp Groups Access panel, but they can't create a group that requires owner approval and they can't request to join a group.
## Group settings
active-directory Users Revoke Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-revoke-access.md
Once admins have taken the above steps, the user can't gain new tokens for any a
## Best practices -- Deploy an automated provisioning and deprovisioning solution. Deprovisioning users from applications is an effective way of revoking access, especially for applications that use sessions tokens. Develop a process to deprovision users to apps that don't support automatic provisioning and deprovisioning. Ensure applications revoke their own session tokens and stop accepting Microsoft Entra ID access tokens even if they're still valid.
+- Deploy an automated provisioning and deprovisioning solution. Deprovisioning users from applications is an effective way of revoking access, especially for applications that use session tokens. Develop a process to deprovision users from apps that don't support automatic provisioning and deprovisioning. Ensure applications revoke their own session tokens and stop accepting Microsoft Entra access tokens even if they're still valid.
- Use [Microsoft Entra SaaS App Provisioning](../app-provisioning/user-provisioning.md). Microsoft Entra SaaS App Provisioning typically runs automatically every 20-40 minutes. [Configure Microsoft Entra provisioning](../saas-apps/tutorial-list.md) to deprovision or deactivate disabled users in applications.
active-directory Tenant Restrictions V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tenant-restrictions-v2.md
Tenant restrictions v2 policies can't be directly enforced on non-Windows 10, Wi
### Migrate tenant restrictions v1 policies to v2
-Migration of Tenant Restrictions from V1 to V2 is an one time operation. Once you have moved from TRv1 to TRv2 on proxy, no client side changes are required and any policy changes need to just happen on the cloud via Entra portal.
+Migrating tenant restriction policies from v1 to v2 is a one-time operation. After migration, no client-side changes are required. You can make any subsequent policy changes via the Microsoft Entra admin center.
On your corporate proxy, you can move from tenant restrictions v1 to tenant restrictions v2 by changing this tenant restrictions v1 header:
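The digest truncates the header example that follows. As a rough sketch based on the tenant restrictions documentation (the tenant name, tenant ID, and policy GUID below are illustrative placeholders, not values from this change), the proxy-side change amounts to swapping the v1 headers for the single v2 header:

```
# Tenant restrictions v1 (before) - illustrative values
Restrict-Access-To-Tenants: contoso.onmicrosoft.com
Restrict-Access-Context: aaaabbbb-0000-cccc-1111-dddd2222eeee

# Tenant restrictions v2 (after) - format is <tenant ID>:<policy GUID>
sec-Restrict-Tenant-Access-Policy: aaaabbbb-0000-cccc-1111-dddd2222eeee:11112222-3333-4444-5555-666677778888
```

Because the enforcement decision moves to the cloud-side policy identified by the GUID, subsequent allow/deny changes need only be made in the policy itself, which is what makes the migration a one-time proxy change.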
active-directory Licensing Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/licensing-fundamentals.md
# Microsoft Entra ID Governance licensing fundamentals
-The following tables show the licensing requirements for Microsoft Entra ID Governance features
+The following tables show the licensing requirements for Microsoft Entra ID Governance features.
## Types of licenses
-The following licenses are available for use with Microsoft Entra ID Governance. The choice of licenses you need in a tenant depends on the features you're using in that tenant.
+The following licenses are available for use with Microsoft Entra ID Governance in the commercial cloud. The choice of licenses you need in a tenant depends on the features you're using in that tenant.
- **Free** - Included with Microsoft cloud subscriptions such as Microsoft Azure, Microsoft 365, and others.-- **Microsoft Entra ID P1** - Microsoft Entra ID P1 (becoming Microsoft Entra ID P1) is available as a standalone product or included with Microsoft 365 E3 for enterprise customers and Microsoft 365 Business Premium for small to medium businesses. -- **Microsoft Entra ID P2** - Microsoft Entra ID P2 (becoming Microsoft Entra ID P2) is available as a standalone product or included with Microsoft 365 E5 for enterprise customers.-- **Microsoft Entra ID Governance** - Microsoft Entra ID Governance is an advanced set of identity governance capabilities available for Microsoft Entra ID P1 and P2 customers, as two products **Microsoft Entra ID Governance** and **Microsoft Entra ID Governance Step Up for Microsoft Entra ID P2**.
+- **Microsoft Entra ID P1** - Microsoft Entra ID P1 is available as a standalone product or included with Microsoft 365 E3 for enterprise customers and Microsoft 365 Business Premium for small to medium businesses.
+- **Microsoft Entra ID P2** - Microsoft Entra ID P2 is available as a standalone product or included with Microsoft 365 E5 for enterprise customers.
+- **Microsoft Entra ID Governance** - Microsoft Entra ID Governance is an advanced set of identity governance capabilities available for Microsoft Entra ID P1 and P2 customers, as two products **Microsoft Entra ID Governance** and **Microsoft Entra ID Governance Step Up for Microsoft Entra ID P2**. These products contain the basic identity governance capabilities that were in Microsoft Entra ID P2, and additional advanced identity governance capabilities.
>[!NOTE] >Microsoft Entra ID Governance scenarios may depend upon other features that aren't covered by Microsoft Entra ID Governance. These features may have additional licensing requirements. See [Governance capabilities in other Microsoft Entra features](identity-governance-overview.md#governance-capabilities-in-other-microsoft-entra-features) for more information on governance scenarios that rely on additional features.
+Microsoft Entra ID Governance products are not yet available in the US government or US national clouds.
-### Prerequisites
+### Governance products and prerequisites
-The Microsoft Entra ID Governance capabilities are currently available in two products. These two products provide the same identity governance capabilities. The difference between the two products is that they have different prerequisites.
+The Microsoft Entra ID Governance capabilities are currently available in two products in the commercial cloud. These two products provide the same identity governance capabilities. The difference between the two products is that they have different prerequisites.
- A subscription to **Microsoft Entra ID Governance** requires that the tenant also have an active subscription to another product, one that contains the `AAD_PREMIUM` or `AAD_PREMIUM_P2` service plan. Examples of products meeting this prerequisite include **Microsoft Entra ID P1** or **Microsoft 365 E3**. - A subscription to **Microsoft Entra ID Governance Step Up for Microsoft Entra ID P2** requires that the tenant also have an active subscription to another product, one that contains the `AAD_PREMIUM_P2` service plan. Examples of products meeting this prerequisite include **Microsoft Entra ID P2** or **Microsoft 365 E5**.
active-directory How To Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-troubleshoot.md
You can filter the view to focus on specific problems, such as dates. Double-cli
This information provides detailed steps and where the synchronization problem is occurring. In this way, you can pinpoint the exact spot of the problem.
+#### Microsoft Entra ID object deletion threshold
+
+If you have an implementation topology with Microsoft Entra Connect and Microsoft Entra Connect cloud sync, both exporting to the same Microsoft Entra tenant, or if you have completely moved from using Microsoft Entra Connect to Microsoft Entra Connect cloud sync, you might get the following export error message when you're deleting or moving multiple objects out of the defined scope:
+
+![Screenshot that shows the export error.](media/how-to-troubleshoot/log-4.png)
+
+This error isn't related to the [Microsoft Entra Connect Cloud Sync accidental deletions prevention feature](../cloud-sync/how-to-accidental-deletes.md). It's triggered by the [accidental deletion prevention feature](../connect/how-to-connect-sync-feature-prevent-accidental-deletes.md) that Microsoft Entra Connect sets in the Microsoft Entra directory.
+If you don't have a Microsoft Entra Connect server from which you can toggle the feature, you can use the [AADCloudSyncTools](../cloud-sync/reference-powershell.md) PowerShell module, installed with the Microsoft Entra Connect cloud sync agent, to disable the setting on the tenant and allow the blocked deletions to export, after you confirm that they're expected and should be allowed. Use the following command:
+
+```PowerShell
+Disable-AADCloudSyncToolsDirSyncAccidentalDeletionPrevention -tenantId "340ab039-c6b1-48a5-9ba7-28fe88f83980"
+```
+
+During the next provisioning cycle, the objects that were marked for deletion should be deleted from the Microsoft Entra directory successfully.
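+After the blocked deletions have been exported, you may want to turn the protection back on. The following sketch assumes the AADCloudSyncTools module exposes an `Enable-` counterpart to the `Disable-` cmdlet; verify the exact cmdlet name in your installed module before running it:
+
+```PowerShell
+# List the cmdlets the module actually exposes; the Enable- name below is an
+# assumption mirroring the Disable- cmdlet and may differ in your module version.
+Get-Command -Module AADCloudSyncTools
+
+# Re-enable accidental deletion prevention on the tenant
+Enable-AADCloudSyncToolsDirSyncAccidentalDeletionPrevention -tenantId "340ab039-c6b1-48a5-9ba7-28fe88f83980"
+```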
+ ## Provisioning quarantined problems Cloud sync monitors the health of your configuration, and places unhealthy objects in a quarantine state. If most or all of the calls made against the target system consistently fail because of an error (for example, invalid admin credentials), the sync job is marked as in quarantine.
active-directory How Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md
For a full list of Azure CLI identity commands, see [az identity](/cli/azure/ide
For information on how to assign a user-assigned managed identity to an Azure VM, see [Configure managed identities for Azure resources on an Azure VM using Azure CLI](qs-configure-cli-windows-vm.md#user-assigned-managed-identity).
-Learn how to use [workload identity federation for managed identities](../workload-identities/workload-identity-federation.md) to access Microsoft Entra ID protected resources without managing secrets.
+Learn how to use [workload identity federation for managed identities](../workload-identities/workload-identity-federation.md) to access Microsoft Entra protected resources without managing secrets.
::: zone-end
Remove-AzUserAssignedIdentity -ResourceGroupName <RESOURCE GROUP> -Name <USER AS
For a full list and more details of the Azure PowerShell managed identities for Azure resources commands, see [Az.ManagedServiceIdentity](/powershell/module/az.managedserviceidentity#managed_service_identity).
-Learn how to use [workload identity federation for managed identities](../workload-identities/workload-identity-federation.md) to access Microsoft Entra ID protected resources without managing secrets.
+Learn how to use [workload identity federation for managed identities](../workload-identities/workload-identity-federation.md) to access Microsoft Entra protected resources without managing secrets.
::: zone-end
To create a user-assigned managed identity, use the following template. Replace
To assign a user-assigned managed identity to an Azure VM using a Resource Manager template, see [Configure managed identities for Azure resources on an Azure VM using a template](qs-configure-template-windows-vm.md).
-Learn how to use [workload identity federation for managed identities](../workload-identities/workload-identity-federation.md) to access Microsoft Entra ID protected resources without managing secrets.
+Learn how to use [workload identity federation for managed identities](../workload-identities/workload-identity-federation.md) to access Microsoft Entra protected resources without managing secrets.
::: zone-end
For information on how to assign a user-assigned managed identity to an Azure VM
- [Configure managed identities for Azure resources on an Azure VM using REST API calls](qs-configure-rest-vm.md#user-assigned-managed-identity) - [Configure managed identities for Azure resources on a virtual machine scale set using REST API calls](qs-configure-rest-vmss.md#user-assigned-managed-identity)
-Learn how to use [workload identity federation for managed identities](../workload-identities/workload-identity-federation.md) to access Microsoft Entra ID protected resources without managing secrets.
+Learn how to use [workload identity federation for managed identities](../workload-identities/workload-identity-federation.md) to access Microsoft Entra protected resources without managing secrets.
::: zone-end
active-directory How To View Managed Identity Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-activity.md
System-assigned identity:
* [Managed identities for Azure resources](./overview.md) * [Azure Activity log](../../azure-monitor/essentials/activity-log.md)
-* [Microsoft Entra sign-ins log](../reports-monitoring/concept-sign-ins.md)
+* [Microsoft Entra sign-in log](../reports-monitoring/concept-sign-ins.md)
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
The following Azure services support managed identities for Azure resources:
| Azure Service Fabric | [Using Managed identities for Azure with Service Fabric](../../service-fabric/concepts-managed-identity.md) | | Azure SignalR Service | [Managed identities for Azure SignalR Service](../../azure-signalr/howto-use-managed-identity.md) | | Azure Spring Apps | [Enable system-assigned managed identity for an application in Azure Spring Apps](../../spring-apps/how-to-enable-system-assigned-managed-identity.md) |
-| Azure SQL | [Managed identities in Microsoft Entra ID for Azure SQL](/azure/azure-sql/database/authentication-azure-ad-user-assigned-managed-identity) |
-| Azure SQL Managed Instance | [Managed identities in Microsoft Entra ID for Azure SQL](/azure/azure-sql/database/authentication-azure-ad-user-assigned-managed-identity) |
+| Azure SQL | [Managed identities in Microsoft Entra for Azure SQL](/azure/azure-sql/database/authentication-azure-ad-user-assigned-managed-identity) |
+| Azure SQL Managed Instance | [Managed identities in Microsoft Entra for Azure SQL](/azure/azure-sql/database/authentication-azure-ad-user-assigned-managed-identity) |
| Azure Stack Edge | [Manage Azure Stack Edge secrets using Azure Key Vault](../../databox-online/azure-stack-edge-gpu-activation-key-vault.md#recover-managed-identity-access) | Azure Static Web Apps | [Securing authentication secrets in Azure Key Vault](../../static-web-apps/key-vault-secrets.md) | Azure Stream Analytics | [Authenticate Stream Analytics to Azure Data Lake Storage Gen1 using managed identities](../../stream-analytics/stream-analytics-managed-identities-adls.md) |
active-directory Overview For Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview-for-developers.md
Tokens should be treated like credentials. Don't expose them to users or other s
* [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md) * [How to use managed identities with Azure Container Instances](../../container-instances/container-instances-managed-identity.md) * [Implementing managed identities for Microsoft Azure Resources](https://www.pluralsight.com/courses/microsoft-azure-resources-managed-identities-implementing)
-* Use [workload identity federation for managed identities](../workload-identities/workload-identity-federation.md) to access Microsoft Entra ID protected resources without managing secrets
+* Use [workload identity federation for managed identities](../workload-identities/workload-identity-federation.md) to access Microsoft Entra protected resources without managing secrets
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview.md
Operations on managed identities can be performed by using an Azure Resource Man
* [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md) * [How to use managed identities with Azure Container Instances](../../container-instances/container-instances-managed-identity.md) * [Implementing managed identities for Microsoft Azure Resources](https://www.pluralsight.com/courses/microsoft-azure-resources-managed-identities-implementing)
-* Use [workload identity federation for managed identities](../workload-identities/workload-identity-federation.md) to access Microsoft Entra ID protected resources without managing secrets
+* Use [workload identity federation for managed identities](../workload-identities/workload-identity-federation.md) to access Microsoft Entra protected resources without managing secrets
active-directory Cross Tenant Synchronization Configure Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure-graph.md
You are likely trying to update an object that doesn't exist using `PATCH`.
## Next steps -- [Azure AD synchronization API overview](/graph/api/resources/synchronization-overview)
+- [Microsoft Entra synchronization API overview](/graph/api/resources/synchronization-overview)
- [Tutorial: Develop and plan provisioning for a SCIM endpoint in Microsoft Entra ID](../app-provisioning/use-scim-to-provision-users-and-groups.md)
active-directory Cross Tenant Synchronization Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md
Regardless of the value you selected for **Scope** in the previous step, you can
1. In the source tenant, select **Provisioning** and expand the **Mappings** section.
-1. Select **Provision Azure Active Directory Users**.
+1. Select **Provision Microsoft Entra users**.
:::image type="content" source="./media/cross-tenant-synchronization-configure/provisioning-mappings.png" alt-text="Screenshot that shows the Provisioning page with the Mappings section expanded." lightbox="./media/cross-tenant-synchronization-configure/provisioning-mappings.png":::
Attribute mappings allow you to define how data should flow between the source t
1. In the source tenant, select **Provisioning** and expand the **Mappings** section.
-1. Select **Provision Azure Active Directory Users**.
+1. Select **Provision Microsoft Entra users**.
1. On the **Attribute Mapping** page, scroll down to review the user attributes that are synchronized between tenants in the **Attribute Mappings** section.
active-directory Azure Pim Resource Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/azure-pim-resource-rbac.md
# View activity and audit history for Azure resource roles in Privileged Identity Management
-Privileged Identity Management (PIM) in Microsoft Entra ID, enables you to view activity, activations, and audit history for Azure resources roles within your organization. This includes subscriptions, resource groups, and even virtual machines. Any resource within the Microsoft Entra admin center that leverages the Azure role-based access control functionality can take advantage of the security and lifecycle management capabilities in Privileged Identity Management. If you want to retain audit data for longer than the default retention period, you can use Azure Monitor to route it to an Azure storage account. For more information, see [Archive Microsoft Entra ID logs to an Azure storage account](../reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md).
+Privileged Identity Management (PIM) in Microsoft Entra ID enables you to view activity, activations, and audit history for Azure resource roles within your organization. This includes subscriptions, resource groups, and even virtual machines. Any resource within the Microsoft Entra admin center that leverages the Azure role-based access control functionality can take advantage of the security and lifecycle management capabilities in Privileged Identity Management. If you want to retain audit data for longer than the default retention period, you can use Azure Monitor to route it to an Azure storage account. For more information, see [Archive Microsoft Entra logs to an Azure storage account](../reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md).
> [!NOTE]
> If your organization has outsourced management functions to a service provider who uses [Azure Lighthouse](../../lighthouse/overview.md), role assignments authorized by that service provider won't be shown here.
active-directory Concept Pim For Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-pim-for-groups.md
Microsoft Entra role-assignable group feature is not part of Microsoft Entra Pri
## Relationship between role-assignable groups and PIM for Groups
-Groups in Azure AD can be classified as either role-assignable or non-role-assignable. Additionally, any group can be enabled or not enabled for use with Azure AD Privileged Identity Management (PIM) for Groups. These are independent properties of the group. Any Microsoft Entra security group and any Microsoft 365 group (except dynamic groups and groups synchronized from on-premises environment) can be enabled in PIM for Groups. The group doesn't have to be role-assignable group to be enabled in PIM for Groups.
+Groups in Microsoft Entra ID can be classified as either role-assignable or non-role-assignable. Additionally, any group can be enabled or not enabled for use with Microsoft Entra Privileged Identity Management (PIM) for Groups. These are independent properties of the group. Any Microsoft Entra security group and any Microsoft 365 group (except dynamic groups and groups synchronized from an on-premises environment) can be enabled in PIM for Groups. The group doesn't have to be a role-assignable group to be enabled in PIM for Groups.
If you want to assign a Microsoft Entra role to a group, it has to be role-assignable. Even if you don't intend to assign a Microsoft Entra role to the group but the group provides access to sensitive resources, it is still recommended to consider creating the group as role-assignable. This is because of extra protections role-assignable groups have – see ["What are Microsoft Entra role-assignable groups?"](#what-are-entra-id-role-assignable-groups) in the section above.
active-directory Groups Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-approval-workflow.md
# Approve activation requests for group members and owners
-With Privileged Identity Management (PIM) and Microsoft Entra ID (Previously known as Azure AD), you can configure activation of group membership and ownership to require approval. You can also choose users or groups from your Microsoft Entra organization as delegated approvers.
+With Privileged Identity Management (PIM) and Microsoft Entra ID, you can configure activation of group membership and ownership to require approval. You can also choose users or groups from your Microsoft Entra organization as delegated approvers.
We recommend that you select two or more approvers for each group. Delegated approvers have 24 hours to approve requests. If a request isn't approved within 24 hours, the eligible user must resubmit a new request. The 24-hour approval time window isn't configurable.
active-directory Pim Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-deployment-plan.md
PIM enables you to allow a specific set of actions at a particular scope. Key fe
* Require **approval** to activate privileged roles
-* Enforce **multifactor authentication** to activate any role
+* Enforce **Multifactor authentication** to activate any role
* Use **justification** to understand why users activate
active-directory Pim Email Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-email-notifications.md
# Email notifications in PIM
-Privileged Identity Management (PIM) lets you know when important events occur in your Microsoft Entra ID (Previously known as Azure AD) organization, such as when a role is assigned or activated. Privileged Identity Management keeps you informed by sending you and other participants email notifications. These emails might also include links to relevant tasks, such activating or renewing a role. This article describes what these emails look like, when they are sent, and who receives them.
+Privileged Identity Management (PIM) lets you know when important events occur in your Microsoft Entra organization, such as when a role is assigned or activated. Privileged Identity Management keeps you informed by sending you and other participants email notifications. These emails might also include links to relevant tasks, such as activating or renewing a role. This article describes what these emails look like, when they are sent, and who receives them.
>[!NOTE]
>One event in Privileged Identity Management can generate email notifications to multiple recipients – assignees, approvers, or administrators. The maximum number of notifications sent per event is 1000. If the number of recipients exceeds 1000, only the first 1000 recipients will receive an email notification. This does not prevent other assignees, administrators, or approvers from using their permissions in Microsoft Entra ID and Privileged Identity Management.
active-directory Pim How To Use Audit Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-use-audit-log.md
# View audit history for Microsoft Entra roles in Privileged Identity Management
-You can use the Microsoft Entra Privileged Identity Management (PIM) audit history to see all role assignments and activations within the past 30 days for all privileged roles. If you want to retain audit data for longer than the default retention period, you can use Azure Monitor to route it to an Azure storage account. For more information, see [Archive Microsoft Entra ID logs to an Azure storage account](../reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md). If you want to see the full audit history of activity in your organization in Microsoft Entra ID including administrator, end user, and synchronization activity, you can use the [Microsoft Entra security and activity reports](../reports-monitoring/overview-reports.md).
+You can use the Microsoft Entra Privileged Identity Management (PIM) audit history to see all role assignments and activations within the past 30 days for all privileged roles. If you want to retain audit data for longer than the default retention period, you can use Azure Monitor to route it to an Azure storage account. For more information, see [Archive Microsoft Entra logs to an Azure storage account](../reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md). If you want to see the full audit history of activity in your organization in Microsoft Entra ID including administrator, end user, and synchronization activity, you can use the [Microsoft Entra security and activity reports](../reports-monitoring/overview-reports.md).
Follow these steps to view the audit history for Microsoft Entra roles.
active-directory Kairos Business Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kairos-business-tutorial.md
+
+ Title: Microsoft Entra SSO integration with Kairos Business
+description: Learn how to configure single sign-on between Microsoft Entra ID and Kairos Business.
+ Last updated : 09/28/2023
+# Microsoft Entra SSO integration with Kairos Business
+
+In this tutorial, you'll learn how to integrate Kairos Business with Microsoft Entra ID. When you integrate Kairos Business with Microsoft Entra ID, you can:
+
+* Control in Microsoft Entra ID who has access to Kairos Business.
+* Enable your users to be automatically signed-in to Kairos Business with their Microsoft Entra accounts.
+* Manage your accounts in one central location.
+
+## Prerequisites
+
+To integrate Microsoft Entra ID with Kairos Business, you need:
+
+* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Kairos Business single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Microsoft Entra SSO in a test environment.
+
+* Kairos Business supports **IDP** initiated SSO.
+* Kairos Business supports **Just In Time** user provisioning.
+
+## Add Kairos Business from the gallery
+
+To configure the integration of Kairos Business into Microsoft Entra ID, you need to add Kairos Business from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**.
+1. In the **Add from the gallery** section, type **Kairos Business** in the search box.
+1. Select **Kairos Business** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Microsoft Entra SSO for Kairos Business
+
+Configure and test Microsoft Entra SSO with Kairos Business using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Kairos Business.
+
+To configure and test Microsoft Entra SSO with Kairos Business, perform the following steps:
+
+1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature.
+ 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon.
+ 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on.
+1. **[Configure Kairos Business SSO](#configure-kairos-business-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Kairos Business test user](#create-kairos-business-test-user)** - to have a counterpart of B.Simon in Kairos Business that is linked to the Microsoft Entra ID representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Microsoft Entra SSO
+
+Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Kairos Business** > **Single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+    a. In the **Identifier** text box, type one of the following values/patterns:
+
+ | **Identifier** |
+ ||
+ | `KairoBusiness`|
+ | `<KairoBusiness_ENTITY_ID>`|
+
+    > [!NOTE]
+    > `<KairoBusiness_ENTITY_ID>` is not a real value. Update it with the actual Entity ID.
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://www.dimepkairos.com.br/Dimep/Account/SamlLogon`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificateraw.png "Certificate")
+
+1. On the **Set up Kairos Business** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration URLs.](common/copy-configuration-urls.png "Metadata")
+
+### Create a Microsoft Entra ID test user
+
+In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. Select **New user** > **Create new user**, at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Display name** field, enter `B.Simon`.
+ 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Select **Review + create**.
+1. Select **Create**.
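+If you prefer to script the steps above, a roughly equivalent sketch with Microsoft Graph PowerShell follows. The UPN and password below are placeholders; replace them with values for your tenant:
+
+```PowerShell
+Connect-MgGraph -Scopes "User.ReadWrite.All"
+
+# Placeholder password; replace before running
+$passwordProfile = @{
+    Password                      = "<REPLACE_WITH_STRONG_PASSWORD>"
+    ForceChangePasswordNextSignIn = $true
+}
+
+# Create the B.Simon test user
+New-MgUser -DisplayName "B.Simon" `
+    -UserPrincipalName "B.Simon@contoso.com" `
+    -MailNickname "B.Simon" `
+    -AccountEnabled `
+    -PasswordProfile $passwordProfile
+```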
+
+### Assign the Microsoft Entra ID test user
+
+In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to Kairos Business.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Kairos Business**.
+1. In the app's overview page, select **Users and groups**.
+1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog.
+ 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+ 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+ 1. In the **Add Assignment** dialog, click the **Assign** button.
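+The assignment above can also be scripted with Microsoft Graph PowerShell. This is a minimal sketch; the display name filter and the all-zeros `AppRoleId` (which corresponds to the Default Access role) are assumptions to confirm in your tenant:
+
+```PowerShell
+Connect-MgGraph -Scopes "AppRoleAssignment.ReadWrite.All", "Application.Read.All", "User.Read.All"
+
+# Look up the app's service principal and the test user
+$sp   = Get-MgServicePrincipal -Filter "displayName eq 'Kairos Business'"
+$user = Get-MgUser -UserId "B.Simon@contoso.com"
+
+# An AppRoleId of all zeros assigns the app's Default Access role
+New-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $sp.Id `
+    -PrincipalId $user.Id -ResourceId $sp.Id `
+    -AppRoleId "00000000-0000-0000-0000-000000000000"
+```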
+
+## Configure Kairos Business SSO
+
+To configure single sign-on on the **Kairos Business** side, you need to send the downloaded **Certificate (Raw)** and the appropriate copied URLs from the Microsoft Entra admin center to the [Kairos Business support team](mailto:dimep@dimep.com.br). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Kairos Business test user
+
+In this section, a user called Britta Simon is created in Kairos Business. Kairos Business supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Kairos Business, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Microsoft Entra single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Microsoft Entra admin center and you should be automatically signed in to the Kairos Business for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the Kairos Business tile in the My Apps, you should be automatically signed in to the Kairos Business for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Kairos Business you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Senomix Timesheets Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/senomix-timesheets-tutorial.md
+
+ Title: Microsoft Entra SSO integration with Senomix Timesheets
+description: Learn how to configure single sign-on between Microsoft Entra ID and Senomix Timesheets.
+ Last updated : 10/06/2023
+# Microsoft Entra SSO integration with Senomix Timesheets
+
+In this tutorial, you'll learn how to integrate Senomix Timesheets with Microsoft Entra ID. When you integrate Senomix Timesheets with Microsoft Entra ID, you can:
+
+* Control in Microsoft Entra ID who has access to Senomix Timesheets.
+* Enable your users to be automatically signed-in to Senomix Timesheets with their Microsoft Entra accounts.
+* Manage your accounts in one central location.
+
+## Prerequisites
+
+To integrate Microsoft Entra ID with Senomix Timesheets, you need:
+
+* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Senomix Timesheets single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Microsoft Entra SSO in a test environment.
+
+* Senomix Timesheets supports both **SP and IDP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Senomix Timesheets from the gallery
+
+To configure the integration of Senomix Timesheets into Microsoft Entra ID, you need to add Senomix Timesheets from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**.
+1. In the **Add from the gallery** section, type **Senomix Timesheets** in the search box.
+1. Select **Senomix Timesheets** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Microsoft Entra SSO for Senomix Timesheets
+
+Configure and test Microsoft Entra SSO with Senomix Timesheets using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Senomix Timesheets.
+
+To configure and test Microsoft Entra SSO with Senomix Timesheets, perform the following steps:
+
+1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature.
+ 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon.
+ 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on.
+1. **[Configure Senomix Timesheets SSO](#configure-senomix-timesheets-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Senomix Timesheets test user](#create-senomix-timesheets-test-user)** - to have a counterpart of B.Simon in Senomix Timesheets that is linked to the Microsoft Entra ID representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Microsoft Entra SSO
+
+Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Senomix Timesheets** > **Single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type the URL:
+ `https://timesheet.senomix.com/`
+
+ b. In the **Reply URL** text box, type the URL using the following pattern:
+ `https://www.senomix.com/simplesaml/module.php/saml/sp/saml2-acs.php/<CUSTOMER_AZURE_TENANT_ID>`
+
+ c. In the **Relay State** text box, type the URL using the following pattern:
+ `https://www.senomix.com/saml_sso/<CUSTOMER_AZURE_TENANT_ID>`
+
+1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
+
+    In the **Sign on URL** text box, type one of the following URLs/patterns:
+
+    | **Sign on URL** |
+    |--|
+    |`https://www.senomix.com/timesheet`|
+    |`https://www.senomix.com/saml_sso/<CUSTOMER_AZURE_TENANT_ID>`|
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Reply URL, Relay State and Sign on URL. Contact [Senomix Timesheets support team](mailto:support@senomix.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Senomix Timesheets** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration URLs.](common/copy-configuration-urls.png "Metadata")
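The Reply URL and Relay State patterns in the Basic SAML Configuration above differ only by your Azure tenant ID. As an illustration only (a hypothetical helper, not part of the tutorial), they can be built like this:

```python
# Hypothetical helper: builds the Senomix URL values from your Azure tenant ID
# (the <CUSTOMER_AZURE_TENANT_ID> placeholder). Confirm the real values with
# the Senomix Timesheets support team before use.
def senomix_urls(tenant_id: str) -> dict:
    base = "https://www.senomix.com"
    return {
        "identifier": "https://timesheet.senomix.com/",
        "reply_url": f"{base}/simplesaml/module.php/saml/sp/saml2-acs.php/{tenant_id}",
        "relay_state": f"{base}/saml_sso/{tenant_id}",
    }

# Example with a placeholder tenant ID:
urls = senomix_urls("00000000-0000-0000-0000-000000000000")
print(urls["reply_url"])
```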
+
+### Create a Microsoft Entra ID test user
+
+In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. Select **New user** > **Create new user**, at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Display name** field, enter `B.Simon`.
+ 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Select **Review + create**.
+1. Select **Create**.
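The portal steps above can also be scripted. As a hedged sketch (not part of the official tutorial), the request body for the Microsoft Graph `POST https://graph.microsoft.com/v1.0/users` call that creates the same test user would look roughly like the following; the domain and password are placeholders, and token acquisition is omitted:

```python
import json

# Sketch of the Microsoft Graph "create user" request body for the B.Simon
# test user. "contoso.com" and the password value are placeholders; replace
# them with your verified domain and a generated strong password.
payload = {
    "accountEnabled": True,
    "displayName": "B.Simon",
    "mailNickname": "B.Simon",
    "userPrincipalName": "B.Simon@contoso.com",
    "passwordProfile": {
        "forceChangePasswordNextSignIn": True,
        "password": "<generated-strong-password>",
    },
}
print(json.dumps(payload, indent=2))
```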
+
+### Assign the Microsoft Entra ID test user
+
+In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to Senomix Timesheets.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Senomix Timesheets**.
+1. In the app's overview page, select **Users and groups**.
+1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog.
+ 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+ 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+ 1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Senomix Timesheets SSO
+
+To configure single sign-on on the **Senomix Timesheets** side, send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Microsoft Entra admin center to the [Senomix Timesheets support team](mailto:support@senomix.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Senomix Timesheets test user
+
+In this section, you create a user called B.Simon in Senomix Timesheets. Work with [Senomix Timesheets support team](mailto:support@senomix.com) to add the users in the Senomix Timesheets platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Microsoft Entra single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Microsoft Entra admin center. This redirects to the Senomix Timesheets Sign-on URL, where you can initiate the login flow.
+
+* Go to the Senomix Timesheets Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Microsoft Entra admin center, and you should be automatically signed in to the Senomix Timesheets instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Senomix Timesheets tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the Senomix Timesheets instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Senomix Timesheets, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Splashtop Secure Workspace Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/splashtop-secure-workspace-tutorial.md
+
+ Title: Microsoft Entra SSO integration with Splashtop Secure Workspace
+description: Learn how to configure single sign-on between Microsoft Entra ID and Splashtop Secure Workspace.
+Last updated: 09/28/2023
+# Microsoft Entra SSO integration with Splashtop Secure Workspace
+
+In this tutorial, you'll learn how to integrate Splashtop Secure Workspace with Microsoft Entra ID. When you integrate Splashtop Secure Workspace with Microsoft Entra ID, you can:
+
+* Control in Microsoft Entra ID who has access to Splashtop Secure Workspace.
+* Enable your users to be automatically signed-in to Splashtop Secure Workspace with their Microsoft Entra accounts.
+* Manage your accounts in one central location.
+
+## Prerequisites
+
+To integrate Microsoft Entra ID with Splashtop Secure Workspace, you need:
+
+* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Splashtop Secure Workspace single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Microsoft Entra SSO in a test environment.
+
+* Splashtop Secure Workspace supports **SP** initiated SSO.
+* Splashtop Secure Workspace supports **Just In Time** user provisioning.
+
+## Add Splashtop Secure Workspace from the gallery
+
+To configure the integration of Splashtop Secure Workspace into Microsoft Entra ID, you need to add Splashtop Secure Workspace from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**.
+1. In the **Add from the gallery** section, type **Splashtop Secure Workspace** in the search box.
+1. Select **Splashtop Secure Workspace** from the results panel, and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Microsoft Entra SSO for Splashtop Secure Workspace
+
+Configure and test Microsoft Entra SSO with Splashtop Secure Workspace using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Splashtop Secure Workspace.
+
+To configure and test Microsoft Entra SSO with Splashtop Secure Workspace, perform the following steps:
+
+1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature.
+ 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon.
+ 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on.
+1. **[Configure Splashtop Secure Workspace SSO](#configure-splashtop-secure-workspace-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Splashtop Secure Workspace test user](#create-splashtop-secure-workspace-test-user)** - to have a counterpart of B.Simon in Splashtop Secure Workspace that is linked to the Microsoft Entra ID representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Microsoft Entra SSO
+
+Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Splashtop Secure Workspace** > **Single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://<ORG.ORG_NAME>.us.ssw.splashtop.com/realms/<ORG.ENTITY_ID>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<ORG.ORG_NAME>.us.ssw.splashtop.com/realms/<ORG.ORG_NAME>/broker/<ORG.ENTITY_ID>/endpoint`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<ORG.ORG_NAME>.us.ssw.splashtop.com`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Splashtop Secure Workspace support team](mailto:support-ssw@splashtop.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Splashtop Secure Workspace** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration URLs.](common/copy-configuration-urls.png "Metadata")
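The three URL patterns in the Basic SAML Configuration above are all derived from the `<ORG.ORG_NAME>` and `<ORG.ENTITY_ID>` placeholders. As an illustration only (a hypothetical helper, not part of the tutorial; US region shown), they can be built like this:

```python
# Hypothetical helper: builds the Splashtop Secure Workspace SAML URL values
# from the <ORG.ORG_NAME> and <ORG.ENTITY_ID> placeholders. Confirm the real
# values with the Splashtop Secure Workspace support team before use.
def splashtop_urls(org_name: str, entity_id: str) -> dict:
    base = f"https://{org_name}.us.ssw.splashtop.com"
    return {
        "identifier": f"{base}/realms/{entity_id}",
        "reply_url": f"{base}/realms/{org_name}/broker/{entity_id}/endpoint",
        "sign_on_url": base,
    }

# Example with placeholder org values:
urls = splashtop_urls("contoso", "entra-id")
print(urls["reply_url"])
```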
+
+### Create a Microsoft Entra ID test user
+
+In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. Select **New user** > **Create new user**, at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Display name** field, enter `B.Simon`.
+ 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Select **Review + create**.
+1. Select **Create**.
+
+### Assign the Microsoft Entra ID test user
+
+In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to Splashtop Secure Workspace.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Splashtop Secure Workspace**.
+1. In the app's overview page, select **Users and groups**.
+1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog.
+ 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+ 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+ 1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Splashtop Secure Workspace SSO
+
+To configure single sign-on on the **Splashtop Secure Workspace** side, send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Microsoft Entra admin center to the [Splashtop Secure Workspace support team](mailto:support-ssw@splashtop.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Splashtop Secure Workspace test user
+
+In this section, a user called B.Simon is created in Splashtop Secure Workspace. Splashtop Secure Workspace supports just-in-time provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Splashtop Secure Workspace, a new one is created when you attempt to access Splashtop Secure Workspace.
+
+## Test SSO
+
+In this section, you test your Microsoft Entra single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Microsoft Entra admin center. This redirects to the Splashtop Secure Workspace Sign-on URL, where you can initiate the login flow.
+
+* Go to the Splashtop Secure Workspace Sign-on URL directly and initiate the login flow from there.
+
+* You can also use Microsoft My Apps. When you click the Splashtop Secure Workspace tile in My Apps, you're redirected to the Splashtop Secure Workspace Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Splashtop Secure Workspace, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Treasury Intelligence Solutions Tis Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/treasury-intelligence-solutions-tis-tutorial.md
+
+ Title: Microsoft Entra SSO integration with Treasury Intelligence Solutions (TIS)
+description: Learn how to configure single sign-on between Microsoft Entra ID and Treasury Intelligence Solutions (TIS).
+Last updated: 10/06/2023
+# Microsoft Entra SSO integration with Treasury Intelligence Solutions (TIS)
+
+In this tutorial, you'll learn how to integrate Treasury Intelligence Solutions (TIS) with Microsoft Entra ID. When you integrate Treasury Intelligence Solutions (TIS) with Microsoft Entra ID, you can:
+
+* Control in Microsoft Entra ID who has access to Treasury Intelligence Solutions (TIS).
+* Enable your users to be automatically signed-in to Treasury Intelligence Solutions (TIS) with their Microsoft Entra accounts.
+* Manage your accounts in one central location.
+
+## Prerequisites
+
+To integrate Microsoft Entra ID with Treasury Intelligence Solutions (TIS), you need:
+
+* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Treasury Intelligence Solutions (TIS) single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Microsoft Entra SSO in a test environment.
+
+* Treasury Intelligence Solutions (TIS) supports both **SP and IDP** initiated SSO.
+
+## Add Treasury Intelligence Solutions (TIS) from the gallery
+
+To configure the integration of Treasury Intelligence Solutions (TIS) into Microsoft Entra ID, you need to add Treasury Intelligence Solutions (TIS) from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**.
+1. In the **Add from the gallery** section, type **Treasury Intelligence Solutions (TIS)** in the search box.
+1. Select **Treasury Intelligence Solutions (TIS)** from the results panel, and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Microsoft Entra SSO for Treasury Intelligence Solutions (TIS)
+
+Configure and test Microsoft Entra SSO with Treasury Intelligence Solutions (TIS) using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Treasury Intelligence Solutions (TIS).
+
+To configure and test Microsoft Entra SSO with Treasury Intelligence Solutions (TIS), perform the following steps:
+
+1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature.
+ 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon.
+ 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on.
+1. **[Configure Treasury Intelligence Solutions (TIS) SSO](#configure-treasury-intelligence-solutions-tis-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Treasury Intelligence Solutions (TIS) test user](#create-treasury-intelligence-solutions-tis-test-user)** - to have a counterpart of B.Simon in Treasury Intelligence Solutions (TIS) that is linked to the Microsoft Entra ID representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Microsoft Entra SSO
+
+Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Treasury Intelligence Solutions (TIS)** > **Single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+    ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type one of the following URLs:
+
+ | Environment | URL |
+ |-|-|
+ | Production| `https://eu.tispayments.com` , `https://us.tispayments.com` |
+ | Staging | `https://eu-test.tispayments.com` , `https://us-test.tispayments.com` |
+
+ b. In the **Reply URL** text box, type one of the following URLs:
+
+ | Environment | URL |
+ |-|-|
+ | Production| `https://login.eu.tispayments.com/iam-server/SamlSsoLogin` , `https://login.us.tispayments.com/iam-server/SamlSsoLogin` |
+ | Staging | `https://login.eu-test.tispayments.com/iam-server/SamlSsoLogin` , `https://login.us-test.tispayments.com/iam-server/SamlSsoLogin` |
+
+1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
+
+    In the **Sign-on URL** text box, type one of the following URLs:
+
+ | Environment | URL |
+ |-|-|
+ | Production| `https://login.eu.tispayments.com` , `https://login.us.tispayments.com` |
+ | Staging | `https://login.eu-test.tispayments.com` , `https://login.us-test.tispayments.com` |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+    ![Screenshot shows the Certificate download link.](common/certificate-base64-download.png "Certificate")
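The environment tables above pair each Identifier with its Reply URL by environment and region. As an illustration only (a hypothetical lookup table, not part of the tutorial), the pairing can be expressed like this:

```python
# Sketch: the TIS Identifier / Reply URL values from the tables above,
# keyed by (environment, region). Use the pair that matches your TIS tenant.
TIS_URLS = {
    ("production", "eu"): {
        "identifier": "https://eu.tispayments.com",
        "reply_url": "https://login.eu.tispayments.com/iam-server/SamlSsoLogin",
    },
    ("production", "us"): {
        "identifier": "https://us.tispayments.com",
        "reply_url": "https://login.us.tispayments.com/iam-server/SamlSsoLogin",
    },
    ("staging", "eu"): {
        "identifier": "https://eu-test.tispayments.com",
        "reply_url": "https://login.eu-test.tispayments.com/iam-server/SamlSsoLogin",
    },
    ("staging", "us"): {
        "identifier": "https://us-test.tispayments.com",
        "reply_url": "https://login.us-test.tispayments.com/iam-server/SamlSsoLogin",
    },
}

print(TIS_URLS[("staging", "eu")]["reply_url"])
```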
+
+### Create a Microsoft Entra ID test user
+
+In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. Select **New user** > **Create new user**, at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Display name** field, enter `B.Simon`.
+ 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Select **Review + create**.
+1. Select **Create**.
+
+### Assign the Microsoft Entra ID test user
+
+In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to Treasury Intelligence Solutions (TIS).
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Treasury Intelligence Solutions (TIS)**.
+1. In the app's overview page, select **Users and groups**.
+1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog.
+ 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+ 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+ 1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Treasury Intelligence Solutions (TIS) SSO
+
+To configure single sign-on on the **Treasury Intelligence Solutions (TIS)** side, send the downloaded **Certificate (PEM)** and the appropriate copied URLs from the Microsoft Entra admin center to the [Treasury Intelligence Solutions support team](mailto:support@tispayments.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Treasury Intelligence Solutions (TIS) test user
+
+In this section, you create a user called B.Simon in Treasury Intelligence Solutions (TIS). Work with [Treasury Intelligence Solutions (TIS) support team](mailto:support@tispayments.com) to add the users in the Treasury Intelligence Solutions (TIS) platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Microsoft Entra single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Microsoft Entra admin center. This redirects to the Treasury Intelligence Solutions (TIS) Sign-on URL, where you can initiate the login flow.
+
+* Go to the Treasury Intelligence Solutions (TIS) Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Microsoft Entra admin center, and you should be automatically signed in to the Treasury Intelligence Solutions (TIS) instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Treasury Intelligence Solutions (TIS) tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the Treasury Intelligence Solutions (TIS) instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Treasury Intelligence Solutions (TIS), you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
Last updated 08/16/2022
-#Customer intent: As an Microsoft Entra Verified ID issuer, verifier or developer, I want to know what's new in the product so that I can make full use of the functionality as it becomes available.
+#Customer intent: As a Microsoft Entra Verified ID issuer, verifier or developer, I want to know what's new in the product so that I can make full use of the functionality as it becomes available.
Instructions for setting up place of work verification on LinkedIn available [he
## March 2023 - Admin API now supports [application access tokens](admin-api.md#authentication) in addition to user bearer tokens.-- Introducing the Entra Verified ID [Services partner gallery](services-partners.md) listing trusted partners that can help accelerate your Entra Verified ID implementation.
+- Introducing the Microsoft Entra Verified ID [Services partner gallery](services-partners.md) listing trusted partners that can help accelerate your Microsoft Entra Verified ID implementation.
- Improvements to our Administrator onboarding experience in the [Admin portal](verifiable-credentials-configure-tenant.md#register-decentralized-id-and-verify-domain-ownership) based on customer feedback. - Updates to our samples in [github](https://github.com/Azure-Samples/active-directory-verifiable-credentials) showcasing how to dynamically display VC claims. ## February 2023 -- *Public preview* - Entitlement Management customers can now create access packages that leverage Entra Verified ID [learn more](../../active-directory/governance/entitlement-management-verified-id-settings.md)
+- *Public preview* - Entitlement Management customers can now create access packages that leverage Microsoft Entra Verified ID [learn more](../../active-directory/governance/entitlement-management-verified-id-settings.md)
- The Request Service API can now do revocation check for verifiable credentials presented that was issued with [StatusList2021](https://w3c.github.io/vc-status-list-2021/) or the [RevocationList2020](https://w3c-ccg.github.io/vc-status-rl-2020/) status list types.
Instructions for setting up place of work verification on LinkedIn available [he
## November 2022 -- Entra Verified ID now reports events in the [Azure AD Audit Log](../../active-directory/reports-monitoring/concept-audit-logs.md). Only management changes made via the Admin API are currently logged. Issuance or presentations of verifiable credentials aren't reported in the audit log. The log entries have a service name of `Verified ID` and the activity will be `Create authority`, `Update contract`, etc.
+- Microsoft Entra Verified ID now reports events in the [audit log](../../active-directory/reports-monitoring/concept-audit-logs.md). Only management changes made via the Admin API are currently logged. Issuance or presentations of verifiable credentials aren't reported in the audit log. The log entries have a service name of `Verified ID` and the activity will be `Create authority`, `Update contract`, etc.
## September 2022
Microsoft Entra Verified ID is now generally available (GA) as the new member of
## June 2022 - We're adding support for the [did:web](https://w3c-ccg.github.io/did-method-web/) method. Any new tenant that starts using the Verifiable Credentials Service after June 14, 2022, will have Web as a new, default, trust system when [onboarding](verifiable-credentials-configure-tenant.md#set-up-verified-id). VC Administrators can still choose to use ION when setting up a tenant. If you want to use did:web instead of ION or vice versa, you need to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service).-- We're rolling out several features to improve the overall experience of creating verifiable credentials in the Entra Verified ID platform:
+- We're rolling out several features to improve the overall experience of creating verifiable credentials in the Microsoft Entra Verified ID platform:
- Introducing Managed Credentials, which are verifiable credentials that no longer use Azure Storage to store the [display & rules JSON definitions](rules-and-display-definitions-model.md). Their display and rule definitions are different from earlier versions. - Create Managed Credentials using the [new quickstart experience](how-to-use-quickstart.md).
- - Administrators can create a Verified Employee Managed Credential using the [new quick start](how-to-use-quickstart-verifiedemployee.md). The Verified Employee is a verifiable credential of type verifiedEmployee that is based on a predefined set of claims from your tenant's Azure Active Directory.
+ - Administrators can create a Verified Employee Managed Credential using the [new quick start](how-to-use-quickstart-verifiedemployee.md). The Verified Employee is a verifiable credential of type verifiedEmployee that is based on a predefined set of claims from your tenant's directory.
>[!IMPORTANT] > You need to migrate your Azure Storage based credentials to become Managed Credentials. We'll soon provide migration instructions.
Microsoft Entra Verified ID is now generally available (GA) as the new member of
- (new) [How to create verifiable credentials for ID token](how-to-use-quickstart-idtoken.md). - (new) [How to create verifiable credentials for self-asserted claims](how-to-use-quickstart-selfissued.md). - (new) [Rules and Display definition model specification](rules-and-display-definitions-model.md).
- - (new) [Creating an Azure AD tenant for development](how-to-create-a-free-developer-account.md).
+ - (new) [Creating a tenant for development](how-to-create-a-free-developer-account.md).
## May 2022
advisor Advisor Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-cost-recommendations.md
Advisor uses machine-learning algorithms to identify low utilization and to iden
Advisor identifies resources that haven't been used at all over the last 7 days and makes a recommendation to shut them down. - Recommendation criteria include **CPU** and **Outbound Network utilization** metrics. **Memory** isn't considered since we've found that **CPU** and **Outbound Network utilization** are sufficient.-- The last 7 days of utilization data are analyzed. Note that you can change your lookback period in the configurations.
+- The last 7 days of utilization data are analyzed. You can change the lookback period in the configuration; the available lookback periods are 7, 14, 21, 30, 60, and 90 days. After you change the lookback period, it can take up to 48 hours for the recommendations to be updated.
- Metrics are sampled every 30 seconds, aggregated to 1 min and then further aggregated to 30 mins (we take the max of average values while aggregating to 30 mins). On virtual machine scale sets, the metrics from individual virtual machines are aggregated using the average of the metrics across instances. - A shutdown recommendation is created if: - P95th of the maximum value of CPU utilization summed across all cores is less than 3%.
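The sampling, aggregation, and threshold logic described above can be sketched as follows. This is an illustrative approximation, not Advisor's actual implementation: the function names are made up, and it assumes a flat list of per-30-second CPU samples over the lookback window and a nearest-rank P95.

```python
# Hypothetical sketch of the Advisor shutdown check described above.
# `samples` is a list of (timestamp_seconds, cpu_percent) pairs taken
# every 30 seconds; all names and details are illustrative.
import math

def aggregate(samples, bucket_seconds):
    """Average raw samples into fixed-width buckets keyed by bucket start."""
    buckets = {}
    for ts, value in samples:
        buckets.setdefault(ts - ts % bucket_seconds, []).append(value)
    return {start: sum(v) / len(v) for start, v in buckets.items()}

def p95(values):
    """95th percentile via nearest rank on the sorted values."""
    ordered = sorted(values)
    return ordered[math.ceil(0.95 * len(ordered)) - 1]

def recommend_shutdown(samples):
    # 30 s samples -> 1 min averages -> max of the averages per 30 min.
    minute_avgs = aggregate(samples, 60)
    half_hour_max = {}
    for start, avg in minute_avgs.items():
        key = start - start % 1800
        half_hour_max[key] = max(half_hour_max.get(key, 0.0), avg)
    # Shutdown candidate when P95 of the 30-min maxima stays under 3%.
    return p95(list(half_hour_max.values())) < 3.0
```

An idle VM whose CPU never exceeds a few percent over the whole window would satisfy the check, while a machine with regular bursts above 3% would not.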
Advisor identifies resources that haven't been used at all over the last 7 days
Advisor recommends resizing virtual machines when it's possible to fit the current load on a more appropriate SKU, which is less expensive (based on retail rates). On virtual machine scale sets, Advisor recommends resizing when it's possible to fit the current load on a more appropriate cheaper SKU, or a lower number of instances of the same SKU. - Recommendation criteria include **CPU**, **Memory** and **Outbound Network utilization**. -- The last 7 days of utilization data are analyzed. Note that you can change your lookback period in the configurations.
+- The last 7 days of utilization data are analyzed. You can change the lookback period in the configuration; the available lookback periods are 7, 14, 21, 30, 60, and 90 days. After you change the lookback period, it can take up to 48 hours for the recommendations to be updated.
- Metrics are sampled every 30 seconds, aggregated to 1 minute and then further aggregated to 30 minutes (taking the max of average values while aggregating to 30 minutes). On virtual machine scale sets, the metrics from individual virtual machines are aggregated using the average of the metrics for instance count recommendations, and aggregated using the max of the metrics for SKU change recommendations. - An appropriate SKU (for virtual machines) or instance count (for virtual machine scale set resources) is determined based on the following criteria: - Performance of the workloads on the new SKU shouldn't be impacted.
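The cross-instance aggregation for scale sets described above (average across instances for instance-count recommendations, max across instances for SKU-change recommendations) can be sketched as follows; the function name, the `mode` strings, and the data shape are assumptions for illustration only.

```python
# Hypothetical sketch of the scale-set aggregation described above.
# per_instance maps an instance name to its {timestamp: metric_value}
# series; timestamps are assumed to be aligned across instances.
def combine_instances(per_instance, mode):
    """Combine per-instance series into one series per timestamp."""
    combined = {}
    for series in per_instance.values():
        for ts, value in series.items():
            combined.setdefault(ts, []).append(value)
    # max across instances for SKU-change, average for instance-count.
    reducer = max if mode == "sku_change" else (lambda v: sum(v) / len(v))
    return {ts: reducer(values) for ts, values in combined.items()}
```

For example, two instances at 10% and 30% CPU at the same timestamp yield 20% under `instance_count` mode and 30% under `sku_change` mode.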
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
Azure Advisor helps you ensure and improve the continuity of your business-criti
1. On the **Advisor** dashboard, select the **Reliability** tab.
-## FarmBeats / Azure Data Manager for Agriculture (ADMA)
+## AI Services
-### Upgrade to the latest FarmBeats API version
-
-We have identified calls to a FarmBeats API version that is scheduled for deprecation. We recommend switching to the latest FarmBeats API version to ensure uninterrupted access to FarmBeats, latest features, and performance improvements.
-
-Learn more about [Azure FarmBeats - FarmBeatsApiVersion (Upgrade to the latest FarmBeats API version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
-
-### Upgrade to the latest ADMA Java SDK version
-
-We have identified calls to an Azure Data Manager for Agriculture (ADMA) Java SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, latest features, and performance improvements.
-
-Learn more about [Azure FarmBeats - FarmBeatsJavaSdkVersion (Upgrade to the latest ADMA Java SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
-
-### Upgrade to the latest ADMA DotNet SDK version
-
-We have identified calls to an ADMA DotNet SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, latest features, and performance improvements.
-
-Learn more about [Azure FarmBeats - FarmBeatsDotNetSdkVersion (Upgrade to the latest ADMA DotNet SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
-
-### Upgrade to the latest ADMA JavaScript SDK version
+### You are close to exceeding storage quota of 2GB. Create a Standard search service
-We have identified calls to an ADMA JavaScript SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, latest features, and performance improvements.
+You're close to exceeding the storage quota of 2 GB. Create a Standard search service. Indexing operations stop working when the storage quota is exceeded.
-Learn more about [Azure FarmBeats - FarmBeatsJavaScriptSdkVersion (Upgrade to the latest ADMA JavaScript SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
+Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity).
-### Upgrade to the latest ADMA Python SDK version
+### You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service
-We have identified calls to an ADMA Python SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, latest features, and performance improvements.
+You're close to exceeding the storage quota of 50 MB. Create a Basic or Standard search service. Indexing operations stop working when the storage quota is exceeded.
-Learn more about [Azure FarmBeats - FarmBeatsPythonSdkVersion (Upgrade to the latest ADMA Python SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
+Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity).
-## API Management
+### You are close to exceeding your available storage quota. Add more partitions if you need more storage
-### SSL/TLS renegotiation blocked
+You're close to exceeding your available storage quota. Add extra partitions if you need more storage. After you exceed the storage quota, you can still query, but indexing operations no longer work.
-SSL/TLS renegotiation attempt blocked. Renegotiation happens when a client certificate is requested over an already established connection. When it's blocked, reading 'context.Request.Certificate' in policy expressions returns 'null.' To support client certificate authentication scenarios, enable 'Negotiate client certificate' on listed hostnames. For browser-based clients, enabling this option might result in a certificate prompt being presented to the client.
+Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity)
-Learn more about [Api Management - TlsRenegotiationBlocked (SSL/TLS renegotiation blocked)](/azure/api-management/api-management-howto-mutual-certificates-for-clients).
+### Quota Exceeded for this resource
-### Hostname certificate rotation failed
+We have detected that the quota for your resource has been exceeded. You can wait for the quota to be replenished automatically, or, to get unblocked and use the resource again now, you can upgrade it to a paid SKU.
-API Management service failed to refresh hostname certificate from Key Vault. Ensure that certificate exists in Key Vault and API Management service identity is granted secret read access. Otherwise, API Management service can't retrieve certificate updates from Key Vault, which may lead to the service using stale certificate and runtime API traffic being blocked as a result.
+Learn more about [Cognitive Service - CognitiveServiceQuotaExceeded (Quota Exceeded for this resource)](/azure/cognitive-services/plan-manage-costs#pay-as-you-go).
-Learn more about [Api Management - HostnameCertRotationFail (Hostname certificate rotation failed)](https://aka.ms/apimdocs/customdomain).
+### Upgrade your application to use the latest API version from Azure OpenAI
-## App
+We have detected that you have an Azure OpenAI resource that is being used with an older API version. Use the latest REST API version to take advantage of the latest features and functionality.
-### Increase the minimal replica count for your container app
+Learn more about [Cognitive Service - CogSvcApiVersionOpenAI (Upgrade your application to use the latest API version from Azure OpenAI)](/azure/cognitive-services/openai/reference).
-We detected the minimal replica count set for your container app may be lower than optimal. Consider increasing the minimal replica count for better availability.
+### Upgrade your application to use the latest API version from Azure OpenAI
-Learn more about [Microsoft App Container App - ContainerAppMinimalReplicaCountTooLow (Increase the minimal replica count for your container app)](https://aka.ms/containerappscalingrules).
+We have detected that you have an Azure OpenAI resource that is being used with an older API version. Use the latest REST API version to take advantage of the latest features and functionality.
-### Renew custom domain certificate
+Learn more about [Cognitive Service - API version: OpenAI (Upgrade your application to use the latest API version from Azure OpenAI)](/azure/cognitive-services/openai/reference).
-We detected the custom domain certificate you uploaded is near expiration. Renew your certificate and upload the new certificate for your container apps.
-Learn more about [Microsoft App Container App - ContainerAppCustomDomainCertificateNearExpiration (Renew custom domain certificate)](https://aka.ms/containerappcustomdomaincert).
-### A potential networking issue has been identified with your Container Apps Environment that requires it to be re-created to avoid DNS issues
+## Analytics
-A potential networking issue has been identified for your Container Apps Environments. To prevent this potential networking issue from impacting your Container Apps Environment, create a new Container Apps Environment, re-create your Container Apps in the new environment, and delete the old Container Apps Environment
+### Your cluster running Ubuntu 16.04 is out of support
-Learn more about [Managed Environment - CreateNewContainerAppsEnvironment (A potential networking issue has been identified with your Container Apps Environment that requires it to be re-created to avoid DNS issues)](https://aka.ms/createcontainerapp).
+We detected that your HDInsight cluster still uses Ubuntu 16.04 LTS. End of support for Azure HDInsight clusters on Ubuntu 16.04 LTS began on November 30, 2022. Existing clusters run as is without support from Microsoft. Consider rebuilding your cluster with the latest images.
-### Domain verification required to renew your App Service Certificate
+Learn more about [HDInsight cluster - ubuntu1604HdiClusters (Your cluster running Ubuntu 16.04 is out of support)](/azure/hdinsight/hdinsight-component-versioning#supported-hdinsight-versions).
-You have an App Service Certificate that's currently in a Pending Issuance status and requires domain verification. Failure to validate domain ownership results in an unsuccessful certificate issuance. Domain verification isn't automated for App Service Certificates and requires your action.
+### Upgrade your HDInsight cluster
-Learn more about [App Service Certificate - ASCDomainVerificationRequired (Domain verification required to renew your App Service Certificate)](https://aka.ms/ASCDomainVerificationRequired).
+We detected your cluster isn't using the latest image. We recommend that you use the latest versions of HDInsight images, as they bring in the best of open-source updates, Azure updates, and security fixes. HDInsight releases happen every 30 to 60 days. Consider moving to the latest release.
-## Cache
+Learn more about [HDInsight cluster - upgradeHDInsightCluster (Upgrade your HDInsight Cluster)](/azure/hdinsight/hdinsight-release-notes).
-### Availability may be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact
+### Your cluster was created one year ago
-Fragmentation and memory pressure can cause availability incidents during a failover or management operations. Increasing reservation of memory for fragmentation helps in reducing the cache failures when running under high memory pressure. Memory for fragmentation can be increased via maxfragmentationmemory-reserved setting available in advanced settings blade.
+We detected your cluster was created one year ago. As a best practice, we recommend that you use the latest HDInsight images, as they bring in the best of open-source updates, Azure updates, and security fixes. The recommended maximum duration for cluster upgrades is less than six months.
-Learn more about [Redis Cache Server - RedisCacheMemoryFragmentation (Availability may be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.)](https://aka.ms/redis/recommendations/memory-policies).
+Learn more about [HDInsight cluster - clusterOlderThanAYear (Your cluster was created one year ago)](/azure/hdinsight/hdinsight-overview-before-you-start#keep-your-clusters-up-to-date).
-## CDN
+### Your Kafka cluster disks are almost full
-### Switch Secret version to 'Latest' for the Azure Front Door customer certificate
+The data disks used by Kafka brokers in your HDInsight cluster are almost full. When that happens, the Apache Kafka broker process can't start and fails because of the disk full error. To mitigate the issue, find the retention time for every topic, back up older files, and restart the brokers.
-We recommend configuring the Azure Front Door (AFD) customer certificate secret to 'Latest' for the AFD to refer to the latest secret version in Azure Key Vault, so that the secret can be automatically rotated.
+Learn more about [HDInsight cluster - KafkaDiskSpaceFull (Your Kafka Cluster Disks are almost full)](https://aka.ms/kafka-troubleshoot-full-disk).
-Learn more about [Front Door Profile - SwitchVersionBYOC (Switch Secret version to 'Latest' for the Azure Front Door customer certificate)](/azure/frontdoor/standard-premium/how-to-configure-https-custom-domain#certificate-renewal-and-changing-certificate-types).
+### Creation of clusters under custom VNet requires more permission
-### Validate domain ownership by adding DNS TXT record to DNS provider.
+Your clusters with custom VNet were created without VNet joining permission. Ensure that the users who perform create operations have permission to perform the Microsoft.Network/virtualNetworks/subnets/join action before September 30, 2023.
-Validate domain ownership by adding DNS TXT record to DNS provider.
+Learn more about [HDInsight cluster - EnforceVNetJoinPermissionCheck (Creation of clusters under custom VNet requires more permission)](https://aka.ms/hdinsightEnforceVnet).
-Learn more about [Front Door Profile - ValidateDomainOwnership (Validate domain ownership by adding DNS TXT record to DNS provider.)](/azure/frontdoor/standard-premium/how-to-add-custom-domain#domain-validation-state).
+### Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster
-### Revalidate domain ownership for the Azure Front Door managed certificate renewal
+Starting July 1, 2020, you can't create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters run as is without support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30, 2020, to avoid potential system/support interruption.
-Azure Front Door can't automatically renew the managed certificate because the domain isn't CNAME mapped to AFD endpoint. Revalidate domain ownership for the managed certificate to be automatically renewed.
+Learn more about [HDInsight cluster - KafkaVersionRetirement (Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster)](https://aka.ms/hdiretirekafka).
-Learn more about [Front Door Profile - RevalidateDomainOwnership (Revalidate domain ownership for the Azure Front Door managed certificate renewal)](/azure/frontdoor/standard-premium/how-to-add-custom-domain#domain-validation-state).
+### Deprecation of Older Spark Versions in HDInsight Spark cluster
-### Renew the expired Azure Front Door customer certificate to avoid service disruption
+Starting July 1, 2020, you can't create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6, and Spark 2.3 on HDInsight 4.0. Existing clusters run as is without support from Microsoft.
-Some of the customer certificates for Azure Front Door Standard and Premium profiles expired. Renew the certificate in time to avoid service disruption.
+Learn more about [HDInsight cluster - SparkVersionRetirement (Deprecation of Older Spark Versions in HDInsight Spark cluster)](https://aka.ms/hdiretirespark).
-Learn more about [Front Door Profile - RenewExpiredBYOC (Renew the expired Azure Front Door customer certificate to avoid service disruption.)](/azure/frontdoor/standard-premium/how-to-configure-https-custom-domain#use-your-own-certificate).
+### Enable critical updates to be applied to your HDInsight clusters
-### Cloud Services (classic) is retiring. Migrate off before 31 August 2024
+The HDInsight service is applying an important certificate-related update to your cluster. However, one or more policies in your subscription are preventing the HDInsight service from creating or modifying network resources associated with your clusters and applying this update. Take action to allow the HDInsight service to create or modify network resources, such as load balancers, network interfaces, and public IP addresses, associated with your clusters before January 13, 2021 05:00 PM UTC. The HDInsight team is performing updates between January 13, 2021 05:00 PM UTC and January 16, 2021 05:00 PM UTC. Failure to apply this update might result in your clusters becoming unhealthy and unusable.
-Cloud Services (classic) is retiring. Migrate off before 31 August 2024 to avoid any loss of data or business continuity.
+Learn more about [HDInsight cluster - GCSCertRotation (Enable critical updates to be applied to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
-Learn more about [Resource - Cloud Services Retirement (Cloud Services (classic) is retiring. Migrate off before 31 August 2024)](https://aka.ms/ExternalRetirementEmailMay2022).
+### Drop and recreate your HDInsight clusters to apply critical updates
-## Cognitive Services
+The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, due to some custom configuration changes, we're unable to apply the certificate updates on some of your clusters.
-### Quota Exceeded for this resource
+Learn more about [HDInsight cluster - GCSCertRotationRound2 (Drop and recreate your HDInsight clusters to apply critical updates)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
-We have detected that the quota for your resource has been exceeded. You can wait for it to automatically get replenished soon, or to get unblocked and use the resource again now, you can upgrade it to a paid SKU.
+### Drop and recreate your HDInsight clusters to apply critical updates
-Learn more about [Cognitive Service - CognitiveServiceQuotaExceeded (Quota Exceeded for this resource)](/azure/cognitive-services/plan-manage-costs#pay-as-you-go).
+The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, due to some custom configuration changes, we're unable to apply the certificate updates on some of your clusters. Drop and recreate your cluster before January 25, 2021 to prevent the cluster from becoming unhealthy and unusable.
-### Upgrade your application to use the latest API version from Azure OpenAI
+Learn more about [HDInsight cluster - GCSCertRotationR3DropRecreate (Drop and recreate your HDInsight clusters to apply critical updates)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
-We have detected that you have an Azure OpenAI resource that is being used with an older API version. Use the latest REST API version to take advantage of the latest features and functionality.
+### Apply critical updates to your HDInsight clusters
-Learn more about [Cognitive Service - CogSvcApiVersionOpenAI (Upgrade your application to use the latest API version from Azure OpenAI)](/azure/cognitive-services/openai/reference).
+The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing the HDInsight service from creating or modifying network resources associated with your clusters and applying this update. Remove or update your policy assignment to allow the HDInsight service to create or modify network resources, such as load balancers, network interfaces, and public IP addresses, associated with your clusters. Do this before January 21, 2021 05:00 PM UTC; the HDInsight team is performing updates between January 21, 2021 05:00 PM UTC and January 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources in the same resource group and subnet where your cluster is. Failure to apply this update might result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before January 25, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service sends another notification if we failed to apply the update to your clusters.
-### Upgrade your application to use the latest API version from Azure OpenAI
+Learn more about [HDInsight cluster - GCSCertRotationR3PlanPatch (Apply critical updates to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
-We have detected that you have an Azure OpenAI resource that is being used with an older API version. Use the latest REST API version to take advantage of the latest features and functionality.
+### Action required: Migrate your A8–A11 HDInsight cluster before 1 March 2021
-Learn more about [Cognitive Service - API version: OpenAI (Upgrade your application to use the latest API version from Azure OpenAI)](/azure/cognitive-services/openai/reference).
+You're receiving this notice because you have one or more active A8, A9, A10, or A11 HDInsight clusters. The A8-A11 virtual machines (VMs) are retired in all regions on 1 March 2021. After that date, all clusters using A8-A11 are deallocated. Migrate your affected clusters to another HDInsight supported VM (https://azure.microsoft.com/pricing/details/hdinsight/) before that date. For more information, see the 'Learn More' link or contact us at askhdinsight@microsoft.com.
-### Upgrade your application to use the latest API version from Azure OpenAI
+Learn more about [HDInsight cluster - VM Deprecation (Action required: Migrate your A8–A11 HDInsight cluster before 1 March 2021)](https://azure.microsoft.com/updates/a8-a11-azure-virtual-machine-sizes-will-be-retired-on-march-1-2021/).
-We have detected that you have an Azure OpenAI resource that is being used with an older API version. Use the latest REST API version to take advantage of the latest features and functionality.
-Learn more about [Cognitive Service - API version: OpenAI (Upgrade your application to use the latest API version from Azure OpenAI)](/azure/cognitive-services/openai/reference).
## Compute
-### Enable Backups on your Virtual Machines
+### Cloud Services (classic) is retiring. Migrate off before 31 August 2024
-Enable backups for your virtual machines and secure your data
+Cloud Services (classic) is retiring. Migrate off before 31 August 2024 to avoid any loss of data or business continuity.
-Learn more about [Virtual machine (classic) - EnableBackup (Enable Backups on your Virtual Machines)](../backup/backup-overview.md).
+Learn more about [Resource - Cloud Services Retirement (Cloud Services (classic) is retiring. Migrate off before 31 August 2024)](https://aka.ms/ExternalRetirementEmailMay2022).
### Upgrade the standard disks attached to your premium-capable VM to premium disks
-We have identified that you're using standard disks with your premium-capable Virtual Machines and we recommend you consider upgrading the standard disks to premium disks. For any Single Instance Virtual Machine using premium storage for all Operating System Disks and Data Disks, we guarantee Virtual Machine Connectivity of at least 99.9%. Consider these factors when making your upgrade decision. The first is that upgrading requires a VM reboot and this process takes 3-5 minutes to complete. The second is if the VMs in the list are mission-critical production VMs, evaluate the improved availability against the cost of premium disks.
+We have identified that you're using standard disks with your premium-capable virtual machines and we recommend you consider upgrading the standard disks to premium disks. For any single instance virtual machine using premium storage for all operating system disks and data disks, we guarantee virtual machine connectivity of at least 99.9%. Consider these factors when making your upgrade decision. The first is that upgrading requires a VM reboot and this process takes 3-5 minutes to complete. The second is if the VMs in the list are mission-critical production VMs, evaluate the improved availability against the cost of premium disks.
Learn more about [Virtual machine - MigrateStandardStorageAccountToPremium (Upgrade the standard disks attached to your premium-capable VM to premium disks)](https://aka.ms/aa_storagestandardtopremium_learnmore).
Learn more about [Virtual machine - ASRUpdateOutboundConnectivityProtocolToServi
Your Virtual Machine Scale Sets start receiving package content from RHUI4 servers on October 12, 2023. If you're allowing RHUI 3 IPs [https://aka.ms/rhui-server-list] via firewall and proxy, allow the new RHUI 4 IPs [https://aka.ms/rhui-server-list] to continue receiving RHEL package updates.
-Learn more about [Virtual machine scale set - Rhui3ToRhui4MigrationVMSS (Update your firewall configurations to allow new RHUI 4 IPs)](https://aka.ms/rhui-server-list).
+Learn more about [Virtual machine - Rhui3ToRhui4MigrationV2 (Update your firewall configurations to allow new RHUI 4 IPs)](https://aka.ms/rhui-server-list).
-### Update your firewall configurations to allow new RHUI 4 IPs
+### Virtual machines in your subscription are running on images that have been scheduled for deprecation
-Your Virtual Machine Scale Sets start receiving package content from RHUI4 servers on October 12, 2023. If you're allowing RHUI 3 IPs [https://aka.ms/rhui-server-list] via firewall and proxy, allow the new RHUI 4 IPs [https://aka.ms/rhui-server-list] to continue receiving RHEL package updates.
+Virtual machines in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, new VMs can't be created from the deprecated image. Upgrade to a newer version of the image to prevent disruption to your workloads.
-Learn more about [Virtual machine - Rhui3ToRhui4MigrationV2 (Update your firewall configurations to allow new RHUI 4 IPs)](https://aka.ms/rhui-server-list).
+Learn more about [Virtual machine - VMRunningDeprecatedOfferLevelImage (Virtual machines in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ).
+
+### Virtual machines in your subscription are running on images that have been scheduled for deprecation
+
+Virtual machines in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, new VMs can't be created from the deprecated image. Upgrade to a newer SKU of the image to prevent disruption to your workloads.
+
+Learn more about [Virtual machine - VMRunningDeprecatedPlanLevelImage (Virtual machines in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ).
+
+### Virtual machines in your subscription are running on images that have been scheduled for deprecation
+
+Virtual machines in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, new VMs can't be created from the deprecated image. Upgrade to a newer version of the image to prevent disruption to your workloads.
-### Virtual Machines in your subscription are running on images that have been scheduled for deprecation
+Learn more about [Virtual machine - VMRunningDeprecatedImage (Virtual machines in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ).
-Virtual Machines in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, new VMs can't be created from the deprecated image. Upgrade to newer Offer of the image to prevent disruption to your workloads.
+### Use Availability zones for better resiliency and availability
+
+Availability Zones (AZ) in Azure help protect your applications and data from datacenter failures. Each AZ is made up of one or more datacenters equipped with independent power, cooling, and networking. By designing solutions to use zonal VMs, you can isolate your VMs from failure in any other zone.
-Learn more about [Virtual machine - VMRunningDeprecatedOfferLevelImage (Virtual Machines in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ).
+Learn more about [Virtual machine - AvailabilityZoneVM (Use Availability zones for better resiliency and availability)](/azure/reliability/availability-zones-overview).
-### Virtual Machines in your subscription are running on images that have been scheduled for deprecation
+### Access to mandatory URLs missing for your Azure Virtual Desktop environment
-Virtual Machines in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, new VMs can't be created from the deprecated image. Upgrade to newer SKU of the image to prevent disruption to your workloads.
+For a session host to deploy and register to Azure Virtual Desktop properly, you need to add a set of URLs to the allowed list if your virtual machine runs in a restricted environment. After visiting the "Learn More" link, you see the minimum list of URLs you need to unblock for a successful deployment and a functional session host. To find specific URLs missing from the allowed list, you can also search your application event log for event 3702.
-Learn more about [Virtual machine - VMRunningDeprecatedPlanLevelImage (Virtual Machines in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ).
+Learn more about [Virtual machine - SessionHostNeedsAssistanceForUrlCheck (Access to mandatory URLs missing for your Azure Virtual Desktop environment)](../virtual-desktop/safe-url-list.md).
-### Virtual Machines in your subscription are running on images that have been scheduled for deprecation
+### Update your firewall configurations to allow new RHUI 4 IPs
-Virtual Machines in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, new VMs can't be created from the deprecated image. Upgrade to newer version of the image to prevent disruption to your workloads.
+Your Virtual Machine Scale Sets start receiving package content from RHUI 4 servers on October 12, 2023. If you're allowing the RHUI 3 IPs via firewall and proxy, also allow the new [RHUI 4 IPs](https://aka.ms/rhui-server-list) to continue receiving RHEL package updates.
-Learn more about [Virtual machine - VMRunningDeprecatedImage (Virtual Machines in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ).
+Learn more about [Virtual machine scale set - Rhui3ToRhui4MigrationVMSS (Update your firewall configurations to allow new RHUI 4 IPs)](https://aka.ms/rhui-server-list).
### Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation
-Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, your Virtual Machine Scale Sets workloads would no longer scale out. Upgrade to newer Offer of the image to prevent disruption to your workload.
+Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, your Virtual Machine Scale Sets workloads will no longer scale out. Upgrade to a newer version of the image to prevent disruption to your workload.
Learn more about [Virtual machine scale set - VMScaleSetRunningDeprecatedOfferImage (Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ).
Learn more about [Virtual machine scale set - VMScaleSetRunningDeprecatedImage (
### Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation
-Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, your Virtual Machine Scale Sets workloads would no longer scale out. Upgrade to newer Plan of the image to prevent disruption to your workload.
+Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, your Virtual Machine Scale Sets workloads will no longer scale out. Upgrade to a newer plan of the image to prevent disruption to your workload.
Learn more about [Virtual machine scale set - VMScaleSetRunningDeprecatedPlanImage (Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ).
-### Use Availability zones for better resiliency and availability
-
-Availability Zones (AZ) in Azure help protect your applications and data from datacenter failures. Each AZ is made up of one or more datacenters equipped with independent power, cooling, and networking. By designing solutions to use zonal VMs, you can isolate your VMs from failure in any other zone.
-
-Learn more about [Virtual machine - AvailabilityZoneVM (Use Availability zones for better resiliency and availability)](/azure/reliability/availability-zones-overview).
-
-### Use Managed Disks to improve data reliability
-
-Virtual machines in an Availability Set with disks that share either storage accounts or storage scale units aren't resilient to single storage scale unit failures during outages. Migrate to Azure Managed Disks to ensure that the disks of different VMs in the Availability Set are sufficiently isolated to avoid a single point of failure.
-Learn more about [Availability set - ManagedDisksAvSet (Use Managed Disks to improve data reliability)](https://aka.ms/aa_avset_manageddisk_learnmore).
-### Check Point Virtual Machine may lose Network Connectivity
+## Containers
-We have identified that your Virtual Machine may be running a version of Check Point image that might lose network connectivity during a platform servicing operation. We recommend that you upgrade to a newer version of the image. Contact Check Point for further instructions on how to upgrade your image.
+### Increase the minimal replica count for your container app
-Learn more about [Virtual machine - CheckPointPlatformServicingKnownIssueA (Check Point Virtual Machine may lose Network Connectivity.)](https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk151752&partition=Advanced&product=CloudGuard).
+We detected that the minimal replica count set for your container app might be lower than optimal. Consider increasing the minimal replica count for better availability.
-### Use Managed Disks to improve data reliability
+Learn more about [Microsoft App Container App - ContainerAppMinimalReplicaCountTooLow (Increase the minimal replica count for your container app)](https://aka.ms/containerappscalingrules).
-Virtual machines in an Availability Set with disks that share either storage accounts or storage scale units aren't resilient to single storage scale unit failures during outages. Migrate to Azure Managed Disks to ensure that the disks of different VMs in the Availability Set are sufficiently isolated to avoid a single point of failure.
+### Renew custom domain certificate
-Learn more about [Availability set - ManagedDisksAvSet (Use Managed Disks to improve data reliability)](https://aka.ms/aa_avset_manageddisk_learnmore).
+We detected that the custom domain certificate you uploaded is near expiration. Renew your certificate and upload the new certificate for your container apps.
-### Access to mandatory URLs missing for your Azure Virtual Desktop environment
+Learn more about [Microsoft App Container App - ContainerAppCustomDomainCertificateNearExpiration (Renew custom domain certificate)](https://aka.ms/containerappcustomdomaincert).
-In order for a session host to deploy and register to Azure Virtual Desktop properly, you need to add a set of URLs to the allowed list, in case your virtual machine runs in a restricted environment. After visiting the "Learn More" link, you see the minimum list of URLs you need to unblock to have a successful deployment and functional session host. For specific URL(s) missing from allowed list, you may also search Application event log for event 3702.
+### A potential networking issue has been identified with your Container Apps environment that requires it to be re-created to avoid DNS issues
-Learn more about [Virtual machine - SessionHostNeedsAssistanceForUrlCheck (Access to mandatory URLs missing for your Azure Virtual Desktop environment)](../virtual-desktop/safe-url-list.md).
+A potential networking issue has been identified for your Container Apps environments. To prevent this issue, create a new Container Apps environment, re-create your Container Apps in the new environment, and delete the old Container Apps environment.
-### Clusters having node pools using non-recommended B-Series
+Learn more about [Managed Environment - CreateNewContainerAppsEnvironment (A potential networking issue has been identified with your Container Apps Environment that requires it to be re-created to avoid DNS issues)](https://aka.ms/createcontainerapp).
-Cluster has one or more node pools using a non-recommended burstable VM SKU. With burstable VMs, full vCPU capability 100% is unguaranteed. Make sure B-series VMs are not used in a Production environment.
+### Domain verification required to renew your App Service Certificate
-Learn more about [Kubernetes service - ClustersUsingBSeriesVMs (Clusters having node pools using non-recommended B-Series)](/azure/virtual-machines/sizes-b-series-burstable).
+You have an App Service certificate that's currently in a Pending Issuance status and requires domain verification. Failure to validate domain ownership results in an unsuccessful certificate issuance. Domain verification isn't automated for App Service certificates and requires your action.
-## MySQL
+Learn more about [App Service Certificate - ASCDomainVerificationRequired (Domain verification required to renew your App Service Certificate)](https://aka.ms/ASCDomainVerificationRequired).
-### Replication - Add a primary key to the table that currently does not have one
+### Clusters having node pools using unrecommended B-Series
-Based on our internal monitoring, we have observed significant replication lag on your replica server. This lag is occurring because the replica server is replaying relay logs on a table that lacks a primary key. To ensure that the replica can synchronize with the primary and keep up with changes, add primary keys to the tables in the primary server and then recreate the replica server.
+The cluster has one or more node pools using an unrecommended burstable VM SKU. With burstable VMs, full (100%) vCPU capability isn't guaranteed. Make sure B-series VMs aren't used in a production environment.
-Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerReplicaMissingPKfb41 (Replication - Add a primary key to the table that currently does not have one)](/azure/mysql/how-to-troubleshoot-replication-latency#no-primary-key-or-unique-key-on-a-table).
+Learn more about [Kubernetes service - ClustersUsingBSeriesVMs (Clusters having node pools using unrecommended B-Series)](/azure/virtual-machines/sizes-b-series-burstable).
-### High Availability - Add primary key to the table that currently does not have one
+### Upgrade to Standard tier for mission-critical and production clusters
-Our internal monitoring system has identified significant replication lag on the High Availability standby server. The standby server replaying relay logs on a table that lacks a primary key, is the main cause of the lag. To address this issue and adhere to best practices, we recommend you add primary keys to all tables. Once you add the primary keys, proceed to disable and then re-enable High Availability to mitigate the problem.
+This cluster has more than 10 nodes and hasn't enabled the Standard tier. The Kubernetes Control Plane on the Free tier comes with limited resources and isn't intended for production use or any cluster with 10 or more nodes.
-Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerHAMissingPKcf38 (High Availability - Add primary key to the table that currently does not have one.)](/azure/mysql/how-to-troubleshoot-replication-latency#no-primary-key-or-unique-key-on-a-table).
+Learn more about [Kubernetes service - UseStandardpricingtier (Upgrade to Standard tier for mission-critical and production clusters)](/azure/aks/uptime-sla).
-## PostgreSQL
+### Pod Disruption Budgets Recommended
-### Improve PostgreSQL availability by removing inactive logical replication slots
+Pod Disruption Budgets are recommended to improve service high availability.
-Our internal telemetry indicates that your PostgreSQL server may have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. Inactive logical replication can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY take action. Either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server.
+Learn more about [Kubernetes service - PodDisruptionBudgetsRecommended (Pod Disruption Budgets Recommended)](../aks/operator-best-practices-scheduler.md#plan-for-availability-using-pod-disruption-budgets).
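A Pod Disruption Budget caps how many replicas voluntary disruptions (node drains, cluster upgrades) may take down at once. A minimal manifest, with illustrative names and thresholds, might look like:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb        # hypothetical name
spec:
  minAvailable: 2        # keep at least 2 pods running during voluntary disruptions
  selector:
    matchLabels:
      app: myapp         # must match your workload's pod labels
```

Apply it with `kubectl apply -f` in the same namespace as the workload it protects.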
-Learn more about [PostgreSQL server - OrcasPostgreSqlLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_logical_decoding).
+### Upgrade to the latest agent version of Azure Arc-enabled Kubernetes
-### Improve PostgreSQL availability by removing inactive logical replication slots
+Upgrade to the latest agent version for the best Azure Arc-enabled Kubernetes experience, improved stability, and new functionality.
-Our internal telemetry indicates that your PostgreSQL flexible server may have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. Inactive logical replication slots can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY take action. Either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server.
+Learn more about [Kubernetes - Azure Arc - Arc-enabled K8s agent version upgrade (Upgrade to the latest agent version of Azure Arc-enabled Kubernetes)](https://aka.ms/ArcK8sAgentUpgradeDocs).
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlFlexibleServerLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_flexible_server_logical_decoding).
-## IoT Hub
-### Upgrade device client SDK to a supported version for IotHub
+## Databases
-Some or all of your devices are using outdated SDK and we recommend you upgrade to a supported version of SDK. See the details in the recommendation.
+### Replication - Add a primary key to the table that currently does not have one
-Learn more about [IoT hub - UpgradeDeviceClientSdk (Upgrade device client SDK to a supported version for IotHub)](https://aka.ms/iothubsdk).
+Based on our internal monitoring, we have observed significant replication lag on your replica server. This lag is occurring because the replica server is replaying relay logs on a table that lacks a primary key. To ensure that the replica can synchronize with the primary and keep up with changes, add primary keys to the tables in the primary server. Once the primary keys are added, recreate the replica server.
-### IoT Hub Potential Device Storm Detected
+Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerReplicaMissingPKfb41 (Replication - Add a primary key to the table that currently doesn't have one)](/azure/mysql/how-to-troubleshoot-replication-latency#no-primary-key-or-unique-key-on-a-table).
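You can reproduce this check yourself. The `information_schema` query below is a sketch of how tables without a primary key can be listed on a MySQL source server; the runnable demonstration uses an in-memory SQLite database as a stand-in so the same logic can be exercised anywhere:

```python
import sqlite3

# Sketch of the equivalent check on a MySQL source server (not executed here):
MYSQL_QUERY = """
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL
  AND t.table_schema NOT IN
      ('mysql', 'information_schema', 'performance_schema', 'sys');
"""

def tables_without_pk(conn):
    # In SQLite, a table has a primary key when some column's pk flag
    # (index 5 in PRAGMA table_info) is non-zero.
    names = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    return [n for n in names
            if not any(col[5] for col in conn.execute(f"PRAGMA table_info({n})"))]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("CREATE TABLE audit_log (msg TEXT)")  # no primary key

print(tables_without_pk(conn))  # ['audit_log']
```

Any table the check reports needs a primary key added on the source server before the replica is recreated.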
-A device storm is when two or more devices are trying to connect to the IoT Hub using the same device ID credentials. When the second device (B) connects, it causes the first one (A) to become disconnected. Then (A) attempts to reconnect again, which causes (B) to get disconnected.
+### High Availability - Add primary key to the table that currently does not have one
-Learn more about [IoT hub - IoTHubDeviceStorm (IoT Hub Potential Device Storm Detected)](https://aka.ms/IotHubDeviceStorm).
+Our internal monitoring system has identified significant replication lag on the High Availability standby server. The standby server replaying relay logs on a table that lacks a primary key is the main cause of the lag. To address this issue and adhere to best practices, we recommend you add primary keys to all tables. Once you add the primary keys, disable and then re-enable High Availability to mitigate the problem.
-### Upgrade Device Update for IoT Hub SDK to a supported version
+Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerHAMissingPKcf38 (High Availability - Add primary key to the table that currently doesn't have one.)](/azure/mysql/how-to-troubleshoot-replication-latency#no-primary-key-or-unique-key-on-a-table).
-Your Device Update for IoT Hub Instance is using an outdated version of the SDK. We recommend you upgrade to the latest version for the latest fixes, performance improvements, and new feature capabilities.
+### Availability might be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact
-Learn more about [IoT hub - DU_SDK_Advisor_Recommendation (Upgrade Device Update for IoT Hub SDK to a supported version)](/azure/iot-hub-device-update/understand-device-update).
+Fragmentation and memory pressure can cause availability incidents during a failover or management operations. Increasing the memory reserved for fragmentation helps reduce cache failures when running under high memory pressure. Memory for fragmentation can be increased via the maxfragmentationmemory-reserved setting, available in the advanced settings blade.
-### IoT Hub Quota Exceeded Detected
+Learn more about [Redis Cache Server - RedisCacheMemoryFragmentation (Availability might be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.)](https://aka.ms/redis/recommendations/memory-policies).
-We have detected that your IoT Hub has exceeded its daily message quota. To prevent this in the future, add units or increase the SKU level.
+### Enable Azure backup for SQL on your virtual machines
-Learn more about [IoT hub - IoTHubQuotaExceededAdvisor (IoT Hub Quota Exceeded Detected)](/azure/iot-hub/troubleshoot-error-codes#403002-iothubquotaexceeded).
+Enable backups for SQL databases on your virtual machines using Azure Backup and realize the benefits of zero-infrastructure backup, point-in-time restore, and central management with SQL AG integration.
-### Upgrade device client SDK to a supported version for IotHub
+Learn more about [SQL virtual machine - EnableAzBackupForSQL (Enable Azure backup for SQL on your virtual machines)](/azure/backup/backup-azure-sql-database).
-Some or all of your devices are using outdated SDK and we recommend you upgrade to a supported version of SDK. See the details in the recommendation.
+### Improve PostgreSQL availability by removing inactive logical replication slots
-Learn more about [IoT hub - UpgradeDeviceClientSdk (Upgrade device client SDK to a supported version for IotHub)](https://aka.ms/iothubsdk).
+Our internal telemetry indicates that your PostgreSQL server might have inactive logical replication slots. This needs immediate attention. Inactive logical replication slots can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we strongly recommend that you take action immediately. Either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server.
-### Upgrade Edge Device Runtime to a supported version for Iot Hub
+Learn more about [PostgreSQL server - OrcasPostgreSqlLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_logical_decoding).
-Some or all of your Edge devices are using outdated versions and we recommend you upgrade to the latest supported version of the runtime. See the details in the recommendation.
+### Improve PostgreSQL availability by removing inactive logical replication slots
-Learn more about [IoT hub - UpgradeEdgeSdk (Upgrade Edge Device Runtime to a supported version for Iot Hub)](https://aka.ms/IOTEdgeSDKCheck).
+Our internal telemetry indicates that your PostgreSQL flexible server might have inactive logical replication slots. This needs immediate attention. Inactive logical replication slots can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we strongly recommend that you take action immediately. Either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server.
-## Azure Cosmos DB
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlFlexibleServerLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_flexible_server_logical_decoding).
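From `psql`, inactive logical slots can be listed with the `pg_replication_slots` catalog view; the slot name in the second statement is hypothetical. Dropping a slot is irreversible, so confirm it's genuinely unused first:

```sql
-- List logical replication slots with no active consumer
SELECT slot_name, active, restart_lsn
FROM pg_replication_slots
WHERE slot_type = 'logical' AND NOT active;

-- After confirming a slot is no longer needed, drop it by name
SELECT pg_drop_replication_slot('my_inactive_slot');
```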
### Configure Consistent indexing mode on your Azure Cosmos DB container
-We noticed that your Azure Cosmos DB container is configured with the Lazy indexing mode, which may impact the freshness of query results. We recommend switching to Consistent mode.
+We noticed that your Azure Cosmos DB container is configured with the Lazy indexing mode, which might impact the freshness of query results. We recommend switching to Consistent mode.
Learn more about [Azure Cosmos DB account - CosmosDBLazyIndexing (Configure Consistent indexing mode on your Azure Cosmos DB container)](/azure/cosmos-db/how-to-manage-indexing-policy).
Learn more about [Azure Cosmos DB account - CosmosDBKeyVaultWrap (Your Azure Cos
### Avoid being rate limited from metadata operations
-We found a high number of metadata operations on your account. Your data in Azure Cosmos DB, including metadata about your databases and collections, is distributed across partitions. Metadata operations have a system-reserved request unit (RU) limit. A high number of metadata operations can cause rate limiting. Avoid this by using static Azure Cosmos DB client instances in your code, and caching the names of databases and collections.
+We found a high number of metadata operations on your account. Your data in Azure Cosmos DB, including metadata about your databases and collections, is distributed across partitions. Metadata operations have a system-reserved request unit (RU) limit. A high number of metadata operations can cause rate limiting. Avoid rate limiting by using static Azure Cosmos DB client instances in your code, and caching the names of databases and collections.
Learn more about [Azure Cosmos DB account - CosmosDBHighMetadataOperations (Avoid being rate limited from metadata operations)](/azure/cosmos-db/performance-tips).
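The mitigation, one static client plus cached name lookups, can be sketched in Python. The class below is a stand-in for a real Cosmos DB client (the actual `azure-cosmos` SDK calls differ); the point is the caching pattern, not the API:

```python
from functools import lru_cache

# Hypothetical stand-in for a Cosmos DB client; it counts how many
# metadata lookups actually reach the service.
class FakeCosmosClient:
    def __init__(self):
        self.metadata_calls = 0

    def get_container(self, db_name, container_name):
        self.metadata_calls += 1  # each lookup spends system-reserved RUs
        return (db_name, container_name)

# One static client for the whole process lifetime, not one per request.
client = FakeCosmosClient()

@lru_cache(maxsize=None)
def container(db_name, container_name):
    # Cached: the metadata lookup runs once per (database, container) pair.
    return client.get_container(db_name, container_name)

for _ in range(1000):
    container("orders-db", "orders")

print(client.metadata_calls)  # 1 (all later calls hit the cache)
```

Creating a client or resolving database/collection names per request would instead issue a metadata operation every time, which is what triggers the rate limiting.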
Learn more about [Azure Cosmos DB account - CosmosDBMongoNudge36AwayFrom32 (Use
### Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated
-There is a critical bug in version 2.6.13 and lower, of the Azure Cosmos DB Async Java SDK v2 causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. These service errors happen after a large volume of transactions occur in the lifetime of an Azure Cosmos DB container. Note: There is a critical hotfix for the Async Java SDK v2, however we still highly recommend you migrate to the [Java SDK v4](../cosmos-db/sql/sql-api-sdk-java-v4.md).
+There's a critical bug in version 2.6.13 and lower of the Azure Cosmos DB Async Java SDK v2, causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. These service errors happen after a large volume of transactions occur in the lifetime of an Azure Cosmos DB container. Note: There's a critical hotfix for the Async Java SDK v2; however, we still highly recommend you migrate to the [Java SDK v4](../cosmos-db/sql/sql-api-sdk-java-v4.md).
Learn more about [Azure Cosmos DB account - CosmosDBMaxGlobalLSNReachedV2 (Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated)](../cosmos-db/sql/sql-api-sdk-async-java.md).

### Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue
-There is a critical bug in version 4.15 and lower of the Azure Cosmos DB Java SDK v4 causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. These service errors happen after a large volume of transactions occur in the lifetime of an Azure Cosmos DB container.
+There's a critical bug in version 4.15 and lower of the Azure Cosmos DB Java SDK v4, causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. These service errors happen after a large volume of transactions occur in the lifetime of an Azure Cosmos DB container.
Learn more about [Azure Cosmos DB account - CosmosDBMaxGlobalLSNReachedV4 (Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue)](../cosmos-db/sql/sql-api-sdk-java-v4.md).
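For Maven projects, upgrading means bumping the `com.azure:azure-cosmos` dependency past the affected releases; the version number below is illustrative, so check the SDK release notes for the currently recommended 4.x release:

```xml
<dependency>
  <groupId>com.azure</groupId>
  <artifactId>azure-cosmos</artifactId>
  <!-- illustrative; use the latest recommended 4.x release (above 4.15) -->
  <version>4.37.0</version>
</dependency>
```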
-## Fluid Relay
-### Upgrade your Azure Fluid Relay client library
-You have recently invoked the Azure Fluid Relay service with an old client library. Your Azure Fluid Relay client library should now be upgraded to the latest version to ensure your application remains operational. Upgrading provides the most up-to-date functionality and enhancements in performance and stability. For more information on the latest version to use and how to upgrade, see the following article.
+## Integration
-Learn more about [FluidRelay Server - UpgradeClientLibrary (Upgrade your Azure Fluid Relay client library)](https://github.com/microsoft/FluidFramework).
+### Upgrade to the latest FarmBeats API version
-## HDInsight
+We have identified calls to a FarmBeats API version that is scheduled for deprecation. We recommend switching to the latest FarmBeats API version to ensure uninterrupted access to FarmBeats, the latest features, and performance improvements.
-### Your cluster running Ubuntu 16.04 is out of support
+Learn more about [Azure FarmBeats - FarmBeatsApiVersion (Upgrade to the latest FarmBeats API version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
-We detected that your HDInsight cluster still uses Ubuntu 16.04 LTS. End of support for Azure HDInsight clusters on Ubuntu 16.04 LTS began on November 30, 2022. Existing clusters run as is without support from Microsoft. Consider rebuilding your cluster with the latest images.
+### Upgrade to the latest ADMA Java SDK version
-Learn more about [HDInsight cluster - ubuntu1604HdiClusters (Your cluster running Ubuntu 16.04 is out of support)](/azure/hdinsight/hdinsight-component-versioning#supported-hdinsight-versions).
+We have identified calls to an Azure Data Manager for Agriculture (ADMA) Java SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, the latest features, and performance improvements.
-### Upgrade your HDInsight Cluster
+Learn more about [Azure FarmBeats - FarmBeatsJavaSdkVersion (Upgrade to the latest ADMA Java SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
-We detected your cluster is not using the latest image. We recommend customers to use the latest versions of HDInsight Images as they bring in the best of open source updates, Azure updates and security fixes. HDInsight release happens every 30 to 60 days. Consider moving to the latest release.
+### Upgrade to the latest ADMA DotNet SDK version
-Learn more about [HDInsight cluster - upgradeHDInsightCluster (Upgrade your HDInsight Cluster)](/azure/hdinsight/hdinsight-release-notes).
+We have identified calls to an ADMA DotNet SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, the latest features, and performance improvements.
-### Your cluster was created one year ago
+Learn more about [Azure FarmBeats - FarmBeatsDotNetSdkVersion (Upgrade to the latest ADMA DotNet SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
-We detected your cluster was created 1 year ago. As part of the best practices, we recommend you to use the latest HDInsight images as they bring in the best of open source updates, Azure updates and security fixes. The recommended maximum duration for cluster upgrades is less than six months.
+### Upgrade to the latest ADMA JavaScript SDK version
-Learn more about [HDInsight cluster - clusterOlderThanAYear (Your cluster was created one year ago)](/azure/hdinsight/hdinsight-overview-before-you-start#keep-your-clusters-up-to-date).
+We have identified calls to an ADMA JavaScript SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, the latest features, and performance improvements.
-### Your Kafka Cluster Disks are almost full
+Learn more about [Azure FarmBeats - FarmBeatsJavaScriptSdkVersion (Upgrade to the latest ADMA JavaScript SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
-The data disks used by Kafka brokers in your HDInsight cluster are almost full. When that happens, the Apache Kafka broker process can't start and fails because of the disk full error. To mitigate, find the retention time for every topic, back up the files that are older and restart the brokers.
+### Upgrade to the latest ADMA Python SDK version
-Learn more about [HDInsight cluster - KafkaDiskSpaceFull (Your Kafka Cluster Disks are almost full)](https://aka.ms/kafka-troubleshoot-full-disk).
+We have identified calls to an ADMA Python SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, the latest features, and performance improvements.
-### Creation of clusters under custom VNet requires more permission
+Learn more about [Azure FarmBeats - FarmBeatsPythonSdkVersion (Upgrade to the latest ADMA Python SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
-Your clusters with custom VNet were created without VNet joining permission. Ensure that the users who perform create operations have permissions to the Microsoft.Network/virtualNetworks/subnets/join action before September 30, 2023.
+### SSL/TLS renegotiation blocked
-Learn more about [HDInsight cluster - EnforceVNetJoinPermissionCheck (Creation of clusters under custom VNet requires more permission)](https://aka.ms/hdinsightEnforceVnet).
+SSL/TLS renegotiation attempt blocked. Renegotiation happens when a client certificate is requested over an already established connection. When it's blocked, reading 'context.Request.Certificate' in policy expressions returns 'null'. To support client certificate authentication scenarios, enable 'Negotiate client certificate' on the listed hostnames. For browser-based clients, enabling this option might result in a certificate prompt being presented to the client.
-### Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster
+Learn more about [Api Management - TlsRenegotiationBlocked (SSL/TLS renegotiation blocked)](/azure/api-management/api-management-howto-mutual-certificates-for-clients).
-Starting July 1, 2020, you can't create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters run as is without support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
+### Hostname certificate rotation failed
-Learn more about [HDInsight cluster - KafkaVersionRetirement (Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster)](https://aka.ms/hdiretirekafka).
+The API Management service failed to refresh the hostname certificate from Key Vault. Ensure that the certificate exists in Key Vault and that the API Management service identity is granted secret read access. Otherwise, the API Management service can't retrieve certificate updates from Key Vault, which might lead to the service using a stale certificate and runtime API traffic being blocked as a result.
-### Deprecation of Older Spark Versions in HDInsight Spark cluster
+Learn more about [Api Management - HostnameCertRotationFail (Hostname certificate rotation failed)](https://aka.ms/apimdocs/customdomain).
-Starting July 1, 2020, you can't create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6, and Spark 2.3 on HDInsight 4.0. Existing clusters run as is without support from Microsoft.
-Learn more about [HDInsight cluster - SparkVersionRetirement (Deprecation of Older Spark Versions in HDInsight Spark cluster)](https://aka.ms/hdiretirespark).
-### Enable critical updates to be applied to your HDInsight clusters
+## Internet of Things
-HDInsight service is applying an important certificate related update to your cluster. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources associated with your clusters and applying this update. Take actions to allow HDInsight service to create or modify network resources such as Load balancer, Network interface and Public IP address, associated with your clusters before January 13, 2021 05:00 PM UTC. The HDInsight team is performing updates between January 13, 2021 05:00 PM UTC and January 16, 2021 05:00 PM UTC. Failure to apply this update may result in your clusters becoming unhealthy and unusable.
+### Upgrade device client SDK to a supported version for IotHub
-Learn more about [HDInsight cluster - GCSCertRotation (Enable critical updates to be applied to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
+Some or all of your devices are using an outdated SDK. We recommend upgrading to a supported SDK version. See the details in the recommendation.
-### Drop and recreate your HDInsight clusters to apply critical updates
+Learn more about [IoT hub - UpgradeDeviceClientSdk (Upgrade device client SDK to a supported version for IotHub)](https://aka.ms/iothubsdk).
-The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, due to some custom configuration changes, we're unable to apply the certificate updates on some of your clusters.
+### IoT Hub Potential Device Storm Detected
-Learn more about [HDInsight cluster - GCSCertRotationRound2 (Drop and recreate your HDInsight clusters to apply critical updates)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
+A device storm is when two or more devices are trying to connect to the IoT Hub using the same device ID credentials. When the second device (B) connects, it causes the first one (A) to become disconnected. Then (A) attempts to reconnect again, which causes (B) to get disconnected.
-### Drop and recreate your HDInsight clusters to apply critical updates
+Learn more about [IoT hub - IoTHubDeviceStorm (IoT Hub Potential Device Storm Detected)](https://aka.ms/IotHubDeviceStorm).
-The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, due to some custom configuration changes, we're unable to apply the certificate updates on some of your clusters. Drop and recreate your cluster before January 25, 2021 to prevent the cluster from becoming unhealthy and unusable.
+### Upgrade Device Update for IoT Hub SDK to a supported version
-Learn more about [HDInsight cluster - GCSCertRotationR3DropRecreate (Drop and recreate your HDInsight clusters to apply critical updates)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
+Your Device Update for IoT Hub Instance is using an outdated version of the SDK. We recommend you upgrade to the latest version for the latest fixes, performance improvements, and new feature capabilities.
-### Apply critical updates to your HDInsight clusters
+Learn more about [IoT hub - DU_SDK_Advisor_Recommendation (Upgrade Device Update for IoT Hub SDK to a supported version)](/azure/iot-hub-device-update/understand-device-update).
-The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources associated with your clusters and applying this update. Remove or update your policy assignment to allow HDInsight service to create or modify network resources such as Load balancer, Network interface and Public IP address, associated with your clusters before January 21, 2021 05:00 PM UTC. The HDInsight team is performing updates between January 21, 2021 05:00 PM UTC and January 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources in the same resource group and subnet where your cluster is. Failure to apply this update may result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before January 25, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service sends another notification if we failed to apply the update to your clusters.
+### IoT Hub Quota Exceeded Detected
-Learn more about [HDInsight cluster - GCSCertRotationR3PlanPatch (Apply critical updates to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
+We have detected that your IoT Hub has exceeded its daily message quota. To prevent your IoT Hub from exceeding its daily message quota in the future, add units or increase the SKU level.
-### Action required: Migrate your A8–A11 HDInsight cluster before 1 March 2021
+Learn more about [IoT hub - IoTHubQuotaExceededAdvisor (IoT Hub Quota Exceeded Detected)](/azure/iot-hub/troubleshoot-error-codes#403002-iothubquotaexceeded).
-You're receiving this notice because you have one or more active A8, A9, A10 or A11 HDInsight cluster. The A8-A11 virtual machines (VMs) are retired in all regions on 1 March 2021. After that date, all clusters using A8-A11 are deallocated. Migrate your affected clusters to another HDInsight supported VM (https://azure.microsoft.com/pricing/details/hdinsight/) before that date. For more information, see 'Learn More' link or contact us at askhdinsight@microsoft.com
+### Upgrade device client SDK to a supported version for Iot Hub
-Learn more about [HDInsight cluster - VM Deprecation (Action required: Migrate your A8–A11 HDInsight cluster before 1 March 2021)](https://azure.microsoft.com/updates/a8-a11-azure-virtual-machine-sizes-will-be-retired-on-march-1-2021/).
+Some or all of your devices are using an outdated SDK. We recommend upgrading to a supported SDK version. See the details in the link provided.
-## Hybrid Compute
+Learn more about [IoT hub - UpgradeDeviceClientSdk (Upgrade device client SDK to a supported version for IotHub)](https://aka.ms/iothubsdk).
-### Upgrade to the latest version of the Azure Connected Machine agent
+### Upgrade Edge Device Runtime to a supported version for Iot Hub
-The Azure Connected Machine agent is updated regularly with bug fixes, stability enhancements, and new functionality. Upgrade your agent to the latest version for the best Azure Arc experience.
+Some or all of your Edge devices are using outdated runtime versions. We recommend upgrading to the latest supported version of the runtime. See the details in the link provided.
-Learn more about [Machine - Azure Arc - ArcServerAgentVersion (Upgrade to the latest version of the Azure Connected Machine agent)](../azure-arc/servers/manage-agent.md).
+Learn more about [IoT hub - UpgradeEdgeSdk (Upgrade Edge Device Runtime to a supported version for Iot Hub)](https://aka.ms/IOTEdgeSDKCheck).
-## Kubernetes
-### Upgrade to Standard tier for mission-critical and production clusters
-This cluster has more than 10 nodes and has not enabled the Standard tier. The Kubernetes Control Plane on the Free tier comes with limited resources and is not intended for production use or any cluster with 10 or more nodes.
+## Media
-Learn more about [Kubernetes service - UseStandardpricingtier (Upgrade to Standard tier for mission-critical and production clusters)](/azure/aks/uptime-sla).
+### Increase Media Services quotas or limits to ensure continuity of service
-### Pod Disruption Budgets Recommended
+Your media account is about to hit its quota limits. Review current usage of Assets, Content Key Policies, and Streaming Policies for the media account. To avoid any disruption of service, request quota limits to be increased for the entities that are close to hitting the quota limit. You can request quota limits to be increased by opening a ticket and adding relevant details to it. Don't create extra Azure Media accounts in an attempt to obtain higher limits.
-Pod Disruption Budgets Recommended. Improve service high availability.
+Learn more about [Media Service - AccountQuotaLimit (Increase Media Services quotas or limits to ensure continuity of service.)](https://aka.ms/ams-quota-recommendation/).
-Learn more about [Kubernetes service - PodDisruptionBudgetsRecommended (Pod Disruption Budgets Recommended)](../aks/operator-best-practices-scheduler.md#plan-for-availability-using-pod-disruption-budgets).
-### Upgrade to the latest agent version of Azure Arc-enabled Kubernetes
-Upgrade to the latest agent version for the best Azure Arc enabled Kubernetes experience, improved stability and new functionality.
+## Networking
-Learn more about [Kubernetes - Azure Arc - Arc-enabled K8s agent version upgrade (Upgrade to the latest agent version of Azure Arc-enabled Kubernetes)](https://aka.ms/ArcK8sAgentUpgradeDocs).
+### Check Point virtual machine might lose Network Connectivity
-## Media Services
+We have identified that your virtual machine might be running a Check Point image version that can lose network connectivity during a platform servicing operation. We recommend that you upgrade to a newer version of the image. Contact Check Point for further instructions on how to upgrade your image.
-### Increase Media Services quotas or limits to ensure continuity of service
+Learn more about [Virtual machine - CheckPointPlatformServicingKnownIssueA (Check Point virtual machine might lose Network Connectivity.)](https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk151752&partition=Advanced&product=CloudGuard).
-Your media account is about to hit its quota limits. Review current usage of Assets, Content Key Policies and Stream Policies for the media account. To avoid any disruption of service, you should request quota limits to be increased for the entities that are closer to hitting quota limit. You can request quota limits to be increased by opening a ticket and adding relevant details to it. Don't create extra Azure Media accounts in an attempt to obtain higher limits.
+### Upgrade to the latest version of the Azure Connected Machine agent
-Learn more about [Media Service - AccountQuotaLimit (Increase Media Services quotas or limits to ensure continuity of service.)](https://aka.ms/ams-quota-recommendation/).
+The Azure Connected Machine agent is updated regularly with bug fixes, stability enhancements, and new functionality. Upgrade your agent to the latest version for the best Azure Arc experience.
-## Azure NetApp Files
+Learn more about [Connected Machine agent - Azure Arc - ArcServerAgentVersion (Upgrade to the latest version of the Azure Connected Machine agent)](../azure-arc/servers/manage-agent.md).
-### Implement disaster recovery strategies for your Azure NetApp Files Resources
+### Switch Secret version to 'Latest' for the Azure Front Door customer certificate
-To avoid data or functionality loss in the event of a regional or zonal disaster, implement common disaster recovery techniques such as cross region replication or cross zone replication for your Azure NetApp Files volumes
+We recommend configuring the Azure Front Door (AFD) customer certificate secret version to 'Latest' so that AFD refers to the latest secret version in Azure Key Vault and the secret can be rotated automatically.
-Learn more about [Volume - ANFCRRCZRRecommendation (Implement disaster recovery strategies for your Azure NetApp Files Resources)](https://aka.ms/anfcrr).
+Learn more about [Front Door Profile - SwitchVersionBYOC (Switch Secret version to 'Latest' for the Azure Front Door customer certificate)](https://aka.ms/how-to-configure-https-custom-domain#certificate-renewal-and-changing-certificate-types).
-### Azure NetApp Files Enable Continuous Availability for SMB Volumes
+### Validate domain ownership by adding a DNS TXT record to the DNS provider
-Recommendation to enable SMB volume for Continuous Availability.
+To validate domain ownership, add the DNS TXT record provided by Azure Front Door to your DNS provider.
-Learn more about [Volume - anfcaenablement (Azure NetApp Files Enable Continuous Availability for SMB Volumes)](https://aka.ms/anfdoc-continuous-availability).
+Learn more about [Front Door Profile - ValidateDomainOwnership (Validate domain ownership by adding DNS TXT record to DNS provider.)](https://aka.ms/how-to-add-custom-domain#domain-validation-state).
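As a quick self-check that the record is in place before revalidating, you can query it yourself. A small sketch, assuming the validation record uses the `_dnsauth.<domain>` name used by Azure Front Door (the token value itself comes from the portal):

```python
import subprocess

def dnsauth_record_name(domain: str) -> str:
    """Name of the TXT record Azure Front Door checks during domain validation."""
    return f"_dnsauth.{domain}"

def lookup_txt(domain: str) -> str:
    """Query the validation TXT record with nslookup (needs network access)."""
    result = subprocess.run(
        ["nslookup", "-type=TXT", dnsauth_record_name(domain)],
        capture_output=True, text=True, check=False,
    )
    return result.stdout
```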
-## Networking
+### Revalidate domain ownership for the Azure Front Door managed certificate renewal
+
+Azure Front Door can't automatically renew the managed certificate because the domain isn't CNAME-mapped to the AFD endpoint. Revalidate domain ownership for the managed certificate to be renewed automatically.
+
+Learn more about [Front Door Profile - RevalidateDomainOwnership (Revalidate domain ownership for the Azure Front Door managed certificate renewal)](https://aka.ms/how-to-add-custom-domain#domain-validation-state).
+
+### Renew the expired Azure Front Door customer certificate to avoid service disruption
+
+Some of the customer certificates for Azure Front Door Standard and Premium profiles have expired. Renew the certificate in time to avoid service disruption.
+
+Learn more about [Front Door Profile - RenewExpiredBYOC (Renew the expired Azure Front Door customer certificate to avoid service disruption.)](https://aka.ms/how-to-configure-https-custom-domain#use-your-own-certificate).
### Upgrade your SKU or add more instances to ensure fault tolerance
Deploying two or more medium or large-sized instances ensures business continuity.
Learn more about [Application gateway - AppGateway (Upgrade your SKU or add more instances to ensure fault tolerance)](https://aka.ms/aa_gatewayrec_learnmore).
-### Move to production gateway SKUs from Basic gateways
+### Avoid hostname override to ensure site integrity
-The VPN gateway Basic SKU is designed for development or testing scenarios. Move to a production SKU if you're using the VPN gateway for production purposes. The production SKUs offer higher number of tunnels, BGP support, active-active, custom IPsec/IKE policy in addition to higher stability and availability.
+Try to avoid overriding the hostname when configuring Application Gateway. A frontend domain on Application Gateway that differs from the one used to access the backend can potentially break cookies or redirect URLs. A different frontend domain isn't a problem in all situations, and certain categories of backends, like REST APIs, are less sensitive in general. Make sure the backend can deal with the domain difference, or update the Application Gateway configuration so the hostname doesn't need to be overwritten towards the backend. When used with App Service, attach a custom domain name to the Web App and avoid use of the `*.azurewebsites.net` host name towards the backend.
-Learn more about [Virtual network gateway - BasicVPNGateway (Move to production gateway SKUs from Basic gateways)](https://aka.ms/aa_basicvpngateway_learnmore).
+Learn more about [Application gateway - AppGatewayHostOverride (Avoid hostname override to ensure site integrity)](https://aka.ms/appgw-advisor-usecustomdomain).
-### Add at least one more endpoint to the profile, preferably in another Azure region
+### Azure WAF RuleSet CRS 3.1/3.2 has been updated with Log4j 2 vulnerability rule
-Profiles should have more than one endpoint to ensure availability if one of the endpoints fails. We also recommend that endpoints be in different regions.
+In response to Log4j 2 vulnerability (CVE-2021-44228), Azure Web Application Firewall (WAF) RuleSet CRS 3.1/3.2 has been updated on your Application Gateway to help provide extra protection from this vulnerability. The rules are available under Rule 944240 and no action is needed to enable them.
-Learn more about [Traffic Manager profile - GeneralProfile (Add at least one more endpoint to the profile, preferably in another Azure region)](https://aka.ms/AA1o0x4).
+Learn more about [Application gateway - AppGwLog4JCVEPatchNotification (Azure WAF RuleSet CRS 3.1/3.2 has been updated with log4j2 vulnerability rule)](https://aka.ms/log4jcve).
-### Add an endpoint configured to "All (World)"
+### Extra protection to mitigate Log4j 2 vulnerability (CVE-2021-44228)
-For geographic routing, traffic is routed to endpoints based on defined regions. When a region fails, there's no predefined failover. Having an endpoint where the Regional Grouping is configured to "All (World)" for geographic profiles avoids traffic black holing and guarantee service remains available.
+To mitigate the impact of Log4j 2 vulnerability, we recommend these steps:
-Learn more about [Traffic Manager profile - GeographicProfile (Add an endpoint configured to \""All (World)\"")](https://aka.ms/Rf7vc5).
+1) Upgrade Log4j 2 to version 2.15.0 on your backend servers. If upgrade isn't possible, follow the system property guidance link provided.
+2) Take advantage of WAF Core rule sets (CRS) by upgrading to WAF SKU.
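For step 1, when an immediate upgrade to a fixed release isn't possible, the widely documented interim mitigation (effective only on Log4j 2.10 and later, and not a substitute for upgrading) is to disable message lookups via a JVM system property or environment variable:

```
# JVM system property (add to the Java command line of the backend application)
-Dlog4j2.formatMsgNoLookups=true

# Equivalent environment variable
LOG4J_FORMAT_MSG_NO_LOOKUPS=true
```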
-### Add or move one endpoint to another Azure region
+Learn more about [Application gateway - AppGwLog4JCVEGenericNotification (Additional protection to mitigate Log4j2 vulnerability (CVE-2021-44228))](https://aka.ms/log4jcve).
-All endpoints associated to this proximity profile are in the same region. Users from other regions may experience long latency when attempting to connect. Adding or moving an endpoint to another region improves overall performance for proximity routing and provide better availability in case all endpoints in one region fail.
+### Update VNet permission of Application Gateway users
-Learn more about [Traffic Manager profile - ProximityProfile (Add or move one endpoint to another Azure region)](https://aka.ms/Ldkkdb).
+To improve security and provide a more consistent experience across Azure, all users must pass a permission check before creating or updating an Application Gateway in a Virtual Network. The users or service principals must have at least the Microsoft.Network/virtualNetworks/subnets/join/action permission.
+
+Learn more about [Application gateway - AppGwLinkedAccessFailureRecmmendation (Update VNet permission of Application Gateway users)](https://aka.ms/agsubnetjoin).
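The check only requires that this one action be present in the caller's effective role. A minimal Azure custom role sketch (the role name and scope are placeholders):

```json
{
  "Name": "Application Gateway Subnet Join (example)",
  "IsCustom": true,
  "Description": "Grants the subnet join permission required when creating or updating an Application Gateway in a VNet.",
  "Actions": [
    "Microsoft.Network/virtualNetworks/subnets/join/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```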
+
+### Use version-less Key Vault secret identifier to reference the certificates
+
+We strongly recommend that you use a version-less secret identifier to allow your application gateway resource to automatically retrieve the new certificate version, whenever available. Example: https://myvault.vault.azure.net/secrets/mysecret/
+
+Learn more about [Application gateway - AppGwAdvisorRecommendationForCertificateUpdate (Use version-less Key Vault secret identifier to reference the certificates)](https://aka.ms/agkvversion).
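A version-less identifier is simply the versioned secret URI with the trailing version segment removed. A small illustrative helper, not part of any Azure SDK:

```python
from urllib.parse import urlparse, urlunparse

def versionless_secret_id(secret_id: str) -> str:
    """Strip the trailing version segment from a Key Vault secret identifier.

    Versioned:     https://<vault>.vault.azure.net/secrets/<name>/<version>
    Version-less:  https://<vault>.vault.azure.net/secrets/<name>/
    """
    parts = urlparse(secret_id)
    segments = [s for s in parts.path.split("/") if s]
    if len(segments) == 3 and segments[0] == "secrets":
        segments = segments[:2]  # drop the pinned version
    return urlunparse(parts._replace(path="/" + "/".join(segments) + "/"))
```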
### Implement multiple ExpressRoute circuits in your Virtual Network for cross premises resiliency
Learn more about [Virtual network gateway - ExpressRouteGatewayRedundancy (Imple
### Implement ExpressRoute Monitor on Network Performance Monitor for end-to-end monitoring of your ExpressRoute circuit
-We have detected that ExpressRoute Monitor on Network Performance Monitor isn't currently monitoring your ExpressRoute circuit. ExpressRoute monitor provides end-to-end monitoring capabilities including: Loss, latency, and performance from on-premises to Azure and Azure to on-premises
+We have detected that ExpressRoute Monitor on Network Performance Monitor isn't currently monitoring your ExpressRoute circuit. ExpressRoute Monitor provides end-to-end monitoring capabilities, including loss, latency, and performance from on-premises to Azure and from Azure to on-premises.
Learn more about [ExpressRoute circuit - ExpressRouteGatewayE2EMonitoring (Implement ExpressRoute Monitor on Network Performance Monitor for end-to-end monitoring of your ExpressRoute circuit)](../expressroute/how-to-npm.md).
-### Avoid hostname override to ensure site integrity
-
-Try to avoid overriding the hostname when configuring Application Gateway. Having a domain on the frontend of Application Gateway different than the one used to access the backend, can potentially lead to cookies or redirect URLs being broken. This might not be the case in all situations, and certain categories of backends, like REST APIs, are less sensitive in general. Make sure the backend is able to deal with this or update the Application Gateway configuration so the hostname does not need to be overwritten towards the backend. When used with App Service, attach a custom domain name to the Web App and avoid use of the `*.azurewebsites.net` host name towards the backend.
-
-Learn more about [Application gateway - AppGatewayHostOverride (Avoid hostname override to ensure site integrity)](https://aka.ms/appgw-advisor-usecustomdomain).
### Use ExpressRoute Global Reach to improve your design for disaster recovery

You appear to have ExpressRoute circuits peered in at least two different locations. Connect them to each other using ExpressRoute Global Reach to allow traffic to continue flowing between your on-premises network and Azure environments if one circuit loses connectivity. You can establish Global Reach connections between circuits in different peering locations within the same metro or across metros.

Learn more about [ExpressRoute circuit - UseGlobalReachForDR (Use ExpressRoute Global Reach to improve your design for disaster recovery)](../expressroute/about-upgrade-circuit-bandwidth.md).
-### Azure WAF RuleSet CRS 3.1/3.2 has been updated with Log4j 2 vulnerability rule
+### Add at least one more endpoint to the profile, preferably in another Azure region
-In response to Log4j 2 vulnerability (CVE-2021-44228), Azure Web Application Firewall (WAF) RuleSet CRS 3.1/3.2 has been updated on your Application Gateway to help provide extra protection from this vulnerability. The rules are available under Rule 944240 and no action is needed to enable them.
+Profiles require more than one endpoint to ensure availability if one of the endpoints fails. We also recommend that endpoints be in different regions.
-Learn more about [Application gateway - AppGwLog4JCVEPatchNotification (Azure WAF RuleSet CRS 3.1/3.2 has been updated with log4j2 vulnerability rule)](https://aka.ms/log4jcve).
+Learn more about [Traffic Manager profile - GeneralProfile (Add at least one more endpoint to the profile, preferably in another Azure region)](https://aka.ms/AA1o0x4).
-### Additional protection to mitigate Log4j 2 vulnerability (CVE-2021-44228)
+### Add an endpoint configured to "All (World)"
-To mitigate the impact of Log4j 2 vulnerability, we recommend these steps:
+For geographic routing, traffic is routed to endpoints based on defined regions. When a region fails, there's no predefined failover. Having an endpoint where the Regional Grouping is configured to "All (World)" for geographic profiles avoids traffic black holing and guarantees that the service remains available.
-1) Upgrade Log4j 2 to version 2.15.0 on your backend servers. If upgrade isn't possible, follow the system property guidance link provided.
-2) Take advantage of WAF Core rule sets (CRS) by upgrading to WAF SKU.
+Learn more about [Traffic Manager profile - GeographicProfile (Add an endpoint configured to \""All (World)\"")](https://aka.ms/Rf7vc5).
-Learn more about [Application gateway - AppGwLog4JCVEGenericNotification (Additional protection to mitigate Log4j2 vulnerability (CVE-2021-44228))](https://aka.ms/log4jcve).
+### Add or move one endpoint to another Azure region
-### Use NAT gateway for outbound connectivity
+All endpoints associated to this proximity profile are in the same region. Users from other regions might experience long latency when attempting to connect. Adding or moving an endpoint to another region improves overall performance for proximity routing and provides better availability in case all endpoints in one region fail.
-Prevent risk of connectivity failures due to SNAT port exhaustion by using NAT gateway for outbound traffic from your virtual networks. NAT gateway scales dynamically and provides secure connections for traffic headed to the internet.
+Learn more about [Traffic Manager profile - ProximityProfile (Add or move one endpoint to another Azure region)](https://aka.ms/Ldkkdb).
-Learn more about [Virtual network - natGateway (Use NAT gateway for outbound connectivity)](/azure/load-balancer/load-balancer-outbound-connections#2-associate-a-nat-gateway-to-the-subnet).
-### Update VNet permission of Application Gateway users
+### Move to production gateway SKUs from Basic gateways
-To improve security and provide a more consistent experience across Azure, all users must pass a permission check before creating or updating an Application Gateway in a Virtual Network. The users or service principals must include at least Microsoft.Network/virtualNetworks/subnets/join/action permission.
+The VPN gateway Basic SKU is designed for development or testing scenarios. Move to a production SKU if you're using the VPN gateway for production purposes. The production SKUs offer a higher number of tunnels, BGP support, active-active configuration, and custom IPsec/IKE policy, in addition to higher stability and availability.
-Learn more about [Application gateway - AppGwLinkedAccessFailureRecmmendation (Update VNet permission of Application Gateway users)](https://aka.ms/agsubnetjoin).
+Learn more about [Virtual network gateway - BasicVPNGateway (Move to production gateway SKUs from Basic gateways)](https://aka.ms/aa_basicvpngateway_learnmore).
-### Use version-less Key Vault secret identifier to reference the certificates
+### Use NAT gateway for outbound connectivity
-We strongly recommend that you use a version-less secret identifier to allow your application gateway resource to automatically retrieve the new certificate version, whenever available. Example: https://myvault.vault.azure.net/secrets/mysecret/
+Prevent risk of connectivity failures due to SNAT port exhaustion by using NAT gateway for outbound traffic from your virtual networks. NAT gateway scales dynamically and provides secure connections for traffic headed to the internet.
-Learn more about [Application gateway - AppGwAdvisorRecommendationForCertificateUpdate (Use version-less Key Vault secret identifier to reference the certificates)](https://aka.ms/agkvversion).
+Learn more about [Virtual network - natGateway (Use NAT gateway for outbound connectivity)](/azure/load-balancer/load-balancer-outbound-connections#2-associate-a-nat-gateway-to-the-subnet).
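The association can be sketched with the Azure CLI; the resource group, VNet, and subnet names below are placeholders:

```shell
# Create a standard public IP and a NAT gateway, then attach it to the subnet.
az network public-ip create -g MyRG -n MyNatIp --sku Standard
az network nat gateway create -g MyRG -n MyNatGateway \
    --public-ip-addresses MyNatIp --idle-timeout 4
az network vnet subnet update -g MyRG --vnet-name MyVNet -n MySubnet \
    --nat-gateway MyNatGateway
```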
### Enable Active-Active gateways for redundancy
In active-active configuration, both instances of the VPN gateway establish S2S
Learn more about [Virtual network gateway - VNetGatewayActiveActive (Enable Active-Active gateways for redundancy)](https://aka.ms/aa_vpnha_learnmore).
-## Recovery Services
-### Enable soft delete for your Recovery Services vaults
+## SAP for Azure
-The soft delete option helps you retain your backup data in the Recovery Services vault for an extra duration after deletion. This gives you an opportunity to retrieve the data before it's permanently deleted.
+### Enable the 'concurrent-fencing' parameter in Pacemaker configuration in ASCS HA setup in SAP workloads
-Learn more about [Recovery Services vault - AB-SoftDeleteRsv (Enable soft delete for your Recovery Services vaults)](../backup/backup-azure-security-feature-cloud.md).
+The concurrent-fencing parameter, when set to true, enables fencing operations to be performed in parallel. Set this parameter to 'true' in the Pacemaker cluster configuration for the ASCS HA setup.
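On RHEL-based Pacemaker clusters, this is a single cluster property. A sketch using pcs (exact subcommands vary slightly between pcs versions), run on one cluster node:

```shell
# Enable parallel fencing operations cluster-wide.
sudo pcs property set concurrent-fencing=true
```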
-### Enable Cross Region Restore for your recovery Services Vault
+Learn more about [Central Server Instance - ConcurrentFencingHAASCSRH (Enable the 'concurrent-fencing' parameter in Pacemaker configuration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
-Enabling cross region restore for your geo-redundant vaults
+### Ensure that stonith is enabled for the Pacemaker configuration in ASCS HA setup in SAP workloads
-Learn more about [Recovery Services vault - Enable CRR (Enable Cross Region Restore for your recovery Services Vault)](../backup/backup-azure-arm-restore-vms.md#cross-region-restore).
+In a Pacemaker cluster, node-level fencing is implemented using a STONITH (Shoot The Other Node In The Head) resource. Ensure that 'stonith-enabled' is set to 'true' in the HA cluster configuration of your SAP workload.
-## Search
+Learn more about [Central Server Instance - StonithEnabledHAASCSRH (Ensure that stonith is enabled for the Pacemaker configuration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
-### You are close to exceeding storage quota of 2GB. Create a Standard search service
+### Set the stonith timeout to 144 for the cluster configuration in ASCS HA setup in SAP workloads
-You're close to exceeding storage quota of 2GB. Create a Standard search service. Indexing operations stop working when storage quota is exceeded.
+Set the stonith timeout to 144 for the HA cluster, as recommended for SAP on Azure.
-Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity).
+Learn more about [Central Server Instance - StonithTimeOutHAASCS (Set the stonith timeout to 144 for the cluster configuration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
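On SLES-based clusters managed with crmsh, this is one cluster property (pcs-based clusters have an equivalent `pcs property set` command):

```shell
sudo crm configure property stonith-timeout="144"
```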
-### You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service
+### Set the corosync token in Pacemaker cluster to 30000 for ASCS HA setup in SAP workloads
-You're close to exceeding storage quota of 50MB. Create a Basic or Standard search service. Indexing operations stop working when storage quota is exceeded.
+The corosync token setting determines the timeout that is used directly, or as a base for the real token timeout calculation, in HA clusters. Set the corosync token to 30000, as recommended for SAP on Azure, to allow memory-preserving maintenance.
-Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity).
+Learn more about [Central Server Instance - CorosyncTokenHAASCSRH (Set the corosync token in Pacemaker cluster to 30000 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
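The token timeout lives in the totem section of corosync.conf on every cluster node; after changing it, reload or restart the cluster services during a maintenance window. A minimal fragment:

```
# /etc/corosync/corosync.conf (fragment)
totem {
    token: 30000
}
```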
-### You are close to exceeding your available storage quota. Add additional partitions if you need more storage
+### Set the expected votes parameter to 2 in Pacemaker configuration in ASCS HA setup in SAP workloads
-you're close to exceeding your available storage quota. Add extra partitions if you need more storage. After exceeding storage quota, you can still query, but indexing operations no longer work.
+In a two-node HA cluster, set the quorum expected votes to 2, as recommended for SAP on Azure.
-Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity).
+Learn more about [Central Server Instance - ExpectedVotesHAASCSRH (Set the expected votes parameter to 2 in Pacemaker configuration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
-## Azure SQL
+### Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in ASCS HA setup in SAP workloads
-### Enable Azure backup for SQL on your virtual machines
+The corosync token_retransmits_before_loss_const determines how many token retransmits the system attempts before timeout in HA clusters. Set the totem.token_retransmits_before_loss_const to 10 as per recommendation for ASCS HA setup.
-Enable backups for SQL databases on your virtual machines using Azure backup and realize the benefits of zero-infrastructure backup, point-in-time restore, and central management with SQL AG integration.
+Learn more about [Central Server Instance - TokenRestransmitsHAASCSSLE (Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-Learn more about [SQL virtual machine - EnableAzBackupForSQL (Enable Azure backup for SQL on your virtual machines)](/azure/backup/backup-azure-sql-database).
+### Set the corosync token in Pacemaker cluster to 30000 for ASCS HA setup in SAP workloads
-## Storage
+The corosync token setting determines the timeout that is used directly or as a base for real token timeout calculation in HA clusters. Set the corosync token to 30000 as per recommendation for SAP on Azure to allow memory-preserving maintenance.
-### You have ADLS Gen1 Accounts Which Need to be Migrated to ADLS Gen2
+Learn more about [Central Server Instance - CorosyncTokenHAASCSSLE (Set the corosync token in Pacemaker cluster to 30000 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-As previously announced, Azure Data Lake Storage Gen1 will be retired on February 29, 2024. We highly recommend that you migrate your data lake to Azure Data Lake Storage Gen2, which offers advanced capabilities specifically designed for big data analytics, and is built on top of Azure Blob Storage.
+### Set the 'corosync max_messages' in Pacemaker cluster to 20 for ASCS HA setup in SAP workloads
-Learn more about [Data lake store account - ADLSGen1_Deprecation (You have ADLS Gen1 Accounts Which Needs to be Migrated to ADLS Gen2)](https://azure.microsoft.com/updates/action-required-switch-to-azure-data-lake-storage-gen2-by-29-february-2024/).
+The corosync max_messages constant specifies the maximum number of messages allowed to be sent by one processor once the token is received. We recommend that you set max_messages to 20 in the Pacemaker cluster configuration.
-### Enable Soft Delete to protect your blob data
+Learn more about [Central Server Instance - CorosyncMaxMessagesHAASCSSLE (Set the 'corosync max_messages' in Pacemaker cluster to 20 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-After enabling the soft delete option, deleted data transitions to a soft deleted state instead of being permanently deleted. When data is overwritten, a soft deleted snapshot is generated to save the state of the overwritten data. You can configure the amount of time soft deleted data is recoverable before it permanently expires.
+### Set the 'corosync consensus' in Pacemaker cluster to 36000 for ASCS HA setup in SAP workloads
-Learn more about [Storage Account - StorageSoftDelete (Enable Soft Delete to protect your blob data)](https://aka.ms/softdelete).
+The corosync parameter 'consensus' specifies in milliseconds how long to wait for consensus to be achieved before starting a new round of membership in the cluster configuration. We recommend that you set 'consensus' to 1.2 times the corosync token (36000) in the Pacemaker cluster configuration for ASCS HA setup.
-### Use Managed Disks for storage accounts reaching capacity limit
+Learn more about [Central Server Instance - CorosyncConsensusHAASCSSLE (Set the 'corosync consensus' in Pacemaker cluster to 36000 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
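As a quick arithmetic check of the 1.2 × token rule: with the recommended token of 30000 ms, the consensus value works out to exactly the recommended 36000 ms.

```shell
# consensus = 1.2 × token; integer arithmetic (×12, ÷10) avoids floats in sh.
token=30000
consensus=$((token * 12 / 10))
echo "$consensus"   # prints 36000
```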
-We have identified that you're using Premium SSD Unmanaged Disks in Storage account(s) that are about to reach Premium Storage capacity limit. To avoid failures when the limit is reached, we recommend migrating to Managed Disks that don't have account capacity limit. This migration can be done through the portal in less than 5 minutes.
+### Set the expected votes parameter to 2 in the cluster configuration in ASCS HA setup in SAP workloads
-Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Disks for storage accounts reaching capacity limit)](https://aka.ms/premium_blob_quota).
+In a two node HA cluster, set the quorum parameter expected_votes to 2 as per recommendation for SAP on Azure.
-### Configure blob backup
+Learn more about [Central Server Instance - ExpectedVotesHAASCSSLE (Set the expected votes parameter to 2 in the cluster configuration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-Configure blob backup
+### Set the two_node parameter to 1 in the cluster configuration in ASCS HA setup in SAP workloads
-Learn more about [Storage Account - ConfigureBlobBackup (Configure blob backup)](/azure/backup/blob-backup-overview).
+In a two node HA cluster, set the quorum parameter 'two_node' to 1 as per recommendation for SAP on Azure.
-## Subscriptions
+Learn more about [Central Server Instance - TwoNodesParametersHAASCSSLE (Set the two_node parameter to 1 in the cluster configuration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-### Turn on Azure Backup to get simple, reliable, and cost-effective protection for your data
+### Set the 'corosync join' in Pacemaker cluster to 60 for ASCS HA setup in SAP workloads
-Keep your information and applications safe with robust, one click backup from Azure. Activate Azure Backup to get cost-effective protection for a wide range of workloads including VMs, SQL databases, applications, and file shares.
+The corosync join timeout specifies in milliseconds how long to wait for join messages in the membership protocol. We recommend that you set 'join' to 60 in the Pacemaker cluster configuration for ASCS HA setup.
-Learn more about [Subscription - AzureBackupService (Turn on Azure Backup to get simple, reliable, and cost-effective protection for your data)](/azure/backup/).
+Learn more about [Central Server Instance - CorosyncJoinHAASCSSLE (Set the 'corosync join' in Pacemaker cluster to 60 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-## Web
+### Ensure that stonith is enabled for the cluster configuration in ASCS HA setup in SAP workloads
-### Consider scaling out your App Service Plan to avoid CPU exhaustion
+In a Pacemaker cluster, node level fencing is implemented using the STONITH (Shoot The Other Node in the Head) resource. Ensure that 'stonith-enabled' is set to 'true' in the HA cluster configuration.
-Your App reached >90% CPU over the last couple of days. High CPU utilization can lead to runtime issues with your apps, to solve this you could scale out your app.
+Learn more about [Central Server Instance - StonithEnabledHAASCS (Ensure that stonith is enabled for the cluster configuration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
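On SUSE clusters, these fencing-related settings live as cluster properties in the cluster information base (CIB). The excerpt below is a sketch of how they might appear in `crm configure show` output; the property names are standard Pacemaker cluster options, but the grouping and values shown are illustrative of the recommendations in this section, not taken verbatim from it:

```text
property cib-bootstrap-options: \
        stonith-enabled=true \
        stonith-timeout=900 \
        concurrent-fencing=true
```

Each property can also be set individually, for example with `crm configure property stonith-enabled=true` on SUSE or `pcs property set stonith-enabled=true` on Red Hat.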
-Learn more about [App service - AppServiceCPUExhaustion (Consider scaling out your App Service Plan to avoid CPU exhaustion)](https://aka.ms/antbc-cpu).
+### Set stonith-timeout to 900 in Pacemaker configuration with Azure fence agent for ASCS HA setup
-### Fix the backup database settings of your App Service resource
+Set the stonith-timeout to 900 for reliable functioning of Pacemaker for ASCS HA setup. This stonith-timeout setting is applicable if you're using the Azure fence agent for fencing with either managed identity or service principal.
-Your app's backups are consistently failing due to invalid DB configuration, you can find more details in backup history.
+Learn more about [Central Server Instance - StonithTimeOutHAASCSSLE (Set stonith-timeout to 900 in Pacemaker configuration with Azure fence agent for ASCS HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-Learn more about [App service - AppServiceFixBackupDatabaseSettings (Fix the backup database settings of your App Service resource)](https://aka.ms/antbc).
+### Enable the 'concurrent-fencing' parameter in Pacemaker configuration in ASCS HA setup in SAP workloads
-### Consider scaling up your App Service Plan SKU to avoid memory exhaustion
+When set to 'true', the concurrent-fencing parameter enables fencing operations to be performed in parallel. Set this parameter to 'true' in the Pacemaker cluster configuration for ASCS HA setup.
-The App Service Plan containing your app reached >85% memory allocated. High memory consumption can lead to runtime issues with your apps. Investigate which app in the App Service Plan is exhausting memory and scale up to a higher plan with more memory resources if needed.
+Learn more about [Central Server Instance - ConcurrentFencingHAASCSSLE (Enable the 'concurrent-fencing' parameter in Pacemaker configuration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-Learn more about [App service - AppServiceMemoryExhaustion (Consider scaling up your App Service Plan SKU to avoid memory exhaustion)](https://aka.ms/antbc-memory).
+### Create the softdog config file in Pacemaker configuration for ASCS HA setup in SAP workloads
-### Scale up your App Service resource to remove the quota limit
+The softdog timer is loaded as a kernel module in the Linux OS. This timer triggers a system reset if it detects that the system has hung. Ensure that the softdog configuration file is created in the Pacemaker cluster for ASCS HA setup.
-Your app is part of a shared App Service plan and has met its quota multiple times. Once quota is met, your web app can't accept incoming requests. To remove the quota, upgrade to a Standard plan.
+Learn more about [Central Server Instance - SoftdogConfigHAASCSSLE (Create the softdog config file in Pacemaker configuration for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
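A sketch of creating the softdog configuration file. On a real SUSE node the target path is `/etc/modules-load.d/softdog.conf` (so the module is loaded at boot, typically followed by `modprobe softdog`); a local file stands in for it here so the snippet is self-contained:

```shell
# Create a module-load config that tells systemd to load softdog at boot.
echo softdog > softdog.conf
cat softdog.conf   # prints: softdog
```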
-Learn more about [App service - AppServiceRemoveQuota (Scale up your App Service resource to remove the quota limit)](https://aka.ms/ant-asp).
+### Ensure the softdog module is loaded for Pacemaker in ASCS HA setup in SAP workloads
-### Use deployment slots for your App Service resource
+The softdog timer is loaded as a kernel module in the Linux OS. This timer triggers a system reset if it detects that the system has hung. First ensure that you created the softdog configuration file, then load the softdog module in the Pacemaker configuration for ASCS HA setup.
-You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce deployment impact to your production web app.
+Learn more about [Central Server Instance - softdogmoduleloadedHAASCSSLE (Ensure the softdog module is loaded for Pacemaker in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-Learn more about [App service - AppServiceUseDeploymentSlots (Use deployment slots for your App Service resource)](https://aka.ms/ant-staging).
+### Ensure that there is one instance of fence_azure_arm in Pacemaker configuration for ASCS HA setup
-### Fix the backup storage settings of your App Service resource
+fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure that there's one instance of fence_azure_arm in the Pacemaker configuration for ASCS HA setup. The fence_azure_arm requirement applies if you're using the Azure fence agent for fencing with either managed identity or service principal.
-Your app's backups are consistently failing due to invalid storage settings, you can find more details in backup history.
+Learn more about [Central Server Instance - FenceAzureArmHAASCSSLE (Ensure that there's one instance of fence_azure_arm in Pacemaker configuration for ASCS HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-Learn more about [App service - AppServiceFixBackupStorageSettings (Fix the backup storage settings of your App Service resource)](https://aka.ms/antbc).
+### Enable HA ports in the Azure Load Balancer for ASCS HA setup in SAP workloads
-### Move your App Service resource to Standard or higher and use deployment slots
+Enable HA ports in the load balancing rules for the HA setup of the ASCS instance in SAP workloads. Open the load balancer, select 'load balancing rules', and add or edit the rule to enable the recommended settings.
-You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce deployment impact to your production web app.
+Learn more about [Central Server Instance - ASCSHAEnableLBPorts (Enable HA ports in the Azure Load Balancer for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/high-availability-guide-rhel-with-hana-ascs-ers-dialog-instance).
-Learn more about [App service - AppServiceStandardOrHigher (Move your App Service resource to Standard or higher and use deployment slots)](https://aka.ms/ant-staging).
+### Enable Floating IP in the Azure Load balancer for ASCS HA setup in SAP workloads
-### Consider scaling out your App Service Plan to optimize user experience and availability
+Enable floating IP in the load balancing rules for the Azure Load Balancer for the HA setup of the ASCS instance in SAP workloads. Open the load balancer, select 'load balancing rules', and add or edit the rule to enable the recommended settings.
-Consider scaling out your App Service Plan to at least two instances to avoid cold start delays and service interruptions during routine maintenance.
+Learn more about [Central Server Instance - ASCSHAEnableFloatingIpLB (Enable Floating IP in the Azure Load balancer for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/high-availability-guide-rhel-with-hana-ascs-ers-dialog-instance).
-Learn more about [App Service plan - AppServiceNumberOfInstances (Consider scaling out your App Service Plan to optimize user experience and availability.)](https://aka.ms/appsvcnuminstances).
+### Set the Idle timeout in Azure Load Balancer to 30 minutes for ASCS HA setup in SAP workloads
-### Consider upgrading the hosting plan of the Static Web App(s) in this subscription to Standard SKU
+To prevent load balancer timeout, make sure that all Azure load balancing rules have 'Idle timeout (minutes)' set to the maximum value of 30 minutes. Open the load balancer, select 'load balancing rules', and add or edit the rule to enable the recommended settings.
-The combined bandwidth used by all the Free SKU Static Web Apps in this subscription is exceeding the monthly limit of 100GB. Consider upgrading these apps to Standard SKU to avoid throttling.
+Learn more about [Central Server Instance - ASCSHASetIdleTimeOutLB (Set the Idle timeout in Azure Load Balancer to 30 minutes for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
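The load balancer changes above (HA ports, floating IP, and idle timeout) can also be scripted. A hedged sketch using the Azure CLI; the resource group, load balancer, and rule names are placeholders, and `--protocol All` with front-end and back-end port 0 is what enables HA ports:

```azurecli
az network lb rule update \
    --resource-group MyResourceGroup \
    --lb-name MySapIlb \
    --name MyHaRule \
    --protocol All \
    --frontend-port 0 \
    --backend-port 0 \
    --floating-ip true \
    --idle-timeout 30
```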
-Learn more about [Static Web App - StaticWebAppsUpgradeToStandardSKU (Consider upgrading the hosting plan of the Static Web App(s) in this subscription to Standard SKU.)](https://azure.microsoft.com/pricing/details/app-service/static/).
+### Disable TCP timestamps on VMs placed behind Azure Load Balancer in ASCS HA setup in SAP workloads
-### Application code should be fixed as worker process crashed due to Unhandled Exception
+Disable TCP timestamps on VMs placed behind Azure Load Balancer. Enabled TCP timestamps cause the health probes to fail due to TCP packets being dropped by the VM's guest OS TCP stack. Dropped TCP packets cause the load balancer to mark the endpoint as down.
-We identified the following thread resulted in an unhandled exception for your App and application code should be fixed to prevent impact to application availability. A crash happens when an exception in your code terminates the process.
+Learn more about [Central Server Instance - ASCSLBHADisableTCP (Disable TCP timestamps on VMs placed behind Azure Load Balancer in ASCS HA setup in SAP workloads)](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-general-update-november-2021/ba-p/2807619#network-settings-and-tuning-for-sap-on-azure).
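Disabling TCP timestamps is a one-line sysctl. A sketch assuming the usual `/etc/sysctl.d/` drop-in mechanism (a local file is written here so the snippet is self-contained; on a real VM you'd place the file under `/etc/sysctl.d/` and run `sysctl --system`):

```shell
# Persist the setting that disables TCP timestamps (net.ipv4.tcp_timestamps=0).
echo 'net.ipv4.tcp_timestamps = 0' > sysctl-sap.conf
cat sysctl-sap.conf
```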
-Learn more about [App service - AppServiceProactiveCrashMonitoring (Application code should be fixed as worker process crashed due to Unhandled Exception)](https://azure.github.io/AppService/2020/08/11/Crash-Monitoring-Feature-in-Azure-App-Service.html).
+### Enable stonith in the cluster configuration in HA enabled SAP workloads for VMs with Redhat OS
-### Consider changing your App Service configuration to 64-bit
+In a Pacemaker cluster, node level fencing is implemented using the STONITH (Shoot The Other Node in the Head) resource. Ensure that 'stonith-enabled' is set to 'true' in the HA cluster configuration of your SAP workload.
-We identified your application is running in 32-bit and the memory is reaching the 2GB limit. Consider switching to 64-bit processes so you can take advantage of the extra memory available in your Web Worker role. This action triggers a web app restart, so schedule accordingly.
+Learn more about [Database Instance - StonithEnabledHARH (Enable stonith in the cluster configuration in HA enabled SAP workloads for VMs with Redhat OS)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
-Learn more about [App service 32-bit limitations](/troubleshoot/azure/app-service/web-apps-performance-faqs#i-see-the-message-worker-process-requested-recycle-due-to-percent-memory-limit-how-do-i-address-this-issue).
+### Set the stonith timeout to 144 for the cluster configuration in HA enabled SAP workloads
-## SAP solutions on Azure
+Set the stonith timeout to 144 for the HA cluster as per recommendation for SAP on Azure.
-### Review SAP configuration for timeout values used with Azure NetApp Files
+Learn more about [Database Instance - StonithTimeoutHASLE (Set the stonith timeout to 144 for the cluster configuration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-High availability of SAP while used with Azure NetApp Files relies on setting proper timeout values to prevent disruption to your application. Review the documentation to ensure your configuration meets the timeout values as noted in the documentation.
+### Enable stonith in the cluster configuration in HA enabled SAP workloads for VMs with SUSE OS
-Learn more about [Volume - SAPTimeoutsANF (Review SAP configuration for timeout values used with Azure NetApp Files)](/azure/sap/workloads/get-started).
+In a Pacemaker cluster, node level fencing is implemented using the STONITH (Shoot The Other Node in the Head) resource. Ensure that 'stonith-enabled' is set to 'true' in the HA cluster configuration.
-### Enable HA ports in the Azure Load Balancer for ASCS HA setup in SAP workloads
+Learn more about [Database Instance - StonithEnabledHASLE (Enable stonith in the cluster configuration in HA enabled SAP workloads for VMs with SUSE OS)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-Enable HA ports in the Load balancing rules for HA set up of ASCS instance in SAP workloads. Open the load balancer, select 'load balancing rules' and add/edit the rule to enable the recommended settings.
+### Set stonith-timeout to 900 in Pacemaker configuration with Azure fence agent for HANA DB HA setup
-Learn more about [Central Server Instance - ASCSHAEnableLBPorts (Enable HA ports in the Azure Load Balancer for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Set the stonith-timeout to 900 for reliable functioning of the Pacemaker for HANA DB HA setup. This setting is important if you're using the Azure fence agent for fencing with either managed identity or service principal.
-### Enable Floating IP in the Azure Load balancer for ASCS HA setup in SAP workloads
+Learn more about [Database Instance - StonithTimeOutSuseHDB (Set stonith-timeout to 900 in Pacemaker configuration with Azure fence agent for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-Enable floating IP in the load balancing rules for the Azure Load Balancer for HA set up of ASCS instance in SAP workloads. Open the load balancer, select 'load balancing rules' and add/edit the rule to enable the recommended settings.
+### Set the corosync token in Pacemaker cluster to 30000 for HA enabled HANA DB for VM with Redhat OS
-Learn more about [Central Server Instance - ASCSHAEnableFloatingIpLB (Enable Floating IP in the Azure Load balancer for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+The corosync token setting determines the timeout that is used directly or as a base for real token timeout calculation in HA clusters. Set the corosync token to 30000 as per recommendation for SAP on Azure to allow memory-preserving maintenance.
-### Set the Idle timeout in Azure Load Balancer to 30 minutes for ASCS HA setup in SAP workloads
+Learn more about [Database Instance - CorosyncTokenHARH (Set the corosync token in Pacemaker cluster to 30000 for HA enabled HANA DB for VM with Redhat OS)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
-To prevent load balancer timeout, make sure that all Azure Load Balancing Rules have: 'Idle timeout (minutes)' set to the maximum value of 30 minutes. Open the load balancer, select 'load balancing rules' and add/edit the rule to enable the recommended settings.
+### Set the expected votes parameter to 2 in the cluster configuration in HA enabled SAP workloads
-Learn more about [Central Server Instance - ASCSHASetIdleTimeOutLB (Set the Idle timeout in Azure Load Balancer to 30 minutes for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+In a two node HA cluster, set the quorum votes to 2 as per recommendation for SAP on Azure.
-### Enable HA ports in the Azure Load Balancer for HANA DB HA setup in SAP workloads
+Learn more about [Database Instance - ExpectedVotesParamtersHARH (Set the expected votes parameter to 2 in the cluster configuration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
-Enable HA ports in the Load balancing rules for HA set up of HANA DB instance in SAP workloads. Open the load balancer, select 'load balancing rules' and add/edit the rule to enable the recommended settings.
+### Set the corosync token in Pacemaker cluster to 30000 for HA enabled HANA DB for VM with SUSE OS
-Learn more about [Database Instance - DBHAEnableLBPorts (Enable HA ports in the Azure Load Balancer for HANA DB HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+The corosync token setting determines the timeout that is used directly or as a base for real token timeout calculation in HA clusters. Set the corosync token to 30000 as per recommendation for SAP on Azure to allow memory-preserving maintenance.
-### Set the Idle timeout in Azure Load Balancer to 30 minutes for HANA DB HA setup in SAP workloads
+Learn more about [Database Instance - CorosyncTokenHASLE (Set the corosync token in Pacemaker cluster to 30000 for HA enabled HANA DB for VM with SUSE OS)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-To prevent load balancer timeout, make sure that all Azure Load Balancing Rules have: 'Idle timeout (minutes)' set to the maximum value of 30 minutes. Open the load balancer, select 'load balancing rules' and add/edit the rule to enable the recommended settings.
+### Set parameter PREFER_SITE_TAKEOVER to 'true' in the Pacemaker configuration for HANA DB HA setup
-Learn more about [Database Instance - DBHASetIdleTimeOutLB (Set the Idle timeout in Azure Load Balancer to 30 minutes for HANA DB HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+The parameter PREFER_SITE_TAKEOVER in the SAP HANA topology defines whether the HANA SR resource agent prefers to take over to the secondary instance instead of restarting the failed primary locally. Set it to 'true' for reliable functioning of the HANA DB HA setup.
-### Enable Floating IP in the Azure Load balancer for HANA DB HA setup in SAP workloads
+Learn more about [Database Instance - PreferSiteTakeOverHARH (Set parameter PREFER_SITE_TAKEOVER to 'true' in the Pacemaker configuration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
-Enable floating IP in the load balancing rules for the Azure Load Balancer for HA set up of HANA DB instance in SAP workloads. Open the load balancer, select 'load balancing rules' and add/edit the rule to enable the recommended settings.
+### Enable the 'concurrent-fencing' parameter in the Pacemaker configuration for HANA DB HA setup
-Learn more about [Database Instance - DBHAEnableFloatingIpLB (Enable Floating IP in the Azure Load balancer for HANA DB HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+When set to 'true', the concurrent-fencing parameter enables fencing operations to be performed in parallel. Set this parameter to 'true' in the Pacemaker cluster configuration for HANA DB HA setup.
-### Disable TCP timestamps on VMs placed behind Azure Load Balancer in ASCS HA setup in SAP workloads
+Learn more about [Database Instance - ConcurrentFencingHARH (Enable the 'concurrent-fencing' parameter in the Pacemaker configuration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
-Disable TCP timestamps on VMs placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail due to TCP packets being dropped by the VM's guest OS TCP stack, causing the load balancer to mark the endpoint as down.
+### Set parameter PREFER_SITE_TAKEOVER to 'true' in the cluster configuration in HA enabled SAP workloads
-Learn more about [Central Server Instance - ASCSLBHADisableTCP (Disable TCP timestamps on VMs placed behind Azure Load Balancer in ASCS HA setup in SAP workloads)](/azure/sap/workloads/sap-hana-high-availability).
+The parameter PREFER_SITE_TAKEOVER in the SAP HANA topology defines whether the HANA SR resource agent prefers to take over to the secondary instance instead of restarting the failed primary locally. Set it to 'true' for reliable functioning of the HANA DB HA setup.
-### Disable TCP timestamps on VMs placed behind Azure Load Balancer in HANA DB HA setup in SAP workloads
+Learn more about [Database Instance - PreferSiteTakeoverHDB (Set parameter PREFER_SITE_TAKEOVER to 'true' in the cluster configuration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-Disable TCP timestamps on VMs placed behind Azure Load Balancer. Enabled TCP timestamps cause the health probes to fail due to TCP packets being dropped by the VM's guest OS TCP stack, causing the load balancer to mark the endpoint as down.
+### Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in HA enabled SAP workloads
-Learn more about [Database Instance - DBLBHADisableTCP (Disable TCP timestamps on VMs placed behind Azure Load Balancer in HANA DB HA setup in SAP workloads)](/azure/load-balancer/load-balancer-custom-probe-overview).
+The corosync token_retransmits_before_loss_const determines how many token retransmits are attempted before timeout in HA clusters. Set the totem.token_retransmits_before_loss_const to 10 as per recommendation for HANA DB HA setup.
-### Enable stonith in the cluster cofiguration in HA enabled SAP workloads for VMs with Redhat OS
+Learn more about [Database Instance - TokenRetransmitsHDB (Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-In a Pacemaker cluster, the implementation of node level fencing is done using STONITH (Shoot The Other Node in the Head) resource. Ensure that 'stonith-enable' is set to 'true' in the HA cluster configuration of your SAP workload.
+### Set the expected votes parameter to 2 in the cluster configuration in HA enabled SAP workloads
-Learn more about [Database Instance - StonithEnabledHARH (Enable stonith in the cluster cofiguration in HA enabled SAP workloads for VMs with Redhat OS)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
+In a two node HA cluster, set the quorum parameter expected_votes to 2 as per recommendation for SAP on Azure.
-### Set the corosync token in Pacemaker cluster to 30000 for HA enabled HANA DB for VM with Redhat OS
+Learn more about [Database Instance - ExpectedVotesSuseHDB (Set the expected votes parameter to 2 in the cluster configuration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-The corosync token setting determines the timeout that is used directly or as a base for real token timeout calculation in HA clusters. Set the corosync token to 30000 as per recommendation for SAP on Azure to allow memory-preserving maintenance.
+### Set the two_node parameter to 1 in the cluster configuration in HA enabled SAP workloads
-Learn more about [Database Instance - CorosyncTokenHARH (Set the corosync token in Pacemaker cluster to 30000 for HA enabled HANA DB for VM with Redhat OS)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
+In a two node HA cluster, set the quorum parameter 'two_node' to 1 as per recommendation for SAP on Azure.
-### Set the expected votes parameter to 2 in the cluster cofiguration in HA enabled SAP workloads
+Learn more about [Database Instance - TwoNodeParameterSuseHDB (Set the two_node parameter to 1 in the cluster configuration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-In case of a two node HA cluster, set the quorum votes to 2 as per recommendation for SAP on Azure.
+### Enable the 'concurrent-fencing' parameter in the cluster configuration in HA enabled SAP workloads
-Learn more about [Database Instance - ExpectedVotesParamtersHARH (Set the expected votes parameter to 2 in the cluster cofiguration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
+When set to 'true', the concurrent-fencing parameter enables fencing operations to be performed in parallel. Set this parameter to 'true' in the Pacemaker cluster configuration for HANA DB HA setup.
-### Set the stonith timeout to 144 for the cluster cofiguration in HA enabled SAP workloads
+Learn more about [Database Instance - ConcurrentFencingSuseHDB (Enable the 'concurrent-fencing' parameter in the cluster configuration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-Set the stonith timeout to 144 for HA cluster as per recommendation for SAP on Azure.
+### Set the 'corosync join' in Pacemaker cluster to 60 for HA enabled HANA DB in SAP workloads
-Learn more about [Database Instance - StonithTimeoutHASLE (Set the stonith timeout to 144 for the cluster cofiguration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+The corosync join timeout specifies in milliseconds how long to wait for join messages in the membership protocol. We recommend that you set 'join' to 60 in the Pacemaker cluster configuration for HANA DB HA setup.
-### Enable stonith in the cluster cofiguration in HA enabled SAP workloads for VMs with SUSE OS
+Learn more about [Database Instance - CorosyncHDB (Set the 'corosync join' in Pacemaker cluster to 60 for HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-In a Pacemaker cluster, the implementation of node level fencing is done using STONITH (Shoot The Other Node in the Head) resource. Ensure that 'stonith-enable' is set to 'true' in the HA cluster configuration.
+### Set the 'corosync max_messages' in Pacemaker cluster to 20 for HA enabled HANA DB in SAP workloads
-Learn more about [Database Instance - StonithEnabledHASLE (Enable stonith in the cluster cofiguration in HA enabled SAP workloads for VMs with SUSE OS)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+The corosync max_messages constant specifies the maximum number of messages that one processor may send once the token is received. We recommend that you set max_messages to 20 in the Pacemaker cluster configuration for HANA DB HA setup.
-### Set the corosync token in Pacemaker cluster to 30000 for HA enabled HANA DB for VM with SUSE OS
+Learn more about [Database Instance - CorosyncMaxMessageHDB (Set the 'corosync max_messages' in Pacemaker cluster to 20 for HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-The corosync token setting determines the timeout that is used directly or as a base for real token timeout calculation in HA clusters. Set the corosync token to 30000 as per recommendation for SAP on Azure to allow memory-preserving maintenance.
+### Set the 'corosync consensus' in Pacemaker cluster to 36000 for HA enabled HANA DB in SAP workloads
-Learn more about [Database Instance - CorosyncTokenHASLE (Set the corosync token in Pacemaker cluster to 30000 for HA enabled HANA DB for VM with SUSE OS)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+The corosync parameter 'consensus' specifies in milliseconds how long to wait for consensus to be achieved before starting a new round of membership in the cluster configuration. We recommend that you set it to 1.2 times the corosync token in the Pacemaker cluster configuration for HANA DB HA setup.
-### Enable the 'concurrent-fencing' parameter in Pacemaker cofiguration in ASCS HA setup in SAP workloads
+Learn more about [Database Instance - CorosyncConsensusHDB (Set the 'corosync consensus' in Pacemaker cluster to 36000 for HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
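Taken together, the corosync timing recommendations above (join 60, max_messages 20, consensus 36000, and the token value of 30000 that the 1.2× consensus ratio implies) can be sketched as a totem fragment of `/etc/corosync/corosync.conf`. This is an illustrative fragment staged to a temporary file, not a complete configuration; verify the values against your distribution's SAP HA guide:

```shell
# Illustrative sketch only: stage a corosync totem fragment with the timing
# values recommended above (token 30000 is implied by consensus = 1.2 x token).
# On a real cluster node these settings live in /etc/corosync/corosync.conf.
cat <<'EOF' > /tmp/totem-fragment.conf
totem {
    token: 30000
    consensus: 36000
    join: 60
    max_messages: 20
}
EOF
cat /tmp/totem-fragment.conf
```

After editing the real file, restart corosync on each node so the new timing values take effect.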
-The concurrent-fencing parameter when set to true, enables the fencing operations to be performed in parallel. Set this parameter to 'true' in the pacemaker cluster configuration for ASCS HA setup.
+### Create the softdog config file in Pacemaker configuration for HA enabled HANA DB in SAP workloads
-Learn more about [Central Server Instance - ConcurrentFencingHAASCSRH (Enable the 'concurrent-fencing' parameter in Pacemaker cofiguration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
+The softdog timer is loaded as a kernel module in the Linux OS. This timer triggers a system reset if it detects that the system has hung. Ensure that the softdog configuration file is created in the Pacemaker cluster for HANA DB HA setup.
-### Ensure that stonith is enabled for the Pacemaker cofiguration in ASCS HA setup in SAP workloads
+Learn more about [Database Instance - SoftdogConfigSuseHDB (Create the softdog config file in Pacemaker configuration for HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-In a Pacemaker cluster, the implementation of node level fencing is done using STONITH (Shoot The Other Node in the Head) resource. Ensure that 'stonith-enable' is set to 'true' in the HA cluster configuration of your SAP workload.
+### Ensure that there is one instance of fence_azure_arm in Pacemaker configuration for HANA DB HA setup
-Learn more about [Central Server Instance - StonithEnabledHAASCSRH (Ensure that stonith is enabled for the Pacemaker cofiguration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
+The fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure that there's one instance of fence_azure_arm in the Pacemaker configuration for HANA DB HA setup. The fence_azure_arm instance requirement applies if you're using the Azure fence agent for fencing with either a managed identity or a service principal.
-### Set the corosync token in Pacemaker cluster to 30000 for ASCS HA setup in SAP workloads
+Learn more about [Database Instance - FenceAzureArmSuseHDB (Ensure that there's one instance of fence_azure_arm in Pacemaker configuration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-The corosync token setting determines the timeout that is used directly or as a base for real token timeout calculation in HA clusters. Set the corosync token to 30000 as per recommendation for SAP on Azure to allow memory-preserving maintenance.
+### Ensure the softdog module is loaded in for Pacemaker in HA enabled HANA DB in SAP workloads
-Learn more about [Central Server Instance - CorosyncTokenHAASCSRH (Set the corosync token in Pacemaker cluster to 30000 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
+The softdog timer is loaded as a kernel module in the Linux OS. This timer triggers a system reset if it detects that the system has hung. First ensure that you created the softdog configuration file, then load the softdog module in the Pacemaker configuration for HANA DB HA setup.
-### Set parameter PREFER_SITE_TAKEOVER to 'true' in the Pacemaker cofiguration for HANA DB HA setup
+Learn more about [Database Instance - SoftdogModuleSuseHDB (Ensure the softdog module is loaded in for Pacemaker in HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
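The two softdog steps above (create the config file, then load the module) can be sketched as follows. The staging path is for illustration only; on the cluster node the file belongs in `/etc/modules-load.d/` and the commented commands require root:

```shell
# Sketch of the softdog setup described above. Staged under /tmp for
# illustration; on a real node write to /etc/modules-load.d/softdog.conf.
echo softdog > /tmp/softdog.conf
# modprobe softdog        # load the module on the node (requires root)
# lsmod | grep softdog    # verify the module is loaded
cat /tmp/softdog.conf
```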
-The parameter PREFER_SITE_TAKEOVER in SAP HANA topology defines if the HANA SR resource agent should prefer to takeover to the secondary instance instead of restarting the failed primary locally. Set it to 'true' for reliable function of HANA DB HA setup.
+### Set the Idle timeout in Azure Load Balancer to 30 minutes for HANA DB HA setup in SAP workloads
-Learn more about [Database Instance - PreferSiteTakeOverHARH (Set parameter PREFER_SITE_TAKEOVER to 'true' in the Pacemaker cofiguration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
+To prevent load balancer timeout, make sure that all Azure Load Balancer load balancing rules have 'Idle timeout (minutes)' set to the maximum value of 30. Open the load balancer, select 'Load balancing rules', and add or edit the rule to apply the recommended settings.
-### Set the expected votes parameter to 2 in Pacemaker cofiguration in ASCS HA setup in SAP workloads
+Learn more about [Database Instance - DBHASetIdleTimeOutLB (Set the Idle timeout in Azure Load Balancer to 30 minutes for HANA DB HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-In case of a two node HA cluster, set the quorum votes to 2 as per recommendation for SAP on Azure.
+### Enable Floating IP in the Azure Load balancer for HANA DB HA setup in SAP workloads
-Learn more about [Central Server Instance - ExpectedVotesHAASCSRH (Set the expected votes parameter to 2 in Pacemaker cofiguration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
+Enable floating IP in the load balancing rules for the Azure Load Balancer for the HA setup of the HANA DB instance in SAP workloads. Open the load balancer, select 'Load balancing rules', and add or edit the rule to apply the recommended settings.
-### Enable the 'concurrent-fencing' parameter in the Pacemaker cofiguration for HANA DB HA setup
+Learn more about [Database Instance - DBHAEnableFloatingIpLB (Enable Floating IP in the Azure Load balancer for HANA DB HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-The concurrent-fencing parameter when set to true, enables the fencing operations to be performed in parallel. Set this parameter to 'true' in the pacemaker cluster configuration for HANA DB HA setup.
+### Enable HA ports in the Azure Load Balancer for HANA DB HA setup in SAP workloads
-Learn more about [Database Instance - ConcurrentFencingHARH (Enable the 'concurrent-fencing' parameter in the Pacemaker cofiguration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
+Enable HA ports in the load balancing rules for the HA setup of the HANA DB instance in SAP workloads. Open the load balancer, select 'Load balancing rules', and add or edit the rule to apply the recommended settings.
-### Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in ASCS HA setup in SAP workloads
+Learn more about [Database Instance - DBHAEnableLBPorts (Enable HA ports in the Azure Load Balancer for HANA DB HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
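Assuming the Azure CLI, the three load balancer rule settings above (30-minute idle timeout, floating IP, and HA ports, which the CLI expresses as protocol All with port 0) might be applied with a single rule update like the following. The resource names are placeholders, and the command is echoed as a dry run:

```shell
# Dry-run sketch (placeholder names) of the Azure Load Balancer rule settings
# recommended above: idle timeout 30 min, floating IP, and HA ports
# (--protocol All with frontend/backend port 0). Remove 'echo' to apply.
RG=my-rg; LB=my-lb; RULE=hana-ha-rule   # hypothetical resource names
echo az network lb rule update -g "$RG" --lb-name "$LB" -n "$RULE" \
    --idle-timeout 30 --floating-ip true \
    --protocol All --frontend-port 0 --backend-port 0 | tee /tmp/lb-rule-cmd.txt
```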
-The corosync token_retransmits_before_loss_const determines how many token retransmits the system attempts before timeout in HA clusters. Set the totem.token_retransmits_before_loss_const to 10 as per recommendation for ASCS HA setup.
+### Disable TCP timestamps on VMs placed behind Azure Load Balancer in HANA DB HA setup in SAP workloads
-Learn more about [Central Server Instance - TokenRestransmitsHAASCSSLE (Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Disable TCP timestamps on VMs placed behind Azure Load Balancer. Enabled TCP timestamps cause the health probes to fail due to TCP packets being dropped by the VM's guest OS TCP stack. Dropped TCP packets cause the load balancer to mark the endpoint as down.
-### Set the corosync token in Pacemaker cluster to 30000 for ASCS HA setup in SAP workloads
+Learn more about [Database Instance - DBLBHADisableTCP (Disable TCP timestamps on VMs placed behind Azure Load Balancer in HANA DB HA setup in SAP workloads)](/azure/load-balancer/load-balancer-custom-probe-overview).
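On a Linux VM, the TCP timestamps setting above maps to the `net.ipv4.tcp_timestamps` kernel parameter. A sketch, staged under /tmp for illustration (on the VM, persist the file under `/etc/sysctl.d/` and apply it with `sysctl` as root):

```shell
# Sketch: disable TCP timestamps as recommended above. Staged to /tmp here;
# on the VM, write to /etc/sysctl.d/ and run the sysctl command as root.
echo 'net.ipv4.tcp_timestamps = 0' > /tmp/98-disable-tcp-timestamps.conf
# sysctl -w net.ipv4.tcp_timestamps=0   # apply immediately (requires root)
cat /tmp/98-disable-tcp-timestamps.conf
```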
-The corosync token setting determines the timeout that is used directly or as a base for real token timeout calculation in HA clusters. Set the corosync token to 30000 as per recommendation for SAP on Azure to allow memory-preserving maintenance.
-Learn more about [Central Server Instance - CorosyncTokenHAASCSSLE (Set the corosync token in Pacemaker cluster to 30000 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+## Storage
-### Set the 'corosync max_messages' in Pacemaker cluster to 20 for ASCS HA setup in SAP workloads
+### Enable soft delete for your Recovery Services vaults
-The corosync max_messages constant specifies the maximum number of messages that may be sent by one processor on receipt of the token. We recommend you set to 20 times the corosync token parameter in Pacemaker cluster configuration.
+The soft delete option helps you retain your backup data in the Recovery Services vault for an extra duration after deletion. The extra duration gives you an opportunity to retrieve the data before it's permanently deleted.
-Learn more about [Central Server Instance - CorosyncMaxMessagesHAASCSSLE (Set the 'corosync max_messages' in Pacemaker cluster to 20 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Learn more about [Recovery Services vault - AB-SoftDeleteRsv (Enable soft delete for your Recovery Services vaults)](../backup/backup-azure-security-feature-cloud.md).
-### Set the 'corosync consensus' in Pacemaker cluster to 36000 for ASCS HA setup in SAP workloads
+### Enable Cross Region Restore for your Recovery Services vault
-The corosync parameter 'consensus' specifies in milliseconds how long to wait for consensus to be achieved before starting a new round of membership in the cluster configuration. We recommend that you set 1.2 times the corosync token in Pacemaker cluster configuration for ASCS HA setup.
+Enable cross region restore for your geo-redundant Recovery Services vaults.
-Learn more about [Central Server Instance - CorosyncConsensusHAASCSSLE (Set the 'corosync consensus' in Pacemaker cluster to 36000 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Learn more about [Recovery Services vault - Enable CRR (Enable Cross Region Restore for your Recovery Services Vault)](../backup/backup-azure-arm-restore-vms.md#cross-region-restore).
-### Set the expected votes parameter to 2 in the cluster cofiguration in ASCS HA setup in SAP workloads
+### Enable Backups on your virtual machines
-In case of a two node HA cluster, set the quorum parameter expected_votes to 2 as per recommendation for SAP on Azure.
+Enable backups for your virtual machines and secure your data
-Learn more about [Central Server Instance - ExpectedVotesHAASCSSLE (Set the expected votes parameter to 2 in the cluster cofiguration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Learn more about [Virtual machine (classic) - EnableBackup (Enable Backups on your virtual machines)](../backup/backup-overview.md).
-### Set the stonith timeout to 144 for the cluster cofiguration in ASCS HA setup in SAP workloads
+### Configure blob backup
-Set the stonith timeout to 144 for HA cluster as per recommendation for SAP on Azure.
+Configure blob backup
-Learn more about [Central Server Instance - StonithTimeOutHAASCS (Set the stonith timeout to 144 for the cluster cofiguration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Learn more about [Storage Account - ConfigureBlobBackup (Configure blob backup)](/azure/backup/blob-backup-overview).
-### Set parameter PREFER_SITE_TAKEOVER to 'true' in the cluster cofiguration in HA enabled SAP workloads
+### Turn on Azure Backup to get simple, reliable, and cost-effective protection for your data
-The parameter PREFER_SITE_TAKEOVER in SAP HANA topology defines if the HANA SR resource agent should prefer to takeover to the secondary instance instead of restarting the failed primary locally. Set it to 'true' for reliable function of HANA DB HA setup.
+Keep your information and applications safe with robust, one-click backup from Azure. Activate Azure Backup to get cost-effective protection for a wide range of workloads including VMs, SQL databases, applications, and file shares.
-Learn more about [Database Instance - PreferSiteTakeoverHDB (Set parameter PREFER_SITE_TAKEOVER to 'true' in the cluster cofiguration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Learn more about [Subscription - AzureBackupService (Turn on Azure Backup to get simple, reliable, and cost-effective protection for your data)](/azure/backup/).
-### Set the two_node parameter to 1 in the cluster cofiguration in ASCS HA setup in SAP workloads
+### You have ADLS Gen1 Accounts Which Need to be Migrated to ADLS Gen2
-In case of a two node HA cluster, set the quorum parameter 'two_node' to 1 as per recommendation for SAP on Azure.
+As previously announced, Azure Data Lake Storage Gen1 will be retired on February 29, 2024. We highly recommend that you migrate your data lake to Azure Data Lake Storage Gen2. Azure Data Lake Storage Gen2 offers advanced capabilities designed for big data analytics, and is built on top of Azure Blob Storage.
-Learn more about [Central Server Instance - TwoNodesParametersHAASCSSLE (Set the two_node parameter to 1 in the cluster cofiguration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Learn more about [Data lake store account - ADLSGen1_Deprecation (You have ADLS Gen1 Accounts Which Need to be Migrated to ADLS Gen2)](https://azure.microsoft.com/updates/action-required-switch-to-azure-data-lake-storage-gen2-by-29-february-2024/).
-### Set the 'corosync join' in Pacemaker cluster to 60 for ASCS HA setup in SAP workloads
+### Enable Soft Delete to protect your blob data
-The corosync join timeout specifies in milliseconds how long to wait for join messages in the membership protocol. We recommend that you set 60 in Pacemaker cluster configuration for ASCS HA setup.
+After enabling the soft delete option, deleted data transitions to a soft deleted state instead of being permanently deleted. When data is overwritten, a soft deleted snapshot is generated to save the state of the overwritten data. You can configure the amount of time soft deleted data is recoverable before it permanently expires.
-Learn more about [Central Server Instance - CorosyncJoinHAASCSSLE (Set the 'corosync join' in Pacemaker cluster to 60 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Learn more about [Storage Account - StorageSoftDelete (Enable Soft Delete to protect your blob data)](https://aka.ms/softdelete).
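Assuming the Azure CLI and placeholder resource names, blob soft delete with a retention window might be enabled as follows (echoed as a dry run; the 7-day retention is an illustrative choice):

```shell
# Dry-run sketch (placeholder names): enable blob soft delete with a 7-day
# retention window, as described above. Remove 'echo' to apply.
ACCOUNT=mystorageacct; RG=my-rg   # hypothetical names
echo az storage account blob-service-properties update \
    --account-name "$ACCOUNT" -g "$RG" \
    --enable-delete-retention true --delete-retention-days 7 | tee /tmp/softdelete-cmd.txt
```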
-### Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in HA enabled SAP workloads
+### Use Managed Disks for storage accounts reaching capacity limit
-The corosync token_retransmits_before_loss_const determines how many token retransmits should be attempted before timeout in HA clusters. Set the totem.token_retransmits_before_loss_const to 10 as per recommendation for HANA DB HA setup.
+We have identified that you're using Premium SSD Unmanaged Disks in storage accounts that are about to reach the Premium Storage capacity limit. To avoid failures when the limit is reached, we recommend migrating to Managed Disks, which don't have an account capacity limit. This migration can be done through the portal in less than 5 minutes.
-Learn more about [Database Instance - TokenRetransmitsHDB (Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Disks for storage accounts reaching capacity limit)](https://aka.ms/premium_blob_quota).
-### Set the expected votes parameter to 2 in the cluster cofiguration in HA enabled SAP workloads
+### Use Managed Disks to improve data reliability
-Set the expected votes parameter to 2 in the cluster cofiguration in HA enabled SAP workloads.
+Virtual machines in an Availability Set with disks that share either storage accounts or storage scale units aren't resilient to single storage scale unit failures during outages. Migrate to Azure Managed Disks to ensure that the disks of different VMs in the Availability Set are sufficiently isolated to avoid a single point of failure.
-Learn more about [Database Instance - ExpectedVotesSuseHDB (Set the expected votes parameter to 2 in the cluster cofiguration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Learn more about [Availability set - ManagedDisksAvSet (Use Managed Disks to improve data reliability)](https://aka.ms/aa_avset_manageddisk_learnmore).
-### Set the two_node parameter to 1 in the cluster cofiguration in HA enabled SAP workloads
+### Implement disaster recovery strategies for your Azure NetApp Files Resources
-In case of a two node HA cluster, set the quorum parameter 'two_node' to 1 as per recommendation for SAP on Azure.
+To avoid data or functionality loss in the event of a regional or zonal disaster, implement common disaster recovery techniques such as cross region replication or cross zone replication for your Azure NetApp Files volumes.
-Learn more about [Database Instance - TwoNodeParameterSuseHDB (Set the two_node parameter to 1 in the cluster cofiguration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Learn more about [Volume - ANFCRRCZRRecommendation (Implement disaster recovery strategies for your Azure NetApp Files Resources)](https://aka.ms/anfcrr).
-### Set the 'corosync join' in Pacemaker cluster to 60 for HA enabled HANA DB in SAP workloads
+### Azure NetApp Files Enable Continuous Availability for SMB Volumes
-The corosync join timeout specifies in milliseconds how long to wait for join messages in the membership protocol. We recommend that you set 60 in Pacemaker cluster configuration for HANA DB HA setup.
+Enable Continuous Availability for your SMB volumes.
-Learn more about [Database Instance - CorosyncHDB (Set the 'corosync join' in Pacemaker cluster to 60 for HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Learn more about [Volume - anfcaenablement (Azure NetApp Files Enable Continuous Availability for SMB Volumes)](https://aka.ms/anfdoc-continuous-availability).
-### Ensure that stonith is enabled for the cluster cofiguration in ASCS HA setup in SAP workloads
+### Review SAP configuration for timeout values used with Azure NetApp Files
-In a Pacemaker cluster, the implementation of node level fencing is done using STONITH (Shoot The Other Node in the Head) resource. Ensure that 'stonith-enable' is set to 'true' in the HA cluster configuration.
+High availability of SAP when used with Azure NetApp Files relies on setting proper timeout values to prevent disruption to your application. Review the documentation to ensure your configuration meets the recommended timeout values.
-Learn more about [Central Server Instance - StonithEnabledHAASCS (Ensure that stonith is enabled for the cluster cofiguration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Learn more about [Volume - SAPTimeoutsANF (Review SAP configuration for timeout values used with Azure NetApp Files)](/azure/sap/workloads/get-started).
-### Enable the 'concurrent-fencing' parameter in the cluster cofiguration in HA enabled SAP workloads
-The concurrent-fencing parameter when set to true, enables the fencing operations to be performed in parallel. Set this parameter to 'true' in the pacemaker cluster configuration for HANA DB HA setup.
-Learn more about [Database Instance - ConcurrentFencingSuseHDB (Enable the 'concurrent-fencing' parameter in the cluster cofiguration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-### Set the 'corosync max_messages' in Pacemaker cluster to 20 for HA enabled HANA DB in SAP workloads
+## Web
-The corosync max_messages constant specifies the maximum number of messages that may be sent by one processor on receipt of the token. We recommend that you set 20 times the corosync token parameter in Pacemaker cluster configuration.
+### Consider scaling out your App Service Plan to avoid CPU exhaustion
-Learn more about [Database Instance - CorosyncMaxMessageHDB (Set the 'corosync max_messages' in Pacemaker cluster to 20 for HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Your app reached >90% CPU over the last couple of days. High CPU utilization can lead to runtime issues with your apps; to solve this, consider scaling out your app.
-### Set the 'corosync consensus' in Pacemaker cluster to 36000 for HA enabled HANA DB in SAP workloads
+Learn more about [App service - AppServiceCPUExhaustion (Consider scaling out your App Service Plan to avoid CPU exhaustion)](https://aka.ms/antbc-cpu).
-The corosync parameter 'consensus' specifies in milliseconds how long to wait for consensus to be achieved before starting a new round of membership in the cluster configuration. We recommend that you set 1.2 times the corosync token in Pacemaker cluster configuration for HANA DB HA setup.
+### Fix the backup database settings of your App Service resource
-Learn more about [Database Instance - CorosyncConsensusHDB (Set the 'corosync consensus' in Pacemaker cluster to 36000 for HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Your app's backups are consistently failing due to invalid DB configuration; you can find more details in the backup history.
-### Enable the 'concurrent-fencing' parameter in Pacemaker cofiguration in ASCS HA setup in SAP workloads
+Learn more about [App service - AppServiceFixBackupDatabaseSettings (Fix the backup database settings of your App Service resource)](https://aka.ms/antbc).
-The concurrent-fencing parameter when set to true, enables the fencing operations to be performed in parallel. Set this parameter to 'true' in the pacemaker cluster configuration for ASCS HA setup.
+### Consider scaling up your App Service Plan SKU to avoid memory exhaustion
-Learn more about [Central Server Instance - ConcurrentFencingHAASCSSLE (Enable the 'concurrent-fencing' parameter in Pacemaker cofiguration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+The App Service Plan containing your app reached >85% memory allocated. High memory consumption can lead to runtime issues with your apps. Investigate which app in the App Service Plan is exhausting memory and scale up to a higher plan with more memory resources if needed.
-### Set stonith-timeout to 900 in Pacemaker configuration with Azure fence agent for ASCS HA setup
+Learn more about [App service - AppServiceMemoryExhaustion (Consider scaling up your App Service Plan SKU to avoid memory exhaustion)](https://aka.ms/antbc-memory).
-stonith-timeout should be set to 900 for reliable function of the Pacemaker for ASCS HA setup. This is applicable if you are using Azure fence agent for fencing with either managed identity or service principal.
+### Scale up your App Service resource to remove the quota limit
-Learn more about [Central Server Instance - StonithTimeOutHAASCSSLE (Set stonith-timeout to 900 in Pacemaker configuration with Azure fence agent for ASCS HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Your app is part of a shared App Service plan and has met its quota multiple times. Once the quota is met, your web app can't accept incoming requests. To remove the quota, upgrade to a Standard plan.
-### Create the softdog config file in Pacemaker configuration for ASCS HA setup in SAP workloads
+Learn more about [App service - AppServiceRemoveQuota (Scale up your App Service resource to remove the quota limit)](https://aka.ms/ant-asp).
-The softdog timer is loaded as a kernel module in linux OS. This timer triggers a system reset if it detects that the system has hung. Ensure that the softdog configuration file is created in the Pacemaker cluster forASCS HA set up.
+### Use deployment slots for your App Service resource
-Learn more about [Central Server Instance - SoftdogConfigHAASCSSLE (Create the softdog config file in Pacemaker configuration for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce deployment impact to your production web app.
-### Ensure the softdog module is loaded in for Pacemaker in ASCS HA setup in SAP workloads
+Learn more about [App service - AppServiceUseDeploymentSlots (Use deployment slots for your App Service resource)](https://aka.ms/ant-staging).
-The softdog timer is loaded as a kernel module in linux OS. This timer triggers a system reset if it detects that the system has hung. First ensure that you created the softdog configuration file, then load the softdog module in the Pacemaker configuration for ASCS HA setup.
+### Fix the backup storage settings of your App Service resource
-Learn more about [Central Server Instance - softdogmoduleloadedHAASCSSLE (Ensure the softdog module is loaded in for Pacemaker in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Your app's backups are consistently failing due to invalid storage settings; you can find more details in the backup history.
-### Create the softdog config file in Pacemaker configuration for HA enable HANA DB in SAP workloads
+Learn more about [App service - AppServiceFixBackupStorageSettings (Fix the backup storage settings of your App Service resource)](https://aka.ms/antbc).
-The softdog timer is loaded as a kernel module in linux OS. This timer triggers a system reset if it detects that the system has hung. Ensure that the softdog configuration file is created in the Pacemaker cluster for HANA DB HA setup.
+### Move your App Service resource to Standard or higher and use deployment slots
-Learn more about [Database Instance - SoftdogConfigSuseHDB (Create the softdog config file in Pacemaker configuration for HA enable HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce deployment impact to your production web app.
-### Set stonith-timeout to 900 in Pacemaker configuration with Azure fence agent for HANA DB HA setup
+Learn more about [App service - AppServiceStandardOrHigher (Move your App Service resource to Standard or higher and use deployment slots)](https://aka.ms/ant-staging).
-stonith-timeout should be set to 900 for reliable function of the Pacemaker for HANA DB HA setup. This is applicable if you are using Azure fence agent for fencing with either managed identity or service principal.
+### Consider scaling out your App Service Plan to optimize user experience and availability
-Learn more about [Database Instance - StonithTimeOutSuseHDB (Set stonith-timeout to 900 in Pacemaker configuration with Azure fence agent for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Consider scaling out your App Service Plan to at least two instances to avoid cold start delays and service interruptions during routine maintenance.
+
+Learn more about [App Service plan - AppServiceNumberOfInstances (Consider scaling out your App Service Plan to optimize user experience and availability.)](https://aka.ms/appsvcnuminstances).
+
+### Application code needs fixing when the worker process crashes due to Unhandled Exception
-### There should be one instance of fence_azure_arm in Pacemaker configuration for ASCS HA setup
+We identified the following thread that resulted in an unhandled exception for your App and the application code must be fixed to prevent impact to application availability. A crash happens when an exception in your code terminates the process.
-fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure that there is one instance of fence_azure_arm in the pacemaker configuration for ASCS HA setup. This is applicable if you are using Azure fence agent for fencing with either managed identity or service principal.
+Learn more about [App service - AppServiceProactiveCrashMonitoring (Application code must be fixed as worker process crashed due to Unhandled Exception)](https://azure.github.io/AppService/2020/08/11/Crash-Monitoring-Feature-in-Azure-App-Service.html).
-Learn more about [Central Server Instance - FenceAzureArmHAASCSSLE (There should be one instance of fence_azure_arm in Pacemaker configuration for ASCS HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+### Consider changing your App Service configuration to 64-bit
-### There should be one instance of fence_azure_arm in Pacemaker configuration for HANA DB HA setup
+We identified that your application is running in 32-bit mode and its memory is reaching the 2 GB limit. Consider switching to 64-bit processes so you can take advantage of the extra memory available in your Web Worker role. This action triggers a web app restart, so schedule accordingly.
-fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure that there is one instance of fence_azure_arm in the pacemaker configuration for HANA DB HA setup. This is applicable if you are using Azure fence agent for fencing with either managed identity or service principal.
+Learn more about [App service 32-bit limitations](/troubleshoot/azure/app-service/web-apps-performance-faqs#i-see-the-message-worker-process-requested-recycle-due-to-percent-memory-limit-how-do-i-address-this-issue).
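The switch to 64-bit can also be scripted. A hedged sketch using the Azure CLI (`my-app` and `my-rg` are placeholder names; note that the change restarts the web app):

```azurecli
# Switch the app's worker process from 32-bit to 64-bit (placeholder names).
# This triggers a web app restart, so schedule accordingly.
az webapp config set --name my-app --resource-group my-rg --use-32bit-worker-process false
```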
-Learn more about [Database Instance - FenceAzureArmSuseHDB (There should be one instance of fence_azure_arm in Pacemaker configuration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+### Upgrade your Azure Fluid Relay client library
-### Ensure the softdog module is loaded in for Pacemaker in HA enabled HANA DB in SAP workloads
+You have recently invoked the Azure Fluid Relay service with an old client library. Your Azure Fluid Relay client library must now be upgraded to the latest version to ensure your application remains operational. Upgrading provides the most up-to-date functionality and enhancements in performance and stability. For more information on the latest version to use and how to upgrade, see the following article.
-The softdog timer is loaded as a kernel module in linux OS. This timer triggers a system reset if it detects that the system has hung. First ensure that you created the softdog configuration file, then load the softdog module in the Pacemaker configuration for HANA DB HA setup.
+Learn more about [FluidRelay Server - UpgradeClientLibrary (Upgrade your Azure Fluid Relay client library)](https://github.com/microsoft/FluidFramework).
+
+### Consider upgrading the hosting plan of the Static Web App(s) in this subscription to Standard SKU
+
+The combined bandwidth used by all the Free SKU Static Web Apps in this subscription is exceeding the monthly limit of 100 GB. Consider upgrading these apps to the Standard SKU to avoid throttling.
+
+Learn more about [Static Web App - StaticWebAppsUpgradeToStandardSKU (Consider upgrading the hosting plan of the Static Web App(s) in this subscription to Standard SKU.)](https://azure.microsoft.com/pricing/details/app-service/static/).
-Learn more about [Database Instance - SoftdogModuleSuseHDB (Ensure the softdog module is loaded in for Pacemaker in HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
## Next steps
ai-services Cognitive Services Encryption Keys Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Encryption/cognitive-services-encryption-keys-portal.md
description: Learn how to use the Azure portal to configure customer-managed keys with Azure Key Vault. Customer-managed keys enable you to create, rotate, disable, and revoke access controls. -+ Last updated 04/07/2021
ai-services Coco Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/coco-verification.md
description: Use a Python script to verify your COCO file for custom model train
-+ Last updated 03/21/2023
ai-services Migrate From Custom Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/migrate-from-custom-vision.md
description: Learn how to generate an annotation file from an old Custom Vision
-+ Last updated 02/06/2023
ai-services Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/mitigate-latency.md
description: Learn how to mitigate latency when using the Face service.
-+ Last updated 11/07/2021
ai-services Model Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/model-customization.md
description: Learn how to create and train a custom model to do image classifica
-+ Last updated 02/06/2023
ai-services Azure Container Instance Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/azure-container-instance-recipe.md
-+ Last updated 12/18/2020
ai-services Container Reuse Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/container-reuse-recipe.md
-+ Last updated 10/28/2021
ai-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/disconnected-containers.md
description: Learn how to run Azure AI services Docker containers disconnected f
-+ Last updated 07/28/2023
ai-services Docker Compose Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/docker-compose-recipe.md
-+ Last updated 10/29/2020
ai-services Encryption Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/encryption-data-at-rest.md
description: Learn how the Language service encrypts your data when it's persist
-+ Last updated 08/08/2022
ai-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/how-to/use-containers.md
description: Use Docker containers for the summarization API to summarize text,
--+ Last updated 08/15/2023 keywords: on-premises, Docker, container
-# Use summarization Docker containers on-premises
+# Use summarization Docker containers on-premises
Containers enable you to host the Summarization API on your own infrastructure. If you have security or data governance requirements that can't be fulfilled by calling Summarization remotely, then containers might be a good option.
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
Azure OpenAI Service is powered by a diverse set of models with different capabi
- `gpt-4` - `gpt-4-32k`
-The `gpt-4` model supports 8192 max input tokens and the `gpt-4-32k` model supports up to 32,768 tokens.
+You can see the token context length supported by each model in the [model summary table](#model-summary-table-and-region-availability).
## GPT-3.5
GPT-3.5 models can understand and generate natural language or code. The most ca
- `gpt-35-turbo-16k` - `gpt-35-turbo-instruct`
-The `gpt-35-turbo` model supports 4096 max input tokens and the `gpt-35-turbo-16k` model supports up to 16,384 tokens. `gpt-35-turbo-instruct` supports 4097 max input tokens.
+You can see the token context length supported by each model in the [model summary table](#model-summary-table-and-region-availability).
To learn more about how to interact with GPT-3.5 Turbo and the Chat Completions API check out our [in-depth how-to](../how-to/chatgpt.md).
GPT-4 and GPT-4-32k are now available to all Azure OpenAI Service customers. Av
These models can only be used with the Chat Completion API.
+GPT-4 version 0314 is the first version of the model released. Version 0613 is the second version of the model and adds function calling support.
+ | Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | | | | | | |
-| `gpt-4` <sup>2</sup> (0314) | East US<sup>1</sup>, France Central<sup>1</sup> | N/A | 8,192 | September 2021 |
-| `gpt-4-32k` <sup>2</sup> (0314) | East US<sup>1</sup>, France Central<sup>1</sup> | N/A | 32,768 | September 2021 |
-| `gpt-4` (0613) | Australia East<sup>1</sup>, Canada East, East US<sup>1</sup>, East US 2<sup>1</sup>, France Central<sup>1</sup>, Japan East<sup>1</sup>, Sweden Central, Switzerland North, UK South<sup>1</sup> | N/A | 8,192 | September 2021 |
-| `gpt-4-32k` (0613) | Australia East<sup>1</sup>, Canada East, East US<sup>1</sup>, East US 2<sup>1</sup>, France Central<sup>1</sup>, Japan East<sup>1</sup>, Sweden Central, Switzerland North, UK South<sup>1</sup> | N/A | 32,768 | September 2021 |
+| `gpt-4` <sup>2</sup> (0314) | East US<sup>1</sup>, France Central<sup>1</sup> | N/A<sup>3</sup> | 8,192 | September 2021 |
+| `gpt-4-32k` <sup>2</sup> (0314) | East US<sup>1</sup>, France Central<sup>1</sup> | N/A<sup>3</sup> | 32,768 | September 2021 |
+| `gpt-4` (0613) | Australia East<sup>1</sup>, Canada East, East US<sup>1</sup>, East US 2<sup>1</sup>, France Central<sup>1</sup>, Japan East<sup>1</sup>, Sweden Central, Switzerland North, UK South<sup>1</sup> | N/A<sup>3</sup> | 8,192 | September 2021 |
+| `gpt-4-32k` (0613) | Australia East<sup>1</sup>, Canada East, East US<sup>1</sup>, East US 2<sup>1</sup>, France Central<sup>1</sup>, Japan East<sup>1</sup>, Sweden Central, Switzerland North, UK South<sup>1</sup> | N/A<sup>3</sup> | 32,768 | September 2021 |
<sup>1</sup> Due to high demand, availability is limited in the region<br> <sup>2</sup> Version `0314` of gpt-4 and gpt-4-32k will be retired no earlier than July 5, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior.<br>
+<sup>3</sup> Fine-tuning is not supported for GPT-4 models.
### GPT-3.5 models GPT-3.5 Turbo is used with the Chat Completion API. GPT-3.5 Turbo (0301) can also be used with the Completions API. GPT3.5 Turbo (0613) only supports the Chat Completions API.
+GPT-3.5 Turbo version 0301 is the first version of the model released. Version 0613 is the second version of the model and adds function calling support.
+ | Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | | | | - | -- | - | | `gpt-35-turbo`<sup>1</sup> (0301) | East US, France Central, South Central US, UK South, West Europe | N/A | 4,096 | Sep 2021 |
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/embeddings.md
Our embedding models may be unreliable or pose social risks in certain cases, an
* Learn more about using Azure OpenAI and embeddings to perform document search with our [embeddings tutorial](../tutorials/embeddings.md). * Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
-* Store your embeddings and perform vector (similarity) search using [Azure Cosmos DB for MongoDB vCore](../../../cosmos-db/mongodb/vcore/vector-search.md) or [Azure Cosmos DB for NoSQL](../../../cosmos-db/rag-data-openai.md)
+* Store your embeddings and perform vector (similarity) search using your choice of Azure service:
+ * [Azure Cognitive Search](../../../search/vector-search-overview.md)
+ * [Azure Cosmos DB for MongoDB vCore](../../../cosmos-db/mongodb/vcore/vector-search.md)
+ * [Azure Cosmos DB for NoSQL](../../../cosmos-db/vector-search.md)
+ * [Azure Cosmos DB for PostgreSQL](../../../cosmos-db/postgresql/howto-use-pgvector.md)
+ * [Azure Cache for Redis](../../../azure-cache-for-redis/cache-tutorial-vector-similarity.md)
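Whichever store you choose, retrieval ultimately ranks stored vectors by similarity to a query embedding. A minimal sketch of the underlying cosine-similarity ranking, in pure Python with made-up three-dimensional vectors standing in for real embedding output:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors in place of embeddings returned by the embeddings API
query = [0.1, 0.2, 0.7]
doc_a = [0.1, 0.21, 0.69]   # similar to the query
doc_b = [0.9, -0.3, 0.05]   # dissimilar

# Rank documents by similarity to the query, most similar first
ranked = sorted([("doc_a", doc_a), ("doc_b", doc_b)],
                key=lambda kv: cosine_similarity(query, kv[1]),
                reverse=True)
print(ranked[0][0])  # → doc_a
```

The vector stores listed above perform the same ranking at scale, using approximate nearest-neighbor indexes instead of a linear scan.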
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/embeddings.md
If you created an OpenAI resource solely for completing this tutorial and want t
Learn more about Azure OpenAI's models: > [!div class="nextstepaction"] > [Azure OpenAI Service models](../concepts/models.md)
-* Store your embeddings and perform vector (similarity) search using [Azure Cosmos DB for MongoDB vCore](../../../cosmos-db/mongodb/vcore/vector-search.md) or [Azure Cosmos DB for NoSQL](../../../cosmos-db/rag-data-openai.md)
+* Store your embeddings and perform vector (similarity) search using your choice of Azure service:
+ * [Azure Cognitive Search](../../../search/vector-search-overview.md)
+ * [Azure Cosmos DB for MongoDB vCore](../../../cosmos-db/mongodb/vcore/vector-search.md)
+ * [Azure Cosmos DB for NoSQL](../../../cosmos-db/vector-search.md)
+ * [Azure Cosmos DB for PostgreSQL](../../../cosmos-db/postgresql/howto-use-pgvector.md)
+ * [Azure Cache for Redis](../../../azure-cache-for-redis/cache-tutorial-vector-similarity.md)
ai-services Customize Pronunciation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/customize-pronunciation.md
Title: Structured text phonetic pronunciation data
description: Use phonemes to customize pronunciation of words in Speech to text. -+ Last updated 05/08/2022
ai-services Migrate V3 1 To V3 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migrate-v3-1-to-v3-2.md
description: This document helps developers migrate code from v3.1 to v3.2 of th
--+ Last updated 09/15/2023
ai-services Whisper Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/whisper-overview.md
description: In this article, you learn about the Whisper model from OpenAI that
--+ Last updated 09/15/2023
ai-services Translator Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translator-disconnected-containers.md
description: Learn how to run Azure AI Translator containers in disconnected env
-+ Last updated 07/28/2023
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
The following YAML creates a pod that uses the persistent volume claim *my-azure
```yaml kind: Pod
-apiVersion: v1
-metadata:
- name: mypod
-spec:
- containers:
- - name: mypod
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: /mnt/azure
- name: volume
- volumes:
- - name: volume
- persistentVolumeClaim:
- claimName: my-azurefile
+ apiVersion: v1
+ metadata:
+ name: mypod
+ spec:
+ containers:
+ - name: mypod
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - mountPath: /mnt/azure
+ name: volume
+ volumes:
+ - name: volume
+ persistentVolumeClaim:
+ claimName: my-azurefile
``` 2. Create the pod using the [`kubectl apply`][kubectl-apply] command.
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Files on Azure Kub
description: Learn how to use the Container Storage Interface (CSI) driver for Azure Files in an Azure Kubernetes Service (AKS) cluster. Previously updated : 09/12/2023 Last updated : 10/07/2023 # Use Azure Files Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
A storage class is used to define how an Azure file share is created. A storage
* **Premium_ZRS**: Premium zone-redundant storage > [!NOTE]
-> Azure Files supports Azure Premium Storage. The minimum premium file share capacity is 100 GiB.
+> Azure Files supports Azure Premium file shares. The minimum premium file share capacity is 100 GiB. We recommend using Azure Premium file shares instead of Standard file shares because Premium file shares offer higher-performance, low-latency disk support for I/O-intensive workloads.
When you use storage CSI drivers on AKS, there are two more built-in `StorageClasses` that uses the Azure Files CSI storage drivers. The other CSI storage classes are created with the cluster alongside the in-tree default storage classes.
The output of the commands resembles the following example:
[tag-resources]: ../azure-resource-manager/management/tag-resources.md [statically-provision-a-volume]: azure-csi-files-storage-provision.md#statically-provision-a-volume [azure-private-endpoint-dns]: ../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration
-[azure-netapp-files-mount-options-best-practices]: ../azure-netapp-files/performance-linux-mount-options.md#rsize-and-wsize
+[azure-netapp-files-mount-options-best-practices]: ../azure-netapp-files/performance-linux-mount-options.md#rsize-and-wsize
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
default 9m22s Normal Surge node/aks-nodepool1-96663640-vmss000002 Created a surg
... ```
-## Stop cluster upgrades automatically on API breaking changes (Preview)
-
+## Stop cluster upgrades automatically on API breaking changes
To stay within a supported Kubernetes version, you usually have to upgrade your cluster at least once per year and prepare for all possible disruptions. These disruptions include ones caused by API breaking changes, deprecations, and dependencies such as Helm and CSI. It can be difficult to anticipate these disruptions and migrate critical workloads without experiencing any downtime.
All of the following criteria must be met in order for the stop to occur:
* The upgrade operation is a Kubernetes minor version change for the cluster control plane. * The Kubernetes version you're upgrading to is 1.26 or later
-* If performed via REST, the upgrade operation uses a preview API version of `2023-01-02-preview` or later.
-* If performed via Azure CLI, the `aks-preview` CLI extension 0.5.154 or later must be installed.
* The last seen usage of deprecated APIs for the targeted version you're upgrading to must occur within 12 hours before the upgrade operation. AKS records usage hourly, so any usage of deprecated APIs within one hour isn't guaranteed to appear in the detection. * Even API usage that is actually watching for deprecated resources is covered here. Look at the [Verb][k8s-api] for the distinction.
You can also check past API usage by enabling [Container Insights][container-ins
### Bypass validation to ignore API changes > [!NOTE]
-> This method requires you to use the `aks-preview` Azure CLI extension version 0.5.134 or later. This method isn't recommended, as deprecated APIs in the targeted Kubernetes version may not work long term. We recommend to removing them as soon as possible after the upgrade completes.
+> This method requires Azure CLI version 2.53 or later, or the `aks-preview` Azure CLI extension version 0.5.134 or later. This method isn't recommended, because deprecated APIs in the targeted Kubernetes version may not work long term. We recommend removing them as soon as possible after the upgrade completes.
Bypass validation to ignore API breaking changes using the [`az aks update`][az-aks-update] command, specifying `enable-force-upgrade`, and setting the `upgrade-override-until` property to define the end of the window during which validation is bypassed. If no value is set, it defaults the window to three days from the current time. The date and time you specify must be in the future.
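For example, a sketch of the bypass command (cluster and resource group names are placeholders, and the override date must be in the future):

```azurecli
# Bypass deprecated-API validation until the specified time (placeholder names).
az aks update --name myAKSCluster --resource-group myResourceGroup \
  --enable-force-upgrade --upgrade-override-until 2023-12-01T00:00:00Z
```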
api-management Self Hosted Gateway Settings Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-settings-reference.md
Here is an overview of all configuration options:
| config.service.auth.azureAd.clientSecret | Secret of the Azure AD app to authenticate with. | Yes, when using Azure AD authentication (unless certificate is specified) | N/A | v2.3+ | | config.service.auth.azureAd.certificatePath | Path to certificate to authenticate with for the Azure AD app. | Yes, when using Azure AD authentication (unless secret is specified) | N/A | v2.3+ | | config.service.auth.azureAd.authority | Authority URL of Azure AD. | No | `https://login.microsoftonline.com` | v2.3+ |
+| config.service.auth.tokenAudience | Audience of token used for Azure AD authentication | No | `https://azure-api.net/configuration` | v2.3+ |
| config.service.endpoint.disableCertificateValidation | Defines if the self-hosted gateway should validate the server-side certificate of the Configuration API. It is recommended to use certificate validation, only disable for testing purposes and with caution as it can introduce security risk. | No | `false` | v2.0+ | The self-hosted gateway provides support for a few authentication options to integrate with the Configuration API which can be defined by using `config.service.auth`.
This guidance helps you provide the required information to define how to authen
| net.server.tls.ciphers.allowed-suites | Comma-separated list of ciphers to use for TLS connection between API client and the self-hosted gateway. | No | `TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_DHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA` | v2.0+ | | net.client.tls.ciphers.allowed-suites | Comma-separated list of ciphers to use for TLS connection between the self-hosted gateway and the backend. 
| No | `TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_DHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA` | v2.0+ |
+## Sovereign clouds
+
+Here is an overview of the settings that must be configured to work with sovereign clouds:
+
+| Name | Public | Azure China | US Government |
+|--|--|--|--|
+| config.service.auth.tokenAudience | `https://azure-api.net/configuration` (Default) | `https://azure-api.cn/configuration` | `https://azure-api.us/configuration` |
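For instance, a self-hosted gateway running in Azure China could override the token audience through its configuration. A hedged sketch as a Kubernetes ConfigMap fragment (the ConfigMap name is a placeholder):

```yaml
# Hypothetical ConfigMap fragment for a gateway in Azure China; the name is a placeholder.
apiVersion: v1
kind: ConfigMap
metadata:
  name: contoso-gateway-environment
data:
  config.service.auth.tokenAudience: "https://azure-api.cn/configuration"
```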
+ ## How to configure settings ### Kubernetes YAML file
app-service Configure Ssl App Service Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-app-service-certificate.md
By default, App Service certificates have a one-year validity period. Before and
> [!div class="mx-imgBorder"] > ![Screenshot of specified certificate's auto renewal settings.](./media/configure-ssl-certificate/auto-renew-app-service-cert.png)
-1. To manually renew the certificate instead, select **Manual Renew**. You can request to manually renew your certificate 60 days before expiration.
+1. To manually renew the certificate instead, select **Manual Renew**. You can request to manually renew your certificate 60 days before expiration, but [the maximum expiration date will be 397 days](https://www.godaddy.com/help/important-notification-about-ssl-offerings-9322).
1. After the renew operation completes, select **Sync**.
app-service Deploy Ci Cd Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-ci-cd-custom-container.md
App Service supports CI/CD integration with Azure Container Registry and Docker
When you enable this option, App Service adds a webhook to your repository in Azure Container Registry or Docker Hub. Your repository posts to this webhook whenever your selected image is updated with `docker push`. The webhook causes your App Service app to restart and run `docker pull` to get the updated image.
+> [!NOTE]
+>
+> To ensure the proper functioning of the webhook, it's essential to enable the **Basic Auth Publishing Credentials** option within your Web App. Failure to do so may result in a 401 unauthorized error for the webhook.
+> To verify whether **Basic Auth Publishing Credentials** is enabled, follow these steps:
+>
+> - Navigate to your Web App's **Configuration > General Settings**.
+> - Look for the **Platform Setting** section, where you will find the **Basic Auth Publishing Credentials** option.
 **For other private registries**, you can post to the webhook manually or as a step in a CI/CD pipeline. In **Webhook URL**, select the **Copy** button to get the webhook URL. ::: zone pivot="container-linux"
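A manual trigger can be as simple as an HTTP POST to the copied webhook URL. A sketch of a pipeline step (the environment variable name is a placeholder; keep the URL in a pipeline secret, since it may embed credentials):

```shell
# Hypothetical CI/CD step: trigger App Service to pull the updated image.
# WEBHOOK_URL holds the value copied from the portal's Webhook URL field.
curl -X POST "$WEBHOOK_URL"
```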
automation Extension Based Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/extension-based-hybrid-runbook-worker.md
Title: Troubleshoot extension-based Hybrid Runbook Worker issues in Azure Automation description: This article tells how to troubleshoot and resolve issues that arise with Azure Automation extension-based Hybrid Runbook Workers. Previously updated : 04/26/2023 Last updated : 09/06/2023
To help troubleshoot issues with extension-based Hybrid Runbook Workers:
Logs are in `C:\HybridWorkerExtensionLogs`. - For Linux: Logs are in folders </br>`/var/log/azure/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux` and `/home/hweautomation`. +
+### Unable to update Az modules while using the Hybrid Worker
+
+#### Issue
+
+Hybrid Runbook Worker jobs failed because the worker was unable to import Az modules.
+
+#### Resolution
+
+As a workaround, you can follow these steps:
+
+1. Go to the folder: `C:\Program Files\Microsoft Monitoring Agent\Agent\AzureAutomation\7.3.1722.0\HybridAgent`
+1. Edit the file with the name *Orchestrator.Sandbox.exe.config*
+1. Add the following lines inside the `<assemblyBinding>` tags:
+```xml
+<dependentAssembly>
+ <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
+ <bindingRedirect oldVersion="0.0.0.0-13.0.0.0" newVersion="13.0.0.0" />
+</dependentAssembly>
+```
+ ### Scenario: Job failed to start as the Hybrid Worker was not available when the scheduled job started #### Issue
automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/hybrid-runbook-worker.md
This article provides information on troubleshooting and resolving issues with A
The Hybrid Runbook Worker depends on an agent to communicate with your Azure Automation account to register the worker, receive runbook jobs, and report status. For Windows, this agent is the Log Analytics agent for Windows. For Linux, it's the Log Analytics agent for Linux. +
+### Unable to update Az modules while using the Hybrid Worker
+
+#### Issue
+
+Hybrid Runbook Worker jobs failed because the worker was unable to import Az modules.
+
+#### Resolution
+
+As a workaround, you can follow these steps:
+
+1. Go to the folder: `C:\Program Files\Microsoft Monitoring Agent\Agent\AzureAutomation\7.3.1722.0\HybridAgent`
+1. Edit the file with the name *Orchestrator.Sandbox.exe.config*
+1. Add the following lines inside the `<assemblyBinding>` tags:
+```xml
+<dependentAssembly>
+ <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
+ <bindingRedirect oldVersion="0.0.0.0-13.0.0.0" newVersion="13.0.0.0" />
+</dependentAssembly>
+```
+
+> [!NOTE]
+> If you restart the MMA or the server (for example, by enabling a solution or by patching), the modified file is replaced with the original. In both scenarios, we recommend that you reapply the changes.
++ ### <a name="runbook-execution-fails"></a>Scenario: Runbook execution fails #### Issue
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Title: Application Insights overview description: Learn how Application Insights in Azure Monitor provides performance management and usage tracking of your live web application. Previously updated : 05/12/2023 Last updated : 10/09/2023
-
+ # Application Insights overview
-Application Insights is an extension of [Azure Monitor](../overview.md) and provides application performance monitoring (APM) features. APM tools are useful to monitor applications from development, through test, and into production in the following ways:
+Azure Monitor Application Insights, a feature of [Azure Monitor](../overview.md), excels in Application Performance Management (APM) for live web applications.
-- *Proactively* understand how an application is performing.-- *Reactively* review application execution data to determine the cause of an incident.
+## Experiences
-Along with collecting [metrics](standard-metrics.md) and application [telemetry](data-model-complete.md) data, which describe application activities and health, you can use Application Insights to collect and store application [trace logging data](asp-net-trace-logs.md).
+Application Insights provides many experiences to enhance the performance, reliability, and quality of your applications.
-The [log trace](asp-net-trace-logs.md) is associated with other telemetry to give a detailed view of the activity. Adding trace logging to existing apps only requires providing a destination for the logs. You rarely need to change the logging framework.
+### Investigate
+- [Application dashboard](overview-dashboard.md): An at-a-glance assessment of your application's health and performance.
+- [Application map](app-map.md): A visual overview of application architecture and components' interactions.
+- [Live metrics](live-stream.md): A real-time analytics dashboard for insight into application activity and performance.
+- [Transaction search](diagnostic-search.md): Trace and diagnose transactions to identify issues and optimize performance.
+- [Availability view](availability-overview.md): Proactively monitor and test the availability and responsiveness of application endpoints.
+- Performance view: Review application performance metrics and potential bottlenecks.
+- Failures view: Identify and analyze failures in your application to minimize downtime.
-Application Insights provides other features including, but not limited to:
+### Monitoring
+- [Alerts](../alerts/alerts-overview.md): Monitor a wide range of aspects of your application and trigger various actions.
+- [Metrics](../essentials/metrics-getting-started.md): Dive deep into metrics data to understand usage patterns and trends.
+- [Diagnostic settings](../essentials/diagnostic-settings.md): Configure streaming export of platform logs and metrics to the destination of your choice.
+- [Logs](../logs/log-analytics-overview.md): Retrieve, consolidate, and analyze all data collected into Azure Monitoring Logs.
+- [Workbooks](../visualize/workbooks-overview.md): Create interactive reports and dashboards that visualize application monitoring data.
-- [Live Metrics](live-stream.md): Observe activity from your deployed application in real time with no effect on the host environment.-- [Availability](availability-overview.md): Also known as synthetic transaction monitoring. Probe the external endpoints of your applications to test the overall availability and responsiveness over time.-- [GitHub or Azure DevOps integration](release-and-work-item-insights.md?tabs=work-item-integration): Create [GitHub](/training/paths/github-administration-products/) or [Azure DevOps](/azure/devops/) work items in the context of Application Insights data.-- [Usage](usage-overview.md): Understand which features are popular with users and how users interact and use your application.-- [Smart detection](proactive-diagnostics.md): Detect failures and anomalies automatically through proactive telemetry analysis.
+### Usage
+- [Users, sessions, and events](usage-segmentation.md): Determine when, where, and how users interact with your web app.
+- [Funnels](usage-funnels.md): Analyze conversion rates to identify where users progress or drop off in the funnel.
+- [Flows](usage-flows.md): Visualize user paths on your site to identify high engagement areas and exit points.
+- [Cohorts](usage-cohorts.md): Group users by shared characteristics to simplify trend identification, segmentation, and performance troubleshooting.
-Application Insights supports [distributed tracing](distributed-tracing-telemetry-correlation.md), which is also known as distributed component correlation. This feature allows [searching for](diagnostic-search.md) and [visualizing](transaction-diagnostics.md) an end-to-end flow of a specific execution or transaction. The ability to trace activity from end to end is important for applications that were built as distributed components or [microservices](/azure/architecture/guide/architecture-styles/microservices).
+### Code analysis
+- [Profiler](../profiler/profiler-overview.md): Capture, identify, and view performance traces for your application.
+- [Code optimizations](../insights/code-optimizations.md): Harness AI to create better and more efficient applications.
+- [Snapshot debugger](../snapshot-debugger/snapshot-debugger.md): Automatically collect debug snapshots when exceptions occur in .NET applications.
-The [Application Map](app-map.md) allows a high-level, top-down view of the application architecture and at-a-glance visual references to component health and responsiveness.
+## Logic model
-To understand the number of Application Insights resources required to cover your application or components across environments, see the [Application Insights deployment planning guide](separate-resources.md).
+The logic model diagram visualizes components of Application Insights and how they interact.
:::image type="content" source="media/app-insights-overview/app-insights-overview-blowout.svg" alt-text="Diagram that shows the path of data as it flows through the layers of the Application Insights service." border="false" lightbox="media/app-insights-overview/app-insights-overview-blowout.svg":::
-Firewall settings must be adjusted for data to reach ingestion endpoints. For more information, see [IP addresses used by Azure Monitor](./ip-addresses.md).
-
-## How do I use Application Insights?
-
-Application Insights is enabled through either [autoinstrumentation](codeless-overview.md) (agent) or by adding the [Application Insights SDK](sdk-support-guidance.md) or [Azure Monitor OpenTelemetry Distro](opentelemetry-enable.md) to your application code. [Many languages](#supported-languages) are supported. The applications could be on Azure, on-premises, or hosted by another cloud. To figure out which type of instrumentation is best for you, see [How do I instrument an application?](#how-do-i-instrument-an-application).
-
-The Application Insights agent or SDK preprocesses telemetry and metrics before sending the data to Azure. Then it's ingested and processed further before it's stored in Azure Monitor Logs (Log Analytics). For this reason, an Azure account is required to use Application Insights.
-
-The easiest way to get started consuming Application insights is through the Azure portal and the built-in visual experiences. Advanced users can [query the underlying data](../logs/log-query-overview.md) directly to [build custom visualizations](tutorial-app-dashboards.md) through Azure Monitor [dashboards](overview-dashboard.md) and [workbooks](../visualize/workbooks-overview.md).
-
-Consider starting with the [Application Map](app-map.md) for a high-level view. Use the [Search](diagnostic-search.md) experience to quickly narrow down telemetry and data by type and date-time. Or you can search within data (for example, with Log Traces) and filter to a given correlated operation of interest.
-
-Two views are especially useful:
-
-- [Performance view](tutorial-performance.md): Get deep insights into how your application or API and downstream dependencies are performing. You can also find a representative sample to [explore end to end](transaction-diagnostics.md).
-- [Failures view](tutorial-runtime-exceptions.md): Understand which components or actions are generating failures and triage errors and exceptions. The built-in views are helpful to track application health proactively and for reactive root-cause analysis.
-
-[Create Azure Monitor alerts](tutorial-alert.md) to signal potential issues in case your application or components parts deviate from the established baseline.
-
-Application Insights pricing is based on consumption. You only pay for what you use. For more information on pricing, see:
-
-- [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/)
-- [Optimize costs in Azure Monitor](../best-practices-cost.md)
-
-## How do I instrument an application?
-
-[Autoinstrumentation](codeless-overview.md) is the preferred instrumentation method. It requires no developer investment and eliminates future overhead related to [updating the SDK](sdk-support-guidance.md). It's also the only way to instrument an application in which you don't have access to the source code.
-
-You only need to install the Application Insights SDK if:
-
-- You require [custom events and metrics](api-custom-events-metrics.md).
-- You require control over the flow of telemetry.
-- [Autoinstrumentation](codeless-overview.md) isn't available, typically because of language or platform limitations.
-
-To use the SDK, you install a small instrumentation package in your app and then instrument the web app, any background components, and JavaScript within the webpages. The app and its components don't have to be hosted in Azure.
-
-The instrumentation monitors your app and directs the telemetry data to an Application Insights resource by using a unique token. The effect on your app's performance is small. Tracking calls are nonblocking and batched to be sent in a separate thread.
-
-### [.NET](#tab/net)
-
-Integrated autoinstrumentation is available for [Azure App Service .NET](azure-web-apps-net.md), [Azure App Service .NET Core](azure-web-apps-net-core.md), [Azure Functions](../../azure-functions/functions-monitoring.md), and [Azure Virtual Machines](azure-vm-vmss-apps.md).
-
-The [Azure Monitor Application Insights agent](application-insights-asp-net-agent.md) is available for workloads running in on-premises virtual machines.
-
-For a detailed view of all autoinstrumentation supported environments, languages, and resource providers, see [What is autoinstrumentation for Azure Monitor Application Insights?](codeless-overview.md#supported-environments-languages-and-resource-providers).
-
-For other scenarios, the [Application Insights SDK](/dotnet/api/overview/azure/insights) is required.
-
-An [OpenTelemetry](opentelemetry-enable.md?tabs=net) offering is also available.
-
-### [Java](#tab/java)
-
-Integrated autoinstrumentation is available for Java Apps hosted on [Azure App Service](azure-web-apps-java.md) and [Azure Functions](monitor-functions.md).
-
-Autoinstrumentation is available for any environment by using [Azure Monitor OpenTelemetry-based autoinstrumentation for Java applications](opentelemetry-enable.md?tabs=java).
-
-### [Node.js](#tab/nodejs)
-
-Autoinstrumentation is available for [Azure App Service](azure-web-apps-nodejs.md).
-
-The [Application Insights SDK](nodejs.md) is an alternative. We also have an [OpenTelemetry](opentelemetry-enable.md?tabs=nodejs) offering available.
-
-### [Python](#tab/python)
-
-Python applications can be monitored by using the [Azure Monitor OpenTelemetry Distro](opentelemetry-enable.md?tabs=python).
-
-### [JavaScript](#tab/javascript)
-
-JavaScript requires the [Application Insights SDK](javascript.md).
--
+> [!Note]
+> Firewall settings must be adjusted for data to reach ingestion endpoints. For more information, see [IP addresses used by Azure Monitor](./ip-addresses.md).
## Supported languages

This section outlines supported scenarios.
+For detailed information about instrumenting applications to enable Application Insights, see [data collection basics](opentelemetry-overview.md).
+
### Automatic instrumentation (enable without code changes)

* [Autoinstrumentation supported environments and languages](codeless-overview.md#supported-environments-languages-and-resource-providers)
We're constantly assessing opportunities to expand our support for other languages.
This section provides answers to common questions.
+### How do I instrument an application?
+
+For detailed information about instrumenting applications to enable Application Insights, see [data collection basics](opentelemetry-overview.md).
+
+### How do I use Application Insights?
+
+After enabling Application Insights by [instrumenting an application](opentelemetry-overview.md), we suggest first checking out [Live metrics](live-stream.md) and the [Application map](app-map.md).
+
### What telemetry does Application Insights collect?

From server web apps:
-
+
* HTTP requests.
* [Dependencies](./asp-net-dependencies.md). Calls to SQL databases, HTTP calls to external services, Azure Cosmos DB, Azure Table Storage, Azure Blob Storage, and Azure Queue Storage.
* [Exceptions](./asp-net-exceptions.md) and stack traces.
From other sources, if you configure them:
* [Log Analytics](../logs/data-collector-api.md)
* [Logstash](../logs/data-collector-api.md)
+
+### How many Application Insights resources should I deploy?
+To understand the number of Application Insights resources required to cover your application or components across environments, see the [Application Insights deployment planning guide](separate-resources.md).
+
### How can I manage Application Insights resources with PowerShell?

You can [write PowerShell scripts](./powershell.md) by using Azure Resource Manager to:
We recommend that you use our SDKs and use the [SDK API](./api-custom-events-met
Most Application Insights data has a latency of under 5 minutes. Some data can take longer, which is typical for larger log files. See the [Application Insights service-level agreement](https://azure.microsoft.com/support/legal/sla/application-insights/v1_2/).
-## Troubleshooting
-
-Review dedicated [troubleshooting articles](/troubleshoot/azure/azure-monitor/welcome-azure-monitor) for Application Insights.
-
## Help and support

### Azure technical support
Post coding questions to [Stack Overflow](https://stackoverflow.com/questions/ta
Leave product feedback for the engineering team in the [Feedback Community](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0).
+### Troubleshooting
+
+Review dedicated [troubleshooting articles](/troubleshoot/azure/azure-monitor/welcome-azure-monitor) for Application Insights.
+
## Next steps
+- [Data collection basics](opentelemetry-overview.md)
- [Create a resource](create-workspace-resource.md)
-- [Autoinstrumentation overview](codeless-overview.md)
-- [Overview dashboard](overview-dashboard.md)
-- [Availability overview](availability-overview.md)
+- [Automatic instrumentation overview](codeless-overview.md)
+- [Application dashboard](overview-dashboard.md)
- [Application Map](app-map.md)
+- [Live metrics](live-stream.md)
+- [Transaction search](diagnostic-search.md)
+- [Availability overview](availability-overview.md)
+- [Users, sessions, and events](usage-segmentation.md)
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
In some scenarios, combining this data can result in cost savings. Typically, th
## Workspaces with Microsoft Defender for Cloud
-[Microsoft Defender for Servers (part of Defender for Cloud)](../../security-center/index.yml) [bills by the number of monitored services](https://azure.microsoft.com/pricing/details/azure-defender/). It provides 500 MB per server per day of data allocation that's applied to the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security):
+[Microsoft Defender for Servers (part of Defender for Cloud)](../../security-center/index.yml) [bills by the number of monitored services](https://azure.microsoft.com/pricing/details/defender-for-cloud/). It provides 500 MB per server per day of data allocation that's applied to the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security):
-- [WindowsEvent](/azure/azure-monitor/reference/tables/windowsevent)
- [SecurityAlert](/azure/azure-monitor/reference/tables/securityalert)
- [SecurityBaseline](/azure/azure-monitor/reference/tables/securitybaseline)
- [SecurityBaselineSummary](/azure/azure-monitor/reference/tables/securitybaselinesummary)
- [SecurityDetection](/azure/azure-monitor/reference/tables/securitydetection)
- [SecurityEvent](/azure/azure-monitor/reference/tables/securityevent)
- [WindowsFirewall](/azure/azure-monitor/reference/tables/windowsfirewall)
-- [LinuxAuditLog](/azure/azure-monitor/reference/tables/linuxauditlog)
- [SysmonEvent](/azure/azure-monitor/reference/tables/sysmonevent)
- [ProtectionStatus](/azure/azure-monitor/reference/tables/protectionstatus)
-- [Update](/azure/azure-monitor/reference/tables/update) and [UpdateSummary](/azure/azure-monitor/reference/tables/updatesummary) when the Update Management solution isn't running in the workspace or solution targeting is enabled. See [What data types are included in the 500-MB data daily allowance?](../../defender-for-cloud/faq-defender-for-servers.yml).
+- [Update](/azure/azure-monitor/reference/tables/update) and [UpdateSummary](/azure/azure-monitor/reference/tables/updatesummary) when the Update Management solution isn't running in the workspace or solution targeting is enabled.
+
+If the workspace is in the legacy Per Node pricing tier, the Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data. To learn more about how Microsoft Sentinel customers can benefit, see the [Microsoft Sentinel pricing page](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).
The count of monitored servers is calculated on an hourly granularity. The daily data allocation contributions from each monitored server are aggregated at the workspace level. If the workspace is in the legacy Per Node pricing tier, the Microsoft Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
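As a worked example of the hourly aggregation described above, the sketch below prorates the 500-MB daily allocation by server-hours. The exact proration formula is an assumption for illustration; only the 500 MB per server per day figure comes from the article.

```python
MB_PER_SERVER_PER_DAY = 500  # Defender for Servers data allocation per the article

def daily_allocation_mb(servers_per_hour):
    """Estimate a workspace's daily security-data allocation in MB from
    24 hourly monitored-server counts (proration assumed, for illustration)."""
    if len(servers_per_hour) != 24:
        raise ValueError("expected one count per hour of the day")
    server_hours = sum(servers_per_hour)
    return server_hours * MB_PER_SERVER_PER_DAY / 24

# 10 servers monitored around the clock contribute 10 * 500 MB = 5000 MB.
```

A server monitored for only part of the day contributes proportionally fewer server-hours under this sketch.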
azure-monitor Create Custom Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/create-custom-table.md
To delete a table using PowerShell:
You can modify the schema of custom tables and add custom columns to, or delete columns from, a standard table.

> [!NOTE]
-> Column names must start with a letter and can consist of up to 45 alphanumeric characters and underscores (`_`). The following are reserved column names: `Type`, `TenantId`, `resource`, `resourceid`, `resourcename`, `resourcetype`, `subscriptionid`, `tenanted`.
+> Column names must start with a letter and can consist of up to 45 alphanumeric characters and underscores (`_`). `_ResourceId`, `id`, `_SubscriptionId`, `TenantId`, `Type`, `UniqueId`, and `Title` are reserved column names.
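As a quick sanity check, the naming rules in this note can be expressed as a small validation helper. This is an illustrative sketch, not part of any Azure SDK; the function name and the reading of the 45-character limit as a total length are assumptions.

```python
import re

# Reserved column names listed in the note above.
RESERVED_COLUMNS = {"_ResourceId", "id", "_SubscriptionId",
                    "TenantId", "Type", "UniqueId", "Title"}

def is_valid_column_name(name: str) -> bool:
    """Return True if a custom column name follows the documented rules:
    starts with a letter, is at most 45 characters of letters, digits,
    and underscores, and isn't a reserved name."""
    if name in RESERVED_COLUMNS:
        return False
    return re.fullmatch(r"[A-Za-z][A-Za-z0-9_]{0,44}", name) is not None
```

For example, `is_valid_column_name("MyColumn_CF")` returns `True`, while a reserved name such as `TenantId` or a name starting with a digit returns `False`.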
# [Portal](#tab/azure-portal-3)
azure-monitor Custom Logs Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-logs-migrate.md
The API call enables all DCR-based custom logs features on the table. The Data C
> [!IMPORTANT]
> - Column names must start with a letter and can consist of up to 45 alphanumeric characters and underscores (`_`).
-> - The following are reserved column names: `Type`, `TenantId`, `resource`, `resourceid`, `resourcename`, `resourcetype`, `subscriptionid`, `tenanted`.
+> - `_ResourceId`, `id`, `_SubscriptionId`, `TenantId`, `Type`, `UniqueId`, and `Title` are reserved column names.
> - Custom columns you add to an Azure table must have the suffix `_CF`. > - If you update the table schema in your Log Analytics workspace, you must also update the input stream definition in the data collection rule to ingest data into new or modified columns.
azure-monitor Ingest Logs Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/ingest-logs-event-hub.md
To create a custom table into which to ingest events, in the Azure portal:
> [!IMPORTANT]
> - Column names must start with a letter and can consist of up to 45 alphanumeric characters and underscores (`_`).
-> - The following are reserved column names: `Type`, `TenantId`, `resource`, `resourceid`, `resourcename`, `resourcetype`, `subscriptionid`, `tenanted`.
+> - `_ResourceId`, `id`, `_SubscriptionId`, `TenantId`, `Type`, `UniqueId`, and `Title` are reserved column names.
> - Column names are case-sensitive. Make sure to use the correct case in your data collection rule.

## Create a data collection endpoint
With [managed identity](../../active-directory/managed-identities-azure-resource
:::image type="content" source="media/ingest-logs-event-hub/event-hub-add-role-assignment.png" lightbox="media/ingest-logs-event-hub/event-hub-add-role-assignment.png" alt-text="Screenshot that shows the Access control screen for the data collection rule.":::
-2. Select **Azure Event Hubs Data Receiver** and select **Next**.
+1. Select **Azure Event Hubs Data Receiver** and select **Next**.
:::image type="content" source="media/ingest-logs-event-hub/event-hub-data-receiver-role-assignment.png" lightbox="media/ingest-logs-event-hub/event-hub-data-receiver-role-assignment.png" alt-text="Screenshot that shows the Add Role Assignment screen for the event hub with the Azure Event Hubs Data Receiver role highlighted.":::
-1. Select **Managed identity** for **Assign access to** and click **Select members**. Select **Data collection rule**, search your DCR by name and click **Select**.
+1. Select **Managed identity** for **Assign access to** and click **Select members**. Select **Data collection rule**, search for your data collection rule by name, and click **Select**.
-[ ![Screenshot of how to assign access to managed identity.](media/ingest-logs-event-hub/assign-access-to-managed-identity.png) ](media/ingest-logs-event-hub/assign-access-to-managed-identity.png#lightbox)
+ :::image type="content" source="media/ingest-logs-event-hub/assign-access-to-managed-identity.png" lightbox="media/ingest-logs-event-hub/assign-access-to-managed-identity.png" alt-text="Screenshot that shows how to assign access to managed identity.":::
-4. Select **Review + assign** and verify the details before saving your role assignment.
+1. Select **Review + assign** and verify the details before saving your role assignment.
:::image type="content" source="media/ingest-logs-event-hub/event-hub-add-role-assignment-save.png" lightbox="media/ingest-logs-event-hub/event-hub-add-role-assignment-save.png" alt-text="Screenshot that shows the Review and Assign tab of the Add Role Assignment screen.":::
azure-monitor Logs Ingestion Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-ingestion-api-overview.md
The following tables can receive data from the ingestion API.
| Azure tables | The Logs Ingestion API can send data to the following Azure tables. Other tables may be added to this list as support for them is implemented.<br><br>- [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog)<br>- [SecurityEvents](/azure/azure-monitor/reference/tables/securityevent)<br>- [Syslog](/azure/azure-monitor/reference/tables/syslog)<br>- [WindowsEvents](/azure/azure-monitor/reference/tables/windowsevent) |

> [!NOTE]
-> Column names must start with a letter and can consist of up to 45 alphanumeric characters and underscores (`_`). The following are reserved column names: `Type`, `TenantId`, `resource`, `resourceid`, `resourcename`, `resourcetype`, `subscriptionid`, `tenanted`. Custom columns you add to an Azure table must have the suffix `_CF`.
+> Column names must start with a letter and can consist of up to 45 alphanumeric characters and underscores (`_`). `_ResourceId`, `id`, `_SubscriptionId`, `TenantId`, `Type`, `UniqueId`, and `Title` are reserved column names. Custom columns you add to an Azure table must have the suffix `_CF`.
## Authentication
azure-monitor Private Link Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-design.md
In the following diagram, VNet1 uses the Open mode and VNet2 uses the Private On
![Diagram that shows mixed access modes.](./media/private-link-security/ampls-mixed-access-modes.png)

## Consider AMPLS limits
-The AMPLS object has the following limits:
-* A virtual network can connect to only *one* AMPLS object. That means the AMPLS object must provide access to all the Azure Monitor resources to which the virtual network should have access.
-* An AMPLS object can connect to 300 Log Analytics workspaces and 1,000 Application Insights components at most.
-* An Azure Monitor resource (workspace or Application Insights component or [data collection endpoint](../essentials/data-collection-endpoint-overview.md)) can connect to five AMPLSs at most.
-* An AMPLS object can connect to 10 private endpoints at most.
-> [!NOTE]
-> AMPLS resources created before December 1, 2021, support only 50 resources.
In the following diagram:

* Each virtual network connects to only *one* AMPLS object.
azure-monitor Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/service-limits.md
This article lists limits in different areas of Azure Monitor.
[!INCLUDE [monitoring-limits-application-insights](../../includes/application-insights-limits.md)]
+## Azure Monitor Private Link Scope (AMPLS)
++
## Next steps

- [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/)
batch Batch Compute Node Environment Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-compute-node-environment-variables.md
The command lines executed by tasks on compute nodes don't run under a shell. Th
## Environment variables
+> [!NOTE]
+> `AZ_BATCH_AUTHENTICATION_TOKEN` is deprecated and will be retired on September 30, 2024. See the [announcement](https://azure.microsoft.com/updates/azure-batch-task-authentication-token-will-be-retired-on-30-september-2024/) for details and alternative implementation.
++
| Variable name | Description | Availability | Example |
|--|--|--|--|
| AZ_BATCH_ACCOUNT_NAME | The name of the Batch account that the task belongs to. | All tasks. | mybatchaccount |
| AZ_BATCH_ACCOUNT_URL | The URL of the Batch account. | All tasks. | `https://myaccount.westus.batch.azure.com` |
| AZ_BATCH_APP_PACKAGE | A prefix of all the app package environment variables. For example, if Application "FOO" version "1" is installed onto a pool, the environment variable is AZ_BATCH_APP_PACKAGE_FOO_1 (on Linux) or AZ_BATCH_APP_PACKAGE_FOO#1 (on Windows). AZ_BATCH_APP_PACKAGE_FOO_1 points to the location that the package was downloaded to (a folder). When using the default version of the app package, use the AZ_BATCH_APP_PACKAGE environment variable without the version numbers. On Linux, if the application package name is "Agent-linux-x64" and the version is "1.1.46.0", the environment variable name is AZ_BATCH_APP_PACKAGE_agent_linux_x64_1_1_46_0, using underscores and lowercase. For more information, see [Execute the installed applications](batch-application-packages.md#execute-the-installed-applications). | Any task with an associated app package. Also available for all tasks if the node itself has application packages. | AZ_BATCH_APP_PACKAGE_FOO_1 (Linux) or AZ_BATCH_APP_PACKAGE_FOO#1 (Windows) |
-| AZ_BATCH_AUTHENTICATION_TOKEN | An authentication token that grants access to a limited set of Batch service operations. This environment variable is only present if the [authenticationTokenSettings](/rest/api/batchservice/task/add#authenticationtokensettings) are set when the [task is added](/rest/api/batchservice/task/add#request-body). The token value is used in the Batch APIs as credentials to create a Batch client, such as in the [BatchClient.Open() .NET API](/dotnet/api/microsoft.azure.batch.batchclient.open#Microsoft_Azure_Batch_BatchClient_Open_Microsoft_Azure_Batch_Auth_BatchTokenCredentials_). | All tasks. | OAuth2 access token |
+| AZ_BATCH_AUTHENTICATION_TOKEN | An authentication token that grants access to a limited set of Batch service operations. This environment variable is only present if the [authenticationTokenSettings](/rest/api/batchservice/task/add#authenticationtokensettings) are set when the [task is added](/rest/api/batchservice/task/add#request-body). The token value is used in the Batch APIs as credentials to create a Batch client, such as in the [BatchClient.Open() .NET API](/dotnet/api/microsoft.azure.batch.batchclient.open#Microsoft_Azure_Batch_BatchClient_Open_Microsoft_Azure_Batch_Auth_BatchTokenCredentials_). The token doesn't support private networking. | All tasks. | OAuth2 access token |
| AZ_BATCH_CERTIFICATES_DIR | A directory within the [task working directory](files-and-directories.md) in which certificates are stored for Linux compute nodes. This environment variable does not apply to Windows compute nodes. | All tasks. | /mnt/batch/tasks/workitems/batchjob001/job-1/task001/certs |
| AZ_BATCH_HOST_LIST | The list of nodes that are allocated to a [multi-instance task](batch-mpi.md) in the format `nodeIP,nodeIP`. | Multi-instance primary and subtasks. | `10.0.0.4,10.0.0.5` |
| AZ_BATCH_IS_CURRENT_NODE_MASTER | Specifies whether the current node is the master node for a [multi-instance task](batch-mpi.md). Possible values are `true` and `false`. | Multi-instance primary and subtasks. | `true` |
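For example, a multi-instance task script could read these variables as follows. This is an illustrative sketch: the helper name is ours, not a Batch SDK API, and it relies only on the variable formats documented in the table.

```python
import os

def get_multi_instance_info():
    """Parse AZ_BATCH_HOST_LIST (comma-separated node IPs) and
    AZ_BATCH_IS_CURRENT_NODE_MASTER ('true' or 'false') from the
    task environment. Hypothetical helper, not a Batch SDK function."""
    hosts = os.environ.get("AZ_BATCH_HOST_LIST", "")
    nodes = [ip.strip() for ip in hosts.split(",") if ip.strip()]
    is_master = os.environ.get("AZ_BATCH_IS_CURRENT_NODE_MASTER", "false") == "true"
    return nodes, is_master
```

A primary task could use `is_master` to decide whether to launch the coordinator process, and `nodes` to build an MPI host list.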
communication-services Phone Number Management For Australia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-australia.md
Use the below tables to find all the relevant information on number availability
| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | | :- | :- | :- | :- | : |
-| Toll-Free |- | - | Public Preview | Public Preview\* |
+| Toll-Free |- | - | - | Public Preview\* |
| Alphanumeric Sender ID\** | Public Preview | - | - | - | \* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
communication-services Contact Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/contact-center.md
Developers interested in scheduled business-to-consumer interactions should read
The term "contact center" captures a large family of applications diverse across scale, channels, and organizational approach:

- **Scale**. Small businesses may have a small number of employees operating as agents in a limited role, for example a restaurant offering a phone number for reservations. Meanwhile, an airline may have thousands of employees and vendors providing a 24/7 contact center.
-- **Channel**. Organizations can reach consumers through the phone system, apps, SMS, or consumer communication platforms such as WhatsApp.
+- **Channel**. Organizations can reach consumers through the phone system, apps, SMS, or consumer communication platforms.
- **Organizational approach**. Most businesses have employees operate as agents using Teams or licensed contact center as a service (CCaaS) software. Other businesses may outsource the agent role or use specialized service providers who fully operate contact centers as a service.

## User Personas
communications-gateway Configure Test Customer Teams Direct Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/configure-test-customer-teams-direct-routing.md
+
+title: Set up a test tenant for Microsoft Teams Direct Routing with Azure Communications Gateway
+description: Learn how to configure Azure Communications Gateway and Microsoft 365 for a Microsoft Teams Direct Routing customer for testing.
++++
+Last updated: 10/09/2023
+
+#CustomerIntent: As someone deploying Azure Communications Gateway, I want to test my deployment so that I can be sure that calls work.
++
+# Configure a test customer for Microsoft Teams Direct Routing with Azure Communications Gateway
+
+Testing Microsoft Teams Direct Routing requires some test numbers in a Microsoft 365 tenant, as if you're providing service to a real customer. We call this tenant (which you control) a _test customer tenant_, corresponding to your _test customer_ (to which you allocate the test numbers). Setting up a test customer requires configuration in the test customer tenant and on Azure Communications Gateway. This article explains how to set up that configuration. You can then configure test users and numbers in the tenant and start testing.
+
+> [!TIP]
+> When you onboard a real customer, you'll typically need to ask them to change their tenant's configuration, because your organization won't have permission. You'll still need to make configuration changes on Azure Communications Gateway.
+>
+> For more information about how Azure Communications Gateway and Microsoft Teams use tenant configuration to route calls, see [Support for multiple customers with the Microsoft Teams multitenant model](interoperability-teams-direct-routing.md#support-for-multiple-customers-with-the-microsoft-teams-multitenant-model).
+
+## Prerequisites
+
+You must have a Microsoft 365 tenant that you can use as a test customer. You must have at least one number that you can allocate to this test customer.
+
+You must have completed the following procedures.
+
+- [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md)
+- [Deploy Azure Communications Gateway](deploy.md)
+- [Connect Azure Communications Gateway to Microsoft Teams Direct Routing](connect-teams-direct-routing.md)
+
+Your organization must have integrated with Azure Communications Gateway's Provisioning API. Someone in your organization must be able to make requests using the Provisioning API during this procedure.
+
+You must be able to sign in to the Microsoft 365 admin center for your test customer tenant as a Global Administrator.
+
+## Choose a DNS subdomain label to use to identify the customer
+
+Choose a DNS label to identify the test customer. This label is used to create a subdomain of each per-region domain name for your Azure Communications Gateway. Microsoft Phone System and Azure Communications Gateway use this subdomain to match calls to tenants.
+
+The label can only contain letters, numbers, underscores and dashes. It can be up to 63 characters in length. You must not use wildcard subdomains or subdomains with multiple labels.
+
+For example, you could allocate the label `test`. Azure Communications Gateway's per-region domain names might be `pstn-region1.xyz.commsgw.azure.example.com` and `pstn-region2.xyz.commsgw.azure.example.com`. The label combined with the per-region domain names would therefore create the customer-specific domain names `test.pstn-region1.xyz.commsgw.azure.example.com` and `test.pstn-region2.xyz.commsgw.azure.example.com`.
+
+Make a note of the label you choose and the corresponding subdomains.
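The label rules and domain construction above can be sketched as a small helper. This is illustrative only: the function name is ours, and the domains in the usage note are the example values from this section.

```python
import re

def customer_domains(label, per_region_domains):
    """Validate a customer subdomain label (letters, digits, underscores,
    and dashes; at most 63 characters; a single label, so no dots) and
    combine it with each per-region domain name."""
    if not re.fullmatch(r"[A-Za-z0-9_-]{1,63}", label):
        raise ValueError(f"invalid subdomain label: {label!r}")
    return [f"{label}.{domain}" for domain in per_region_domains]
```

For instance, `customer_domains("test", ["pstn-region1.xyz.commsgw.azure.example.com"])` yields `["test.pstn-region1.xyz.commsgw.azure.example.com"]`, matching the example in the text.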
+
+## Start registering the subdomains in the customer tenant and get DNS TXT values
+
+To route calls to a customer tenant, the customer tenant must be configured with the customer-specific per-region domain names that you allocated in [Choose a DNS subdomain label to use to identify the customer](#choose-a-dns-subdomain-label-to-use-to-identify-the-customer). Microsoft 365 then requires you (as the carrier) to create DNS records that use a verification code that Microsoft 365 supplies in the customer tenant.
+
+1. Sign in to the Microsoft 365 admin center for the customer tenant as a Global Administrator.
+1. Using [Add a subdomain to the customer tenant and verify it](/microsoftteams/direct-routing-sbc-multiple-tenants#add-a-subdomain-to-the-customer-tenant-and-verify-it):
+ 1. Register the first customer-specific per-region domain name (for example `test.pstn-region1.xyz.commsgw.azure.example.com`).
+ 1. Start the verification process using TXT records.
+ 1. Note the TXT value that Microsoft 365 provides.
+1. Repeat the previous step for the second customer-specific per-region domain name.
+
+> [!IMPORTANT]
+> Don't complete the verification process yet. You must carry out [Use Azure Communications Gateway's Provisioning API to configure the customer and generate DNS records](#use-azure-communications-gateways-provisioning-api-to-configure-the-customer-and-generate-dns-records) first.
+
+## Use Azure Communications Gateway's Provisioning API to configure the customer and generate DNS records
+
+Azure Communications Gateway includes a DNS server. You must use Azure Communications Gateway to create the DNS records required to verify the customer subdomain. To generate the records, provision the details of the customer tenant and the DNS TXT values on Azure Communications Gateway.
+
+1. Use Azure Communications Gateway's Provisioning API to configure the customer as an account. The request must:
+ - Enable Direct Routing for the account.
+ - Specify the label for the subdomain that you chose (for example, `test`).
+ - Specify the DNS TXT values from [Start registering the subdomains in the customer tenant and get DNS TXT values](#start-registering-the-subdomains-in-the-customer-tenant-and-get-dns-txt-values). These values allow Azure Communications Gateway to generate DNS records for the subdomain.
+2. Use the Provisioning API to confirm that the DNS records have been generated, by checking the `direct_routing_provisioning_state` for the account.
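The account-creation request in step 1 might carry a body shaped like the following. This is a hedged sketch only: the field names, account name, and TXT values are hypothetical placeholders, and the real schema is defined by the Provisioning API reference, not by this example.

```python
import json

# Hypothetical request body for creating the customer account.
# Real field names come from the Provisioning API reference and may differ.
account = {
    "name": "test-customer",
    "serviceDetails": {
        "teamsDirectRouting": {
            "enabled": True,            # enable Direct Routing for the account
            "subdomain": "test",        # the DNS label chosen earlier
            "subdomainTokens": [        # DNS TXT values supplied by Microsoft 365
                "MS=ms00000001",        # placeholder value for region 1
                "MS=ms00000002",        # placeholder value for region 2
            ],
        }
    },
}
print(json.dumps(account, indent=2))
```

Whatever the exact schema, the request must carry all three pieces of information: the Direct Routing flag, the subdomain label, and both TXT values.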
+
+## Finish verifying the domains in the customer tenant
+
+When you have used Azure Communications Gateway to generate the DNS records for the customer subdomains, verify the subdomains in the Microsoft 365 admin center for your customer tenant.
+
+1. Sign in to the Microsoft 365 admin center for the customer tenant as a Global Administrator.
+1. Select **Settings** > **Domains**.
+1. Finish verifying the two customer-specific per-region domain names by following [Add a subdomain to the customer tenant and verify it](/microsoftteams/direct-routing-sbc-multiple-tenants#add-a-subdomain-to-the-customer-tenant-and-verify-it).
+
+## Configure the customer tenant's call routing to use Azure Communications Gateway
+
+In the customer tenant, [configure a call routing policy](/microsoftteams/direct-routing-voice-routing) (also called a voice routing policy) with a voice route that routes calls to Azure Communications Gateway.
+- Set the PSTN gateway to the customer-specific per-region domain names for Azure Communications Gateway (for example, `test.pstn-region1.xyz.commsgw.azure.example.com` and `test.pstn-region2.xyz.commsgw.azure.example.com`).
+- Don't configure any users to use the call routing policy yet.
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Configure test numbers](configure-test-numbers-teams-direct-routing.md)
communications-gateway Configure Test Numbers Teams Direct Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/configure-test-numbers-teams-direct-routing.md
+
+ Title: Set up test numbers for Microsoft Teams Direct Routing with Azure Communications Gateway
+description: Learn how to configure Azure Communications Gateway and Microsoft 365 with Microsoft Teams Direct Routing numbers for testing.
+Last updated: 10/09/2023
+#CustomerIntent: As someone deploying Azure Communications Gateway, I want to test my deployment so that I can be sure that calls work.
+# Configure test numbers for Microsoft Teams Direct Routing with Azure Communications Gateway
+
+To test Microsoft Teams Direct Routing with Azure Communications Gateway, you need a test customer tenant with test users and numbers. By following this article, you can set up the required user and number configuration in the customer Microsoft 365 tenant, on Azure Communications Gateway and in your network. You can then start testing.
+
+> [!TIP]
+> When you allocate numbers to a real customer, you'll typically need to ask them to change their tenant's configuration, because your organization won't have permission. You'll still need to make configuration changes on Azure Communications Gateway and to your network.
+
+## Prerequisites
+
+You must have at least one number that you can allocate to your test tenant.
+
+You must have completed the following procedures.
+
+- [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md)
+- [Deploy Azure Communications Gateway](deploy.md)
+- [Connect Azure Communications Gateway to Microsoft Teams Direct Routing](connect-teams-direct-routing.md)
+- [Configure a test customer for Microsoft Teams Direct Routing](configure-test-customer-teams-direct-routing.md)
+
+Your organization must have integrated with Azure Communications Gateway's Provisioning API. Someone in your organization must be able to make requests using the Provisioning API during this procedure.
+
+You must be able to sign in to the Microsoft 365 admin center for your test customer tenant as a Global Administrator.
+
+## Configure the test numbers on Azure Communications Gateway with the Provisioning API
+
+In [Configure a test customer for Microsoft Teams Direct Routing with Azure Communications Gateway](configure-test-customer-teams-direct-routing.md), you configured Azure Communications Gateway with an account for the test customer.
+
+Use Azure Communications Gateway's Provisioning API to provision the details of the numbers you chose under the account. Enable each number for Teams Direct Routing.
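A request to provision one test number under the account might look like the following. As with the account sketch, the field names and the number are hypothetical placeholders; consult the Provisioning API reference for the real schema.

```python
# Hypothetical request body for provisioning a test number under the
# customer account; the real Provisioning API schema may differ.
test_number = {
    "telephoneNumber": "+12025550100",   # placeholder E.164 test number
    "accountName": "test-customer",      # the account created for the test customer
    "serviceDetails": {
        "teamsDirectRouting": {
            "enabled": True,             # enable this number for Direct Routing
        }
    },
}
print(test_number["telephoneNumber"])
```

Repeat the request for each number that you allocated to the test tenant.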
+
+## Update your network's routing configuration
+
+Update your network configuration to route calls involving the test numbers to Azure Communications Gateway. For more information about how to route calls to Azure Communications Gateway, see [Call routing requirements](reliability-communications-gateway.md#call-routing-requirements).
+
+## Configure users in the test customer tenant
+
+### Create a user and assign a Teams Phone license
+
+Follow [Create a user and assign the license](/microsoftteams/direct-routing-enable-users#create-a-user-and-assign-the-license).
+
+If you are migrating users from Skype for Business Server Enterprise Voice, you must also [ensure that the user is homed online](/microsoftteams/direct-routing-enable-users#ensure-that-the-user-is-homed-online).
+
+### Configure phone numbers for the user and enable enterprise voice
+
+Follow [Configure the phone number and enable enterprise voice](/microsoftteams/direct-routing-enable-users#create-a-user-and-assign-the-license) to assign phone numbers and enable calling.
+
+### Assign Teams Only mode to users
+
+Follow [Assign Teams Only mode to users to ensure calls land in Microsoft Teams](/microsoftteams/direct-routing-enable-users#assign-teams-only-mode-to-users-to-ensure-calls-land-in-microsoft-teams). This step ensures that incoming calls ring in the Microsoft Teams client.
+
+### Assign the voice routing policy with Azure Communications Gateway to users
+
+In [Configure a test customer for Microsoft Teams Direct Routing with Azure Communications Gateway](configure-test-customer-teams-direct-routing.md), you set up a voice route that routes calls to Azure Communications Gateway. Assign the voice route to the test users by following the steps for assigning voice routing policies in [Configure call routing for Direct Routing](/microsoftteams/direct-routing-voice-routing).
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Prepare for live traffic](prepare-for-live-traffic-teams-direct-routing.md)
+
communications-gateway Connect Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connect-operator-connect.md
Title: Connect Azure Communications Gateway to Operator Connect or Teams Phone Mobile
-description: After deploying Azure Communications Gateway, you must configure it to connect to the Operator Connect and Teams Phone Mobile environments.
+description: After deploying Azure Communications Gateway, you can configure it to connect to the Operator Connect and Teams Phone Mobile environments.
Previously updated: 07/07/2023
Last updated: 10/09/2023
 - template-how-to-pattern
 - has-azure-ad-ps-ref
-# Connect to Operator Connect or Teams Phone Mobile
+# Connect Azure Communications Gateway to Operator Connect or Teams Phone Mobile
After you have deployed Azure Communications Gateway, you need to connect it to the Microsoft Phone System and to your core network. You also need to onboard to the Operator Connect or Teams Phone Mobile environments.
You must have carried out all the steps in [Deploy Azure Communications Gateway]
You must have access to a user account with the Azure Active Directory Global Admin role.
-## 1. Add the Project Synergy application to your Azure tenancy
+## Add the Project Synergy application to your Azure tenancy
> [!NOTE]
->This step and the next step ([2. Assign an Admin user to the Project Synergy application](#2-assign-an-admin-user-to-the-project-synergy-application)) set you up as an Operator in the Teams Phone Mobile (TPM) and Operator Connect (OC) environments. If you've already gone through onboarding, go to [3. Find the Object ID and Application ID for your Azure Communication Gateway resource](#3-find-the-object-id-and-application-id-for-your-azure-communication-gateway-resource).
+>This step and the next step ([Assign an Admin user to the Project Synergy application](#assign-an-admin-user-to-the-project-synergy-application)) set you up as an Operator in the Teams Phone Mobile (TPM) and Operator Connect (OC) environments. If you've already gone through onboarding, go to [Find the Object ID and Application ID for your Azure Communication Gateway resource](#find-the-object-id-and-application-id-for-your-azure-communication-gateway-resource).
The Operator Connect and Teams Phone Mobile programs require your Azure Active Directory tenant to contain a Microsoft application called Project Synergy. Operator Connect and Teams Phone Mobile inherit permissions and identities from your Azure Active Directory tenant through the Project Synergy application. The Project Synergy application also allows configuration of Operator Connect or Teams Phone Mobile and assigning users and groups to specific roles.
To add the Project Synergy application:
```azurepowershell
New-AzureADServicePrincipal -AppId eb63d611-525e-4a31-abd7-0cb33f679599 -DisplayName "Operator Connect"
```
-## 2. Assign an Admin user to the Project Synergy application
+## Assign an Admin user to the Project Synergy application
The user who sets up Azure Communications Gateway needs to have the Admin user role in the Project Synergy application. Assign them this role in the Azure portal.
The user who sets up Azure Communications Gateway needs to have the Admin user r
1. Select **Add user/group**.
1. Specify the user you want to use for setting up Azure Communications Gateway and give them the **Admin** role.
-## 3. Find the Object ID and Application ID for your Azure Communication Gateway resource
+## Find the Object ID and Application ID for your Azure Communication Gateway resource
-Each Azure Communications Gateway resource automatically receives a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md), which Azure Communications Gateway uses to connect to the Operator Connect environment. You need to find the Object ID and Application ID of the managed identity, so that you can connect Azure Communications Gateway to the Operator Connect or Teams Phone Mobile environment in [4. Set up application roles for Azure Communications Gateway](#4-set-up-application-roles-for-azure-communications-gateway) and [7. Add the Application ID for Azure Communications Gateway to Operator Connect](#7-add-the-application-id-for-azure-communications-gateway-to-operator-connect).
+Each Azure Communications Gateway resource automatically receives a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md), which Azure Communications Gateway uses to connect to the Operator Connect environment. You need to find the Object ID and Application ID of the managed identity, so that you can connect Azure Communications Gateway to the Operator Connect or Teams Phone Mobile environment in [Set up application roles for Azure Communications Gateway](#set-up-application-roles-for-azure-communications-gateway) and [Add the Application ID for Azure Communications Gateway to Operator Connect](#add-the-application-id-for-azure-communications-gateway-to-operator-connect).
1. Sign in to the [Azure portal](https://azure.microsoft.com/).
1. In the search bar at the top of the page, search for your Communications Gateway resource.
Each Azure Communications Gateway resource automatically receives a [system-assi
1. Check that the **Object ID** matches the **Object (principal) ID** value that you copied.
1. Make a note of the **Application ID**.
-## 4. Set up application roles for Azure Communications Gateway
+## Set up application roles for Azure Communications Gateway
-Azure Communications Gateway contains services that need to access the Operator Connect API on your behalf. To enable this access, you must grant specific application roles to the system-assigned managed identity for Azure Communications Gateway under the Project Synergy Enterprise Application. You created the Project Synergy Enterprise Application in [1. Add the Project Synergy application to your Azure tenancy](#1-add-the-project-synergy-application-to-your-azure-tenancy).
+Azure Communications Gateway contains services that need to access the Operator Connect API on your behalf. To enable this access, you must grant specific application roles to the system-assigned managed identity for Azure Communications Gateway under the Project Synergy Enterprise Application. You created the Project Synergy Enterprise Application in [Add the Project Synergy application to your Azure tenancy](#add-the-project-synergy-application-to-your-azure-tenancy).
> [!IMPORTANT]
-> Granting permissions has two parts: configuring the system-assigned managed identity for Azure Communications Gateway with the appropriate roles (this step) and adding the application ID of the managed identity to the Operator Connect or Teams Phone Mobile environment. You'll add the application ID to the Operator Connect or Teams Phone Mobile environment later, in [7. Add the Application ID for Azure Communications Gateway to Operator Connect](#7-add-the-application-id-for-azure-communications-gateway-to-operator-connect).
+> Granting permissions has two parts: configuring the system-assigned managed identity for Azure Communications Gateway with the appropriate roles (this step) and adding the application ID of the managed identity to the Operator Connect or Teams Phone Mobile environment. You'll add the application ID to the Operator Connect or Teams Phone Mobile environment later, in [Add the Application ID for Azure Communications Gateway to Operator Connect](#add-the-application-id-for-azure-communications-gateway-to-operator-connect).
Do the following steps in the tenant that contains your Project Synergy application.
Do the following steps in the tenant that contains your Project Synergy applicat
```azurepowershell
Connect-AzureAD -TenantId "<AADTenantID>"
```
-1. Run the following cmdlet, replacing *`<CommunicationsGatewayObjectID>`* with the Object ID you noted down in [3. Find the Object ID and Application ID for your Azure Communication Gateway resource](#3-find-the-object-id-and-application-id-for-your-azure-communication-gateway-resource).
+1. Run the following cmdlet, replacing *`<CommunicationsGatewayObjectID>`* with the Object ID you noted down in [Find the Object ID and Application ID for your Azure Communication Gateway resource](#find-the-object-id-and-application-id-for-your-azure-communication-gateway-resource).
```azurepowershell
$commGwayObjectId = "<CommunicationsGatewayObjectID>"
```
Do the following steps in the tenant that contains your Project Synergy applicat
```
-## 5. Provide additional information to your onboarding team
+## Provide additional information to your onboarding team
> [!NOTE]
> This step is required to set you up as an Operator in the Teams Phone Mobile (TPM) and Operator Connect (OC) environments. Skip this step if you have already onboarded to TPM or OC.
Before your onboarding team can finish onboarding you to the Operator Connect an
If you don't already have an onboarding team, contact azcog-enablement@microsoft.com, providing your Azure subscription ID and contact details.
-## 6. Test your Operator Connect portal access
+## Test your Operator Connect portal access
> [!IMPORTANT]
> Before testing your Operator Connect portal access, wait for your onboarding team to confirm that the onboarding process is complete.

Go to the [Operator Connect homepage](https://operatorconnect.microsoft.com/) and check that you're able to sign in.
-## 7. Add the Application ID for Azure Communications Gateway to Operator Connect
+## Add the Application ID for Azure Communications Gateway to Operator Connect
-You must enable the Azure Communications Gateway application within the Operator Connect or Teams Phone Mobile environment. Enabling the application allows Azure Communications Gateway to use the roles that you set up in [4. Set up application roles for Azure Communications Gateway](#4-set-up-application-roles-for-azure-communications-gateway).
+You must enable the Azure Communications Gateway application within the Operator Connect or Teams Phone Mobile environment. Enabling the application allows Azure Communications Gateway to use the roles that you set up in [Set up application roles for Azure Communications Gateway](#set-up-application-roles-for-azure-communications-gateway).
-To enable the application, add the Application ID of the system-assigned managed identity representing Azure Communications Gateway to your Operator Connect or Teams Phone Mobile environment. You found this ID in [3. Find the Object ID and Application ID for your Azure Communication Gateway resource](#3-find-the-object-id-and-application-id-for-your-azure-communication-gateway-resource).
+To enable the application, add the Application ID of the system-assigned managed identity representing Azure Communications Gateway to your Operator Connect or Teams Phone Mobile environment. You found this ID in [Find the Object ID and Application ID for your Azure Communication Gateway resource](#find-the-object-id-and-application-id-for-your-azure-communication-gateway-resource).
1. Sign in to the [Operator Connect portal](https://operatorconnect.microsoft.com/operator/configuration).
1. Add a new **Application Id**, using the Application ID that you found.
-## 8. Register your deployment's domain name in Active Directory
+## Register your deployment's domain name in Active Directory
Microsoft Teams only sends traffic to domains that you've confirmed that you own. Your Azure Communications Gateway deployment automatically receives an autogenerated fully qualified domain name (FQDN). You need to add this domain name to your Active Directory tenant as a custom domain name, share the details with your onboarding team and then verify the domain name. This process confirms that you own the domain.
communications-gateway Connect Teams Direct Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connect-teams-direct-routing.md
+
+ Title: Connect Azure Communications Gateway to Microsoft Teams Direct Routing
+description: After deploying Azure Communications Gateway, you can configure it to connect to the Microsoft Phone System for Microsoft Teams Direct Routing.
+Last updated: 10/09/2023
+ - template-how-to-pattern
+# Connect Azure Communications Gateway to Microsoft Teams Direct Routing
+
+After you have deployed Azure Communications Gateway, you need to connect it to the Microsoft Phone System and to your core network.
+
+This article describes how to start setting up Azure Communications Gateway for Microsoft Teams Direct Routing. When you have finished the steps in this article, you can set up test users for test calls and prepare for live traffic.
+
+## Prerequisites
+
+You must have carried out all the steps in [Deploy Azure Communications Gateway](deploy.md).
+
+Your organization must have integrated with Azure Communications Gateway's Provisioning API.
+
+You must have **Reader** access to the subscription into which Azure Communications Gateway is deployed.
+
+You must be able to sign in to the Microsoft 365 admin center for your tenant as a Global Administrator.
+
+## Find your Azure Communication Gateway's domain names
+
+Microsoft Teams only sends traffic to domains that you've confirmed that you own. Your Azure Communications Gateway deployment automatically receives an autogenerated fully qualified domain name (FQDN) and regional subdomains of this domain.
+
+1. Sign in to the [Azure portal](https://azure.microsoft.com/).
+1. In the search bar at the top of the page, search for your Communications Gateway resource.
+1. Select your Communications Gateway resource. Check that you're on the **Overview** of your Azure Communications Gateway resource.
+1. Select **Properties**.
+1. Find the field named **Domain**. This name is your deployment's _base domain name_.
+1. In each **Service Location** section, find the **Hostname** field. This field provides the _per-region domain name_. Your deployment has two service regions and therefore two per-region domain names.
+1. Note down the base domain name and the per-region domain names. You'll need these values in the next steps.
+
+## Register the base domain name for Azure Communications Gateway in your tenant
+
+You need to register the base domain for Azure Communications Gateway in your tenant and verify it. Registering and verifying the base domain proves that you control the domain.
+
+> [!TIP]
+> If the base domain name is a subdomain of a domain already registered and verified in this tenant:
+> - You must register Azure Communications Gateway's base domain name.
+> - Microsoft 365 automatically verifies the base domain name.
+
+Follow the instructions [to add a base domain to your tenant](/microsoftteams/direct-routing-sbc-multiple-tenants#add-a-base-domain-to-the-tenant-and-verify-it). Use the base domain name that you found in [Find your Azure Communication Gateway's domain names](#find-your-azure-communication-gateways-domain-names).
+
+If Microsoft 365 prompts you to verify the domain name:
+
+1. Select DNS TXT records as your verification method with **Add a TXT record instead**.
+1. Select **Next**, and note the TXT value that Microsoft 365 provides.
+1. Provide the TXT value to your onboarding team as part of [Provide additional information to your onboarding team](#provide-additional-information-to-your-onboarding-team).
+
+Don't try to finish verifying the domain name until your onboarding team has confirmed that DNS records with the TXT value have been set up.
+
+## Provide additional information to your onboarding team
+
+Before your onboarding team can finish onboarding you to the Microsoft Teams Direct Routing environment, you need to provide them with some additional information.
+
+1. Wait for your onboarding team to provide you with a form to collect the additional information.
+1. Complete the form and give it to your onboarding team.
+
+If you don't already have an onboarding team, contact azcog-enablement@microsoft.com, providing your Azure subscription ID and contact details.
+
+## Finish verifying the base domain name in Microsoft 365
+
+> [!NOTE]
+> If Microsoft 365 did not prompt you to verify the domain in [Register the base domain name for Azure Communications Gateway in your tenant](#register-the-base-domain-name-for-azure-communications-gateway-in-your-tenant), skip this step.
+
+After your onboarding team confirms that the DNS records have been set up, finish verifying the base domain name in the Microsoft 365 admin center.
+
+1. Sign in to the Microsoft 365 admin center as a Global Administrator.
+1. Select **Settings** > **Domains**.
+1. Select the base domain.
+1. On the **Choose your online services** page, clear all options and select **Next**.
+1. Select **Finish** on the **Update DNS settings** page.
+1. Ensure that the status is **Setup complete**.
+
+## Set up a user or resource account with the base domain and an appropriate license
+
+To activate the base domain in Microsoft 365, you must have at least one user or resource account licensed for Microsoft Teams. For more information, including the licenses you can use, see [Activate the domain name](/microsoftteams/direct-routing-sbc-multiple-tenants#activate-the-domain-name).
+
+## Connect your tenant to Azure Communications Gateway
+
+You must configure your Microsoft 365 tenant with two SIP trunks to Azure Communications Gateway. Each trunk connects to one of the per-region domain names that you found in [Find your Azure Communication Gateway's domain names](#find-your-azure-communication-gateways-domain-names).
+
+Follow [Connect your Session Border Controller (SBC) to Direct Routing](/microsoftteams/direct-routing-connect-the-sbc), using the following configuration settings.
+
+| Teams Admin Center setting | PowerShell parameter | Value to use (Admin Center / PowerShell) |
+| -- | -- | |
+| **Add an FQDN for the SBC** | `FQDN` |The per-region domain name of Azure Communications Gateway |
+| **Enabled** | `Enabled` | On / True |
+| **SIP signaling port** | `SipSignalingPort` | 5063 |
+| **Send SIP options** | `SendSIPOptions` | On / True |
+| **Forward call history** | `ForwardCallHistory` | On / True|
+| **Forward P-Asserted-identity (PAI) header** | `ForwardPAI` | On / True |
+| **Concurrent call capacity** | `MaxConcurrentSessions` | Leave as default |
+| **Failover response codes** | `FailoverResponseCodes` |Leave as default|
+| **Failover times (seconds)** | `FailoverTimeSeconds` |Leave as default|
+| **SBC supports PIDF/LO for emergency calls** | `PidfloSupported` | On / True |
+| - | `MediaBypass` |- / False|
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Configure a test customer](configure-test-customer-teams-direct-routing.md)
communications-gateway Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md
Previously updated: 09/06/2023
Last updated: 10/09/2023

# Deploy Azure Communications Gateway
This article guides you through planning for and creating an Azure Communication
You must have completed [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md).

[!INCLUDE [communications-gateway-deployment-prerequisites](includes/communications-gateway-deployment-prerequisites.md)]
-## 1. Collect basic information for deploying an Azure Communications Gateway
+## Collect basic information for deploying an Azure Communications Gateway
Collect all of the values in the following table for the Azure Communications Gateway resource.
You must have completed [Prepare to deploy Azure Communications Gateway](prepare
 |The Azure resource group in which to create the Azure Communications Gateway resource. |**Project details: Resource group**|
 |The name for the deployment. This name can contain alphanumeric characters and `-`. It must be 3-24 characters long. |**Instance details: Name**|
 |The management Azure region: the region in which your monitoring and billing data is processed. We recommend that you select a region near or colocated with the two regions for handling call traffic. |**Instance details: Region**|
- |The voice codecs to use between Azure Communications Gateway and your network. |**Instance details: Supported Codecs**|
- |The Unified Communications as a Service (UCaaS) service(s) Azure Communications Gateway should support. Choose from Teams Phone Mobile and Operator Connect. |**Instance details: Supported Voice Platforms**|
- |Whether your Azure Communications Gateway resource should handle emergency calls as standard calls or directly route them to the Emergency Services Routing Proxy (US only). |**Instance details: Emergency call handling**|
- |The scope at which Azure Communications Gateway's autogenerated domain name label is unique. Communications Gateway resources get assigned an autogenerated domain name label that depends on the name of the resource. You'll need to register the domain name later when you deploy Azure Communications Gateway. Selecting **Tenant** gives a resource with the same name in the same tenant but a different subscription the same label. Selecting **Subscription** gives a resource with the same name in the same subscription but a different resource group the same label. Selecting **Resource Group** gives a resource with the same name in the same resource group the same label. Selecting **No Re-use** means the label doesn't depend on the name, resource group, subscription or tenant. |**Instance details: Auto-generated Domain Name Scope**|
- |The number used in Teams Phone Mobile to access the Voicemail Interactive Voice Response (IVR) from native dialers.|**Instance details: Teams Voicemail Pilot Number**|
- |A list of dial strings used for emergency calling.|**Instance details: Emergency Dial Strings**|
- | How you plan to use Mobile Control Point (MCP) to route Teams Phone Mobile calls to Microsoft Phone System. Choose from **Integrated** (to deploy MCP in Azure Communications Gateway), **On-premises** (to use an existing on-premises MCP) or **None** (if you don't plan to offer Teams Phone Mobile or you'll use another method to route calls). |**Instance details: MCP**|
+ |The voice codecs to use between Azure Communications Gateway and your network. |**Call Handling: Supported codecs**|
+ |Whether your Azure Communications Gateway resource should handle emergency calls as standard calls or directly route them to the Emergency Routing Service Provider (US only; only for Operator Connect or Teams Phone Mobile). |**Call Handling: Emergency call handling**|
+ |A list of dial strings used for emergency calling.|**Call Handling: Emergency dial strings**|
+ |Whether to use an autogenerated `*.commsgw.azure.com` domain name or to use a subdomain of your own domain by delegating it to Azure Communications Gateway. For more information on this choice, see [the guidance on creating a network design](prepare-to-deploy.md#create-a-network-design). | **DNS: Domain name options** |
+ |(Required if you choose an autogenerated domain) The scope at which the autogenerated domain name label for Azure Communications Gateway is unique. Communications Gateway resources are assigned an autogenerated domain name label that depends on the name of the resource. Selecting **Tenant** gives a resource with the same name in the same tenant but a different subscription the same label. Selecting **Subscription** gives a resource with the same name in the same subscription but a different resource group the same label. Selecting **Resource Group** gives a resource with the same name in the same resource group the same label. Selecting **No Re-use** means the label doesn't depend on the name, resource group, subscription or tenant. |**DNS: Auto-generated Domain Name Scope**|
+ | (Required if you choose a delegated domain) The domain to delegate to this Azure Communications Gateway deployment | **DNS: DNS domain name** |
-## 2. Collect Service Regions configuration values
+## Collect configuration values for service regions
Collect all of the values in the following table for both service regions in which you want to deploy Azure Communications Gateway.
Collect all of the values in the following table for both service regions in whi
|The set of IP addresses/ranges that are permitted as sources for signaling traffic from your network. Provide an IPv4 address range using CIDR notation (for example, 192.0.2.0/24) or an IPv4 address (for example, 192.0.2.0). You can also provide a comma-separated list of IPv4 addresses and/or address ranges.|**Service Region One/Two: Allowed Signaling Source IP Addresses/CIDR Ranges**|
|The set of IP addresses/ranges that are permitted as sources for media traffic from your network. Provide an IPv4 address range using CIDR notation (for example, 192.0.2.0/24) or an IPv4 address (for example, 192.0.2.0). You can also provide a comma-separated list of IPv4 addresses and/or address ranges.|**Service Region One/Two: Allowed Media Source IP Address/CIDR Ranges**|
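Before entering the allowed signaling and media source lists, it can help to sanity-check that each entry parses as an IPv4 address or CIDR range. The following is a minimal sketch using Python's standard `ipaddress` module; the addresses shown are documentation ranges, not values from a real deployment.

```python
import ipaddress

def validate_source_list(value: str) -> list[str]:
    """Validate a comma-separated list of IPv4 addresses and/or CIDR ranges.

    Returns the normalized entries, or raises ValueError on bad input.
    """
    normalized = []
    for entry in value.split(","):
        entry = entry.strip()
        # strict=False accepts a bare host address (for example 192.0.2.1) as a /32 network
        network = ipaddress.IPv4Network(entry, strict=False)
        normalized.append(str(network))
    return normalized

print(validate_source_list("192.0.2.0/24, 198.51.100.7"))
# ['192.0.2.0/24', '198.51.100.7/32']
```

A ValueError from this check indicates an entry that the portal is also likely to reject.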
-## 3. Collect Test Lines configuration values
+## Collect configuration values for each communications service
+
+Collect the values for the communications services that you're planning to support.
+
+> [!IMPORTANT]
+> Some options apply to multiple services, as shown by **Options common to multiple communications services** in the following tables. You must choose configuration that is suitable for all the services that you plan to support.
+
+For Microsoft Teams Direct Routing:
+
+|**Value**|**Field name(s) in Azure portal**|
+|||
+| IP addresses or address ranges (in CIDR format) in your network that should be allowed to connect to the Provisioning API, in a comma-separated list. Use of the Provisioning API is required to provision Azure Communications Gateway with numbers for Direct Routing. | **Options common to multiple communications services**|
+| Whether to add a custom SIP header to messages entering your network by using Azure Communications Gateway's Provisioning API | **Options common to multiple communications services**|
+| (Only if you choose to add a custom SIP header) The name of any custom SIP header | **Options common to multiple communications services**|
+
+For Operator Connect:
+
+|**Value**|**Field name(s) in Azure portal**|
+|||
+| Whether to add a custom SIP header to messages entering your network by using Azure Communications Gateway's Provisioning API | **Options common to multiple communications services**|
+| (Only if you choose to add a custom SIP header) The name of any custom SIP header | **Options common to multiple communications services**|
+| (Only if you choose to add a custom SIP header) IP addresses or address ranges (in CIDR format) in your network that should be allowed to connect to the Provisioning API, in a comma-separated list. | **Options common to multiple communications services**|
+
+For Teams Phone Mobile:
+
+|**Value**|**Field name(s) in Azure portal**|
+|||
+|The number used in Teams Phone Mobile to access the Voicemail Interactive Voice Response (IVR) from native dialers.|**Teams Phone Mobile: Teams voicemail pilot number**|
+| How you plan to use Mobile Control Point (MCP) to route Teams Phone Mobile calls to Microsoft Phone System. Choose from **Integrated** (to deploy MCP in Azure Communications Gateway), **On-premises** (to use an existing on-premises MCP) or **None** (if you'll use another method to route calls). |**Teams Phone Mobile: MCP**|
+
+## Collect test line and number configuration values
Collect all of the values in the following table for all the test lines that you want to configure for Azure Communications Gateway.

|**Value**|**Field name(s) in Azure portal**|
|||
- |The name of the test line. |**Name**|
- |The phone number of the test line, in E.164 format and including the country code. |**Phone Number**|
- |The purpose of the test line: **Manual** (for manual test calls by you and/or Microsoft staff during integration testing) or **Automated** (for automated validation with Microsoft Teams test suites).|**Testing purpose**|
+ |A name for the test line. |**Name**|
+ |The phone number for the test line, in E.164 format and including the country code. |**Phone Number**|
+ |The purpose of the test line: **Manual** (for manual test calls by you and/or Microsoft staff during integration testing) or **Automated** (for automated validation with Microsoft Teams test suites - Operator Connect and Teams Phone Mobile only).|**Testing purpose**|
> [!IMPORTANT]
-> You must configure at least six automated test lines. We recommend nine automated test lines (to allow simultaneous tests).
+> For Operator Connect and Teams Phone Mobile, you must configure at least six automated test lines. We recommend nine automated test lines (to allow simultaneous tests).
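Test line phone numbers must be in E.164 format, including the country code. The check below is a minimal sketch using the common E.164 shape (a leading `+`, a nonzero first digit, at most 15 digits in total); the numbers shown are illustrative placeholders.

```python
import re

# E.164: a leading +, a 1-9 first digit, then up to 14 further digits.
E164_PATTERN = re.compile(r"^\+[1-9]\d{1,14}$")

def is_e164(number: str) -> bool:
    """Return True if the number looks like a valid E.164 string."""
    return bool(E164_PATTERN.match(number))

print(is_e164("+14255550123"))  # valid: leading + and country code
print(is_e164("4255550123"))    # invalid: missing leading +
```

Numbers that fail this check (for example, national-format numbers without a country code) need reformatting before you enter them in the **Phone Number** field.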
-## 4. Decide if you want tags
+## Decide if you want tags
Resource naming and tagging are useful for resource management. They enable your organization to locate and keep track of resources associated with specific teams or workloads, and to track the consumption of cloud resources by business area and team more accurately. If you believe tagging would be useful for your organization, design your naming and tagging conventions by following the information in the [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/).
-## 5. Start creating an Azure Communications Gateway resource
+## Start creating an Azure Communications Gateway resource
Use the Azure portal to create an Azure Communications Gateway resource.
Use the Azure portal to create an Azure Communications Gateway resource.
:::image type="content" source="media/deploy/create.png" alt-text="Screenshot of the Azure portal. Shows the existing Azure Communications Gateway. A Create button allows you to create more Azure Communications Gateways.":::
-1. Use the information you collected in [1. Collect basic information for deploying an Azure Communications Gateway](#1-collect-basic-information-for-deploying-an-azure-communications-gateway) to fill out the fields in the **Basics** configuration section and then select **Next: Service Regions**.
-
- :::image type="content" source="media/deploy/basics.png" alt-text="Screenshot of the Create an Azure Communications Gateway portal, showing the Basics section.":::
-
-1. Use the information you collected in [2. Collect Service Regions configuration values](#2-collect-service-regions-configuration-values) to fill out the fields in the **Service Regions** section and then select **Next: Tags**.
+1. Use the information you collected in [Collect basic information for deploying an Azure Communications Gateway](#collect-basic-information-for-deploying-an-azure-communications-gateway) to fill out the fields in the **Basics** configuration tab and then select **Next: Service Regions**.
+1. Use the information you collected in [Collect configuration values for service regions](#collect-configuration-values-for-service-regions) to fill out the fields in the **Service Regions** tab and then select **Next: Communications Services**.
+1. Select the communications services that you want to support in the **Communications Services** configuration tab, use the information that you collected in [Collect configuration values for each communications service](#collect-configuration-values-for-each-communications-service) to fill out the fields, and then select **Next: Test Lines**.
+1. Use the information that you collected in [Collect test line and number configuration values](#collect-test-line-and-number-configuration-values) to fill out the fields in the **Test Lines** configuration tab and then select **Next: Tags**.
1. (Optional) Configure tags for your Azure Communications Gateway resource: enter a **Name** and **Value** for each tag you want to create.
1. Select **Review + create**.
If you haven't filled in the configuration correctly, the Azure portal displays a
:::image type="content" source="media/deploy/failed-validation.png" alt-text="Screenshot of the Create an Azure Communications Gateway portal, showing a validation that failed due to missing information in the Contacts section.":::
-## 6. Submit your Azure Communications Gateway configuration
+## Submit your Azure Communications Gateway configuration
Check your configuration and ensure it matches your requirements. If the configuration is correct, select **Create**.
Once your resource has been provisioned, a message appears saying **Your deploym
:::image type="content" source="media/deploy/go-to-resource-group.png" alt-text="Screenshot of the Create an Azure Communications Gateway portal, showing a completed deployment screen.":::
-## 7. Wait for provisioning to complete
+## Wait for provisioning to complete
Wait for your resource to be provisioned and connected. When your resource is ready, your onboarding team contacts you and the Provisioning Status field on the resource overview changes to "Complete." We recommend that you check in periodically to see if the Provisioning Status field has changed. This step might take up to two weeks.
-## 8. Connect Azure Communications Gateway to your networks
+## Connect Azure Communications Gateway to your networks
When your resource has been provisioned, you can connect Azure Communications Gateway to your networks.
When your resource has been provisioned, you can connect Azure Communications Ga
1. Configure your infrastructure to meet the call routing requirements described in [Reliability in Azure Communications Gateway](reliability-communications-gateway.md).
   * Depending on your network, you might need to configure SBCs, softswitches and access control lists (ACLs).
   * Your network needs to send SIP traffic to per-region FQDNs for Azure Communications Gateway. To find these FQDNs:
+ 1. Sign in to the [Azure portal](https://azure.microsoft.com/).
+ 1. In the search bar at the top of the page, search for your Communications Gateway resource.
      1. Go to the **Overview** page for your Azure Communications Gateway resource.
      1. In each **Service Location** section, find the **Hostname** field. You need to validate TLS connections against this hostname to ensure secure connections.
   * We recommend configuring an SRV lookup for each region, using `_sip._tls.<regional-FQDN-from-portal>`. Replace *`<regional-FQDN-from-portal>`* with the per-region FQDNs that you found in the **Overview** page for your resource.
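The recommended SRV record name is derived mechanically from each per-region FQDN. The following sketch shows the construction; the FQDN used is hypothetical, so substitute the per-region hostnames from your resource's **Overview** page.

```python
def srv_lookup_name(regional_fqdn: str) -> str:
    """Build the recommended SIP-over-TLS SRV record name for a per-region FQDN."""
    # Strip any trailing dot so the prefix joins cleanly
    return f"_sip._tls.{regional_fqdn.rstrip('.')}"

# Hypothetical regional FQDN, for illustration only
print(srv_lookup_name("region1.example.commsgw.azure.com"))
# _sip._tls.region1.example.commsgw.azure.com
```

You can then query the resulting name with your usual DNS tooling (for example, `nslookup -type=SRV <name>`) to confirm your SRV configuration resolves as expected.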
When your resource has been provisioned, you can connect Azure Communications Ga
   - With MAPS, BFD must bring up the BGP peer for each Private Network Interface (PNI).
1. Meet any other requirements for your communications platform (for example, the *Network Connectivity Specification* for Operator Connect or Teams Phone Mobile). If you don't have access to Operator Connect or Teams Phone Mobile specifications, contact your onboarding team.
+## Configure domain delegation with Azure DNS
+
+> [!NOTE]
+> If you decided to use an automatically allocated `*.commsgw.azure.com` domain name for Azure Communications Gateway, skip this step.
+
+If you chose to delegate a subdomain when you created Azure Communications Gateway, you must update the name server (NS) records for this subdomain to point to name servers created for you in your Azure Communications Gateway deployment.
+
+1. Sign in to the [Azure portal](https://azure.microsoft.com/).
+1. In the search bar at the top of the page, search for your Communications Gateway resource.
+1. On the **Overview** page for your Azure Communications Gateway resource, find the four name servers that have been created for you.
+1. Note down the names of these name servers, including the trailing `.` at the end of the address.
+1. Follow [Delegate the domain](../dns/dns-delegate-domain-azure-dns.md#delegate-the-domain) and [Verify the delegation](../dns/dns-delegate-domain-azure-dns.md#verify-the-delegation) to configure all four name servers in your NS records. We recommend configuring a time-to-live (TTL) of two days.
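The resulting delegation can be sketched as a BIND-style zone fragment. All names below are hypothetical placeholders: use the subdomain you chose and the four name servers shown on your resource's **Overview** page. The TTL of 172800 seconds corresponds to the recommended two days.

```txt
; Hypothetical example: delegating acg.contoso.com with a two-day TTL.
; Replace the name server values with the four servers from your
; Azure Communications Gateway Overview page, keeping the trailing dot.
acg.contoso.com.  172800  IN  NS  ns1-01.azure-dns.com.
acg.contoso.com.  172800  IN  NS  ns2-01.azure-dns.net.
acg.contoso.com.  172800  IN  NS  ns3-01.azure-dns.org.
acg.contoso.com.  172800  IN  NS  ns4-01.azure-dns.info.
```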
+ ## Next steps > [!div class="nextstepaction"]
-> [Connect to Operator Connect or Teams Phone Mobile](connect-operator-connect.md)
+> [Integrate with Azure Communications Gateway's Provisioning API](integrate-with-provisioning-api.md)
communications-gateway Emergency Calling Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/emergency-calling-operator-connect.md
Title: Emergency Calling with Azure Communications Gateway
-description: Understand Azure Communications Gateway's support for emergency calling
+ Title: Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway
+description: Understand Azure Communications Gateway's support for emergency calling with Operator Connect and Teams Phone Mobile
Previously updated : 01/09/2023 Last updated : 10/09/2023
Azure Communications Gateway supports Operator Connect and Teams Phone Mobile su
If a subscriber uses a Microsoft Teams client to make an emergency call and the subscriber's number is associated with Azure Communications Gateway, Microsoft Phone System routes the call to Azure Communications Gateway. The call has location information encoded in a PIDF-LO (Presence Information Data Format Location Object) SIP body.
-Unless you choose to route emergency calls directly to an Emergency Routing Service Provider (US only), Azure Communications Gateway routes emergency calls to your network with this PIDF-LO location information unaltered. It is your responsibility to ensure that these emergency calls are properly routed to an appropriate Public Safety Answering Point (PSAP). For more information on how Microsoft Teams handles emergency calls, see [the Microsoft Teams documentation on managing emergency calling](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing#considerations-for-operator-connect).
+Unless you choose to route emergency calls directly to an Emergency Routing Service Provider (US only), Azure Communications Gateway routes emergency calls to your network with this PIDF-LO location information unaltered. It is your responsibility to ensure that these emergency calls are properly routed to an appropriate Public Safety Answering Point (PSAP). For more information on how Microsoft Teams handles emergency calls, see [the Microsoft Teams documentation on managing emergency calling](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing) and the considerations for [Operator Connect](/microsoftteams/considerations-operator-connect) or [Teams Phone Mobile](/microsoftteams/considerations-teams-phone-mobile).
Microsoft Teams always sends location information on SIP INVITEs for emergency calls. This information can come from several sources, all supported by Azure Communications Gateway:
Microsoft Teams always sends location information on SIP INVITEs for emergency c
> [!NOTE]
> If you are taking responsibility for assigning static locations to numbers, note that enterprise administrators must have created the locations within the Microsoft Teams Admin Center first.
-Azure Communications Gateway identifies emergency calls based on the dialing strings configured when you [deploy the Azure Communications Gateway resource](deploy.md). These strings will also be used by Microsoft Teams to identify emergency calls.
+Azure Communications Gateway identifies emergency calls based on the dialing strings configured when you [deploy Azure Communications Gateway](deploy.md). These strings are also used by Microsoft Teams to identify emergency calls.
## Emergency calling in the United States
communications-gateway Emergency Calling Teams Direct Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/emergency-calling-teams-direct-routing.md
+
+ Title: Emergency calling for Microsoft Teams Direct Routing with Azure Communications Gateway
+description: Understand Azure Communications Gateway's support for emergency calling with Microsoft Teams Direct Routing
++++ Last updated : 10/09/2023+++
+# Emergency calling for Microsoft Teams Direct Routing with Azure Communications Gateway
+
+Azure Communications Gateway supports Microsoft Teams Direct Routing subscribers making emergency calls from Microsoft Teams clients. This article describes how Azure Communications Gateway routes these calls to your network and the key facts you'll need to consider.
+
+## Overview of emergency calling with Azure Communications Gateway
+
+If a subscriber uses a Microsoft Teams client to make an emergency call and the subscriber's number is associated with Azure Communications Gateway, Microsoft Phone System routes the call to Azure Communications Gateway. The call has location information encoded in a PIDF-LO (Presence Information Data Format Location Object) SIP body.
+
+Azure Communications Gateway routes emergency calls to your network with this PIDF-LO location information unaltered. It is your responsibility to:
+
+- Ensure that these emergency calls are properly routed to an appropriate Public Safety Answering Point (PSAP).
+- Configure the SIP trunks to Azure Communications Gateway in your tenant to support PIDF-LO. You typically do this when you [set up Direct Routing support](connect-teams-direct-routing.md#connect-your-tenant-to-azure-communications-gateway).
+
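For reference, a trimmed PIDF-LO body carrying a civic address might look like the following. This is an illustrative sketch using the standard PIDF-LO namespaces (RFC 4119 and RFC 5139); the entity and address values are placeholders, not output captured from Microsoft Teams.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<presence xmlns="urn:ietf:params:xml:ns:pidf"
          xmlns:gp="urn:ietf:params:xml:ns:pidf:geopriv10"
          xmlns:ca="urn:ietf:params:xml:ns:pidf:geopriv10:civicAddr"
          entity="pres:user@example.com">
  <tuple id="location">
    <status>
      <gp:geopriv>
        <gp:location-info>
          <ca:civicAddress>
            <ca:country>US</ca:country>
            <ca:A1>WA</ca:A1>
            <ca:A3>Redmond</ca:A3>
            <ca:RD>Example Way</ca:RD>
            <ca:HNO>1</ca:HNO>
          </ca:civicAddress>
        </gp:location-info>
      </gp:geopriv>
    </status>
  </tuple>
</presence>
```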
+For more information on how Microsoft Teams handles emergency calls, see [the Microsoft Teams documentation on managing emergency calling](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing) and the [considerations for Direct Routing](/microsoftteams/considerations-direct-routing).
+
+## Emergency numbers and location information
+
+Azure Communications Gateway identifies emergency calls based on the dialing strings configured when you [deploy Azure Communications Gateway](deploy.md). These strings are also used by Microsoft Teams to identify emergency calls.
+
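The classification described above can be sketched as a simple lookup. This is an assumption-laden illustration, not Azure Communications Gateway's actual matching logic: the dial strings shown are hypothetical (use your deployment's configured values), and exact matching is assumed for simplicity.

```python
# Hypothetical configured emergency dial strings; substitute the values
# configured for your Azure Communications Gateway deployment.
EMERGENCY_DIAL_STRINGS = {"911", "933"}

def is_emergency_call(dialed: str) -> bool:
    """Return True if the dialed string exactly matches a configured emergency dial string."""
    return dialed.strip() in EMERGENCY_DIAL_STRINGS
```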
+Microsoft Teams always sends location information on SIP INVITEs for emergency calls. This information can come from:
+
+- [Dynamic locations](/microsoftteams/configure-dynamic-emergency-calling), based on the location of the client used to make the call.
+ - Enterprise administrators must add physical locations associated with network connectivity into the Location Information Server (LIS) in Microsoft Teams.
+ - When Microsoft Teams clients make an emergency call, they obtain their physical location based on their network location.
+- Static locations that your customers assign.
+
+## Next steps
+
+- Learn about [the key concepts in Microsoft Teams emergency calling](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing).
+- Learn about [dynamic emergency calling in Microsoft Teams](/microsoftteams/configure-dynamic-emergency-calling).
communications-gateway Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/get-started.md
Previously updated : 09/01/2023 Last updated : 10/09/2023 #CustomerIntent: As someone setting up Azure Communications Gateway, I want to understand the steps I need to carry out to have live traffic through my deployment. # Get started with Azure Communications Gateway
-Setting up Azure Communications Gateway requires planning your deployment, deploying your Azure Communications Gateway resource, and integrating with Operator Connect or Teams Phone Mobile.
+Setting up Azure Communications Gateway requires planning your deployment, deploying your Azure Communications Gateway resource, and integrating with your chosen communications services.
This article summarizes the steps and documentation that you need.

> [!IMPORTANT]
> You must fully understand the onboarding process for your chosen communications service and any dependencies introduced by the onboarding process. For advice, ask your onboarding team.
>
-> Some steps in the deployment and integration process can require days or weeks to complete. For example, you might need to arrange Microsoft Azure Peering Service (MAPS) connectivity before you can deploy, wait for onboarding, or wait for a specific date to launch your service. We recommend that you read through any documentation from your onboarding team and the procedures in [2. Deploy Azure Communications Gateway](#2-deploy-azure-communications-gateway) and [3. Integrate with Operator Connect or Teams Phone Mobile](#3-integrate-with-operator-connect-or-teams-phone-mobile) before you start deploying.
+> Some steps in the deployment and integration process can require days or weeks to complete. For example, you might need to arrange Microsoft Azure Peering Service (MAPS) connectivity before you can deploy, wait for onboarding, or wait for a specific date to launch your service. We recommend that you read through any documentation from your onboarding team and the procedures in [Deploy Azure Communications Gateway](#deploy-azure-communications-gateway) and [Integrate with your chosen communications services](#integrate-with-your-chosen-communications-services) before you start deploying.
-## 1. Learn about and plan for Azure Communications Gateway
+## Learn about and plan for Azure Communications Gateway
Read the following articles to learn about Azure Communications Gateway.
Read the following articles to learn about Azure Communications Gateway.
- [Plan and manage costs for Azure Communications Gateway](plan-and-manage-costs.md), to learn about costs for Azure Communications Gateway.
- [Azure Communications Gateway limits, quotas and restrictions](limits.md), to learn about the limits and quotas associated with the Azure Communications Gateway
-Read the following articles to learn about Operator Connect and Teams Phone Mobile with Azure Communications Gateway.
+For Operator Connect and Teams Phone Mobile, also read:
- [Overview of interoperability of Azure Communications Gateway with Operator Connect and Teams Phone Mobile](interoperability-operator-connect.md)
- [Mobile Control Point in Azure Communications Gateway for Teams Phone Mobile](mobile-control-point.md)
- [Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway](emergency-calling-operator-connect.md)
+For Microsoft Teams Direct Routing, also read:
+
+- [Overview of interoperability of Azure Communications Gateway with Microsoft Teams Direct Routing](interoperability-teams-direct-routing.md).
+- [Emergency calling for Microsoft Teams Direct Routing with Azure Communications Gateway](emergency-calling-teams-direct-routing.md)
+ As part of your planning, ensure your network can support the connectivity and interoperability requirements in these articles.
-Read through the procedures in [2. Deploy Azure Communications Gateway](#2-deploy-azure-communications-gateway) and [3. Integrate with Operator Connect or Teams Phone Mobile](#3-integrate-with-operator-connect-or-teams-phone-mobile) and use those procedures as input into your planning for deployment, testing and going live. You need to work with an onboarding team (from Microsoft or one that you arrange yourself) during these phases, so ensure that you discuss timelines and requirements with this team.
+Read through the procedures in [Deploy Azure Communications Gateway](#deploy-azure-communications-gateway) and [Integrate with your chosen communications services](#integrate-with-your-chosen-communications-services) and use those procedures as input into your planning for deployment, testing and going live. You need to work with an onboarding team (from Microsoft or one that you arrange yourself) during these phases, so ensure that you discuss timelines and requirements with this team.
-## 2. Deploy Azure Communications Gateway
+## Deploy Azure Communications Gateway
Use the following procedures to deploy Azure Communications Gateway and connect it to your networks.
-1. [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md) describes the steps you need to take before you can start creating your Azure Communications Gateway resource. You might need to refer to some of the articles listed in [1. Learn about and plan for Azure Communications Gateway](#1-learn-about-and-plan-for-azure-communications-gateway).
+1. [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md) describes the steps you need to take before you can start creating your Azure Communications Gateway resource. You might need to refer to some of the articles listed in [Learn about and plan for Azure Communications Gateway](#learn-about-and-plan-for-azure-communications-gateway).
1. [Deploy Azure Communications Gateway](deploy.md) describes how to create your Azure Communications Gateway resource in the Azure portal and connect it to your networks.
+1. [Integrate with Azure Communications Gateway's Provisioning API](integrate-with-provisioning-api.md) describes how to integrate with the Provisioning API. Integrating with the API is:
+ - Required for Microsoft Teams Direct Routing.
+ - Optional for Operator Connect: only required to add custom headers to messages entering your core network.
+ - Not supported for Teams Phone Mobile.
-## 3. Integrate with Operator Connect or Teams Phone Mobile
+## Integrate with your chosen communications services
Use the following procedures to integrate with Operator Connect and Teams Phone Mobile.
-1. [Connect to Operator Connect or Teams Phone Mobile](connect-operator-connect.md) describes how to set up Azure Communications Gateway for Operator Connect and Teams Phone Mobile, including onboarding to the Operator Connect and Teams Phone Mobile environments.
+1. [Connect Azure Communications Gateway to Operator Connect or Teams Phone Mobile](connect-operator-connect.md) describes how to set up Azure Communications Gateway for Operator Connect and Teams Phone Mobile, including onboarding to the Operator Connect and Teams Phone Mobile environments.
1. [Prepare for live traffic with Operator Connect, Teams Phone Mobile and Azure Communications Gateway](prepare-for-live-traffic-operator-connect.md) describes how to complete the requirements of the Operator Connect and Teams Phone Mobile programs and launch your service.
+Use the following procedures to integrate with Microsoft Teams Direct Routing.
+
+1. [Connect Azure Communications Gateway to Microsoft Teams Direct Routing](connect-teams-direct-routing.md) describes how to connect Azure Communications Gateway to the Microsoft Phone System for Microsoft Teams Direct Routing.
+1. [Configure a test customer for Microsoft Teams Direct Routing](configure-test-customer-teams-direct-routing.md) describes how to configure Azure Communications Gateway and Microsoft 365 with a test customer.
+1. [Configure test numbers for Microsoft Teams Direct Routing](configure-test-numbers-teams-direct-routing.md) describes how to configure Azure Communications Gateway and Microsoft 365 with test numbers.
+1. [Prepare for live traffic with Microsoft Teams Direct Routing and Azure Communications Gateway](prepare-for-live-traffic-teams-direct-routing.md) describes how to test your deployment and launch your service.
+ ## Next steps - Learn about [your network and Azure Communications Gateway](role-in-network.md)
communications-gateway Integrate With Provisioning Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/integrate-with-provisioning-api.md
+
+ Title: Get ready to use Azure Communications Gateway's Provisioning API (preview)
+description: Learn how to integrate with the Provisioning API (preview) for Azure Communications Gateway. The Provisioning API allows you to configure customers and associated numbers.
++++ Last updated : 10/09/2023++
+# Integrate with Azure Communications Gateway's Provisioning API (preview)
+
+This article explains when you need to integrate with Azure Communications Gateway's Provisioning API (preview) and provides a high-level overview of getting started. It's aimed at software developers working for telecommunications providers.
+
+The Provisioning API allows you to configure Azure Communications Gateway with the details of your customers and the numbers that you have assigned to them. It's a REST API.
+
+Whether you need to integrate with the REST API depends on your chosen communications service.
+
+|Communications service |Provisioning API integration |Purpose |
+||||
+|Microsoft Teams Direct Routing |Required |- Configure the subdomain associated with each Direct Routing customer<br>- Generate DNS records specific to each customer (as required by the Microsoft 365 environment).<br>- Indicate that numbers are enabled for Direct Routing.<br>- (Optional) Configure a custom header for messages to your network|
+|Operator Connect|Optional|(Optional) Configure a custom header for messages to your network|
+|Teams Phone Mobile|Not supported|N/A|
+
+## Prerequisites
+
+You must have completed [Deploy Azure Communications Gateway](deploy.md).
+
+You must have access to a machine with an IP address that is permitted to access the Provisioning API (preview). This allowlist of IP addresses (or ranges) was configured as part of [deploying Azure Communications Gateway](deploy.md#collect-configuration-values-for-each-communications-service).
+
+## Learn about the API and plan your BSS client changes
+
+To integrate with the Provisioning API (preview), you need to create (or update) a BSS client that can contact it. The Provisioning API supports a machine-to-machine [OAuth 2.0](/azure/active-directory/develop/v2-protocols) client credentials authentication flow. Your client authenticates and makes authorized API calls as itself, without the interaction of users.
+
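The client credentials flow can be sketched as follows. This is a minimal, hedged illustration rather than a documented Azure Communications Gateway client: the tenant ID is a placeholder, and the client ID, secret, and scope must come from your own app registration.

```python
from urllib.parse import urlencode

# Placeholder tenant ID: substitute the tenant containing your
# Azure Communications Gateway deployment and app registration.
TENANT_ID = "00000000-0000-0000-0000-000000000000"
TOKEN_URL = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"

def build_token_request_body(client_id: str, client_secret: str, scope: str) -> bytes:
    """Build the form-encoded body for an OAuth 2.0 client credentials grant."""
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    }).encode("utf-8")
```

Your BSS client POSTs this body to the token URL with a `Content-Type: application/x-www-form-urlencoded` header; the `access_token` field of the JSON response is then supplied as an `Authorization: Bearer <token>` header on each Provisioning API request.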
+## Configure your BSS client to connect to Azure Communications Gateway
+
+The Provisioning API (preview) is available on port 443 of your Azure Communications Gateway's base domain.
+
+> [!TIP]
+> To find the base domain:
+> 1. Sign in to the Azure portal.
+> 1. Navigate to the **Overview** of your Azure Communications Gateway resource and select **Properties**.
+> 1. Find the field named **Domain**.
+
+The following steps summarize the Azure configuration you need.
+
+1. Register your BSS client in the same Azure tenant as your Azure Communications Gateway deployment. This process creates an app registration.
+1. Assign yourself as an owner for the app registration.
+1. Configure the app registration with the scopes for the API. This configuration indicates to Azure that your application is permitted to access the Provisioning API.
+1. As an administrator for the tenant, allow the application to use the app roles that you assigned.
+
+> [!NOTE]
+> For more information about the resources in the Provisioning API and the Azure configuration required, [make a support request](request-changes.md).
+
+## Next steps
+
+- [Connect to Operator Connect or Teams Phone Mobile](connect-operator-connect.md)
+- [Connect to Microsoft Teams Direct Routing](connect-teams-direct-routing.md)
communications-gateway Interoperability Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability-operator-connect.md
Title: Overview of Operator Connect and Teams Phone Mobile interoperating with Azure Communications Gateway
+ Title: Overview of Operator Connect and Teams Phone Mobile with Azure Communications Gateway
description: Understand how Azure Communications Gateway fits into your fixed and mobile networks and into the Operator Connect and Teams Phone Mobile environments
Azure Communications Gateway sits at the edge of your fixed line and mobile netw
Calls flow from endpoints in your networks through Azure Communications Gateway and the Microsoft Phone System into Microsoft Teams clients.
-### Compliance with Certified SBC specifications
+## Compliance with Certified SBC specifications
Azure Communications Gateway supports the Microsoft specifications for Certified SBCs for Operator Connect and Teams Phone Mobile. For more information about certification and these specifications, see [Session Border Controllers certified for Direct Routing](/microsoftteams/direct-routing-border-controllers) and the Operator Connect or Teams Phone Mobile documentation provided by your Microsoft representative.
-### Call control integration for Teams Phone Mobile
+## Call control integration for Teams Phone Mobile
[Teams Phone Mobile](/microsoftteams/operator-connect-mobile-plan) allows you to offer Microsoft Teams call services for calls made from the native dialer on mobile handsets, for example presence and call history. These features require anchoring the calls in Microsoft's Intelligent Conversation and Communications Cloud (IC3), part of the Microsoft Phone System.
You can arrange more interworking function as part of your initial network desig
- Interworking away from inband DTMF tones
- Placing the unique tenant ID elsewhere in SIP messages to make it easier for your network to consume, for example in `tgrp` parameters
-The Microsoft Phone System requires calling (A-) and called (B-) telephone numbers to be in E.164 format. This requirement applies to both SIP and TEL numbers. We recommend that you configure your network to use the E.164 format for all numbers. If your network can't convert numbers to the E.164 format, contact your onboarding team or raise a support request to discuss your requirements for number conversion.
[!INCLUDE [communications-gateway-multitenant](includes/communications-gateway-multitenant.md)]
For more information, see [Manage an enterprise with Azure Communications Gatewa
> [!TIP]
> The Number Management Portal does not allow your enterprise customers to manage Teams Calling. For example, it does not provide self-service portals.
-### Providing call duration data to Microsoft Teams
+## Providing call duration data to Microsoft Teams
Azure Communications Gateway can use the Operator Connect APIs to upload information about the duration of individual calls (CallDuration information) into the Microsoft Teams environment. This information allows Microsoft Teams clients to display the call duration recorded by your network, instead of the call duration recorded by Microsoft Teams. Providing this information to Microsoft Teams is a requirement of the Operator Connect program that Azure Communications Gateway performs on your behalf.
communications-gateway Interoperability Teams Direct Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability-teams-direct-routing.md
+
+ Title: Overview of Microsoft Teams Direct Routing with Azure Communications Gateway
+description: Understand how Azure Communications Gateway works with Microsoft Teams Direct Routing and your fixed network
++++ Last updated : 10/09/2023+++
+# Overview of interoperability of Azure Communications Gateway with Microsoft Teams Direct Routing
+
+Azure Communications Gateway is a certified SBC for Microsoft Teams Direct Routing, allowing telecommunications operators and service providers to provide their customers with PSTN connectivity from Microsoft Teams. Azure Communications Gateway can manipulate signaling and media to meet the requirements of your networks and the Microsoft Phone System, which powers Microsoft Teams Direct Routing.
+
+In this article, you learn:
+
+- Where Azure Communications Gateway fits in your network.
+- How Azure Communications Gateway supports many customers.
+- Which signaling and media interworking features it offers.
+
+> [!IMPORTANT]
+> You must be a telecommunications operator or service provider to use Azure Communications Gateway.
+
+## Role and position in the network
+
+Azure Communications Gateway sits at the edge of your fixed line network. It connects this network to the Microsoft Phone System, allowing you to support Microsoft Teams Direct Routing. The following diagram shows where Azure Communications Gateway sits in your network.
+
+ Architecture diagram showing Azure Communications Gateway connecting to the Microsoft Phone System and a fixed operator network over SIP and RTP. Azure Communications Gateway and the Microsoft Phone System connect multiple customers to the operator network. Azure Communications Gateway also has a provisioning API to which a BSS client in the operator's management network must connect. Azure Communications Gateway contains certified SBC function.
+
+Calls flow from endpoints in your networks through Azure Communications Gateway and the Microsoft Phone System into Microsoft Teams clients.
+
+## Compliance with Certified SBC specifications
+
+Azure Communications Gateway supports the Microsoft specifications for Certified SBCs for Microsoft Teams Direct Routing. For more information about certification and these specifications, see [Session Border Controllers certified for Direct Routing](/microsoftteams/direct-routing-border-controllers).
+
+Azure Communications Gateway includes multiple features that allow your network to meet the requirements for Direct Routing, including:
+
+- [Identifying the customer tenant for Microsoft Phone System](#identifying-the-customer-tenant-for-microsoft-phone-system)
+- [SIP interworking](#sip-signaling)
+- [Media interworking](#rtp-and-srtp-media)
+
+## Support for multiple customers with the Microsoft Teams multitenant model
+
+An Azure Communications Gateway deployment is designed to support Direct Routing for many tenants. Its design allows you to provide Microsoft Teams calling services to many customers, each with many users. It uses the carrier tenant and customer tenant model described in the [Microsoft Teams documentation on configuring a Session Border Controller for multiple tenants](/microsoftteams/direct-routing-sbc-multiple-tenants). In this model:
+
+- Your own configuration for Microsoft Teams is defined in your organization's tenant: the _carrier tenant_.
+- Each of your customers has its own _customer tenant_, representing the configuration for that customer.
+
+Your Azure Communications Gateway deployment always receives an FQDN (fully qualified domain name) when it's created. You use this FQDN as the _base domain_ for your carrier tenant.
+
+> [!TIP]
+> You can provide your own base domain to use with Azure Communications Gateway, or use the domain name that Azure automatically allocates. For more information, see [Topology hiding with domain delegation](#topology-hiding-with-domain-delegation).
+
+Azure Communications Gateway also receives two per-region subdomains of the base domain (one per region).
+
+Each of your customers needs _customer subdomains_ of these per-region domains. Azure Communications Gateway includes one of these subdomains in the Contact header of each message it sends to the Microsoft Phone System: the presence of the subdomain allows the Microsoft Phone System to identify the customer tenant for each message. For more information, see [Identifying the customer tenant for Microsoft Phone System](#identifying-the-customer-tenant-for-microsoft-phone-system).
+
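As an illustration of the domain hierarchy described above, the following sketch builds the per-region and per-customer names. All names here are examples: the base domain is allocated when the deployment is created (or delegated from your own domain), and the region label format (`r1`, `r2`) is an assumption for the example.

```python
# Illustrative domain hierarchy for an Azure Communications Gateway deployment.
# The base domain and region labels below are examples, not real names.
base_domain = "a1b2c3d4efghij5678.commsgw.azure.example.com"

# Two per-region subdomains of the base domain (one per region).
region_domains = [f"r{i}.{base_domain}" for i in (1, 2)]

def customer_domains(customer_label: str) -> list:
    """Per-region customer subdomains included in Contact headers for this customer."""
    return [f"{customer_label}.{region}" for region in region_domains]

print(customer_domains("contoso"))
```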
+For each customer, you must:
+
+- Choose a suitable subdomain. The label for the subdomain must:
+ - Contain only letters, numbers, underscores and dashes.
+ - Be up to 63 characters in length.
+ - Not contain a wildcard or multiple labels separated by `.`.
+- Configure Azure Communications Gateway with this information, as part of "account" configuration available over the Provisioning API.
+- Liaise with the customer to update their tenant with the appropriate subdomain, by following the [Microsoft Teams documentation for registering subdomain names in customer tenants](/microsoftteams/direct-routing-sbc-multiple-tenants#register-a-subdomain-name-in-a-customer-tenant).
+
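The label rules above can be checked locally before you submit the "account" configuration. This is an illustrative sketch: the function name and the idea of pre-validating are ours, not part of the Provisioning API.

```python
import re

# One DNS label: letters, digits, underscores and dashes only, 1-63
# characters, so no "." separators and no "*" wildcards.
LABEL_PATTERN = re.compile(r"^[A-Za-z0-9_-]{1,63}$")

def is_valid_customer_label(label: str) -> bool:
    """Check a proposed customer subdomain label against the documented rules."""
    return bool(LABEL_PATTERN.match(label))

print(is_valid_customer_label("contoso-east_1"))  # a single valid label
print(is_valid_customer_label("*.contoso"))       # wildcard and dot: invalid
print(is_valid_customer_label("a" * 64))          # longer than 63 characters: invalid
```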
+As part of arranging updates to customer tenants, you must create DNS records containing a verification code (provided by Microsoft 365 when the customer updates their tenant with the domain name) on a DNS server that you control. These records allow Microsoft 365 to verify that the customer tenant is authorized to use the domain name. Azure Communications Gateway provides the DNS server that you must use. You must obtain the verification code from the customer and upload it using Azure Communications Gateway's Provisioning API to generate the DNS TXT records that verify the domain.
+
+> [!TIP]
+> For a walkthrough of setting up a customer tenant and subdomain for your testing, see [Configure a test customer for Microsoft Teams Direct Routing with Azure Communications Gateway](configure-test-customer-teams-direct-routing.md). When you onboard a real customer, you'll need to follow a similar process, but you'll typically need to ask them to carry out the steps that need access to their tenant.
+
+## Support for caller ID screening
+
+Microsoft Teams Direct Routing allows a customer admin to assign any phone number to a user in their tenant, even if you haven't assigned that number to them in your network. This lack of validation presents a risk of caller ID spoofing.
+
+To prevent caller ID spoofing, Azure Communications Gateway screens all Direct Routing calls originating from Microsoft Teams. This screening ensures that customers can only place calls from numbers that you have assigned to them. However, you can disable this screening on a per-customer basis, as part of "account" configuration available over the Provisioning API.
+
+The following diagram shows the call flow for an INVITE from a number that has been assigned to a customer. In this case, Azure Communications Gateway's configuration for the number also includes custom header configuration, so Azure Communications Gateway adds a custom header containing the configured content.
+
+ Call flow diagram showing an invite from a number assigned to a customer. Azure Communications Gateway checks its internal database to determine if the calling number is assigned to a customer. The number is assigned, so Azure Communications Gateway allows the call. The number configuration on Azure Communications Gateway includes custom header contents. Azure Communications Gateway adds the header contents as an X-MS-Operator-Content header before forwarding the call to the operator network.
+
+> [!NOTE]
+> The name of the custom header must be configured as part of [deploying Azure Communications Gateway](deploy.md#collect-configuration-values-for-each-communications-service). The name is the same for all messages. In this example, the name of the custom header is `X-MS-Operator-Content`.
+
+The following diagram shows the call flow for an INVITE from a number that hasn't been assigned to a customer. Azure Communications Gateway rejects the call with a 403.
+
+ Call flow diagram showing an invite from a number not assigned to a customer. Azure Communications Gateway checks its internal database to determine if the calling number is assigned to a customer. The number isn't assigned, so Azure Communications Gateway rejects the call with 403.
+
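The screening decision in the two flows above can be modeled as a lookup against the numbers you have provisioned. This is an illustrative sketch, not Azure Communications Gateway code: the data structures are assumptions, and `X-MS-Operator-Content` is only the example header name used in this article (the real name is configured at deployment time).

```python
# Numbers assigned to customers, with optional custom header content, as
# provisioned through the Provisioning API (all data here is illustrative).
assigned_numbers = {
    "+14255550100": {"customer": "contoso", "header_content": "enterprise-42"},
}

def screen_call(calling_number: str) -> dict:
    """Model the screening decision for a Direct Routing call from Microsoft Teams."""
    record = assigned_numbers.get(calling_number)
    if record is None:
        # The number isn't assigned to any customer: reject with SIP 403.
        return {"action": "reject", "status": 403}
    extra_headers = {}
    if record.get("header_content"):
        # The custom header name is configured at deployment time;
        # "X-MS-Operator-Content" is the example name used in this article.
        extra_headers["X-MS-Operator-Content"] = record["header_content"]
    return {"action": "forward", "extra_headers": extra_headers}

print(screen_call("+14255550100"))
print(screen_call("+14255550199"))
```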
+## Identifying the customer tenant for Microsoft Phone System
+
+The Microsoft Phone System uses the domains in the Contact header of messages to identify the tenant for each message. Azure Communications Gateway automatically rewrites Contact headers on messages towards the Microsoft Phone System so that they include the appropriate per-customer domain. This process removes the need for your core network to map between numbers and per-customer domains.
+
+You must provision Azure Communications Gateway with each number assigned to a customer for Direct Routing. This provisioning uses Azure Communications Gateway's Provisioning API.
+
+The following diagram shows how Azure Communications Gateway rewrites Contact headers on messages sent from the operator network to the Microsoft Phone System with Direct Routing.
+
+ Call flow diagram showing an invite for +14255550100 sent from an operator network to Azure Communications Gateway. Azure Communications Gateway uses an internal database to find the appropriate customer subdomain for the number and updates the Contact header with the subdomain. Azure Communications Gateway then routes the invite to the Microsoft Phone System.
+
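The rewriting in the flow above amounts to a lookup from calling number to customer subdomain, followed by replacing the host part of the Contact URI. This is an illustrative model only; the mapping and domain names are example data, not a real provisioning record.

```python
# Example mapping from assigned numbers to per-customer subdomains, as
# provisioned on Azure Communications Gateway (all names are illustrative).
number_to_subdomain = {
    "+14255550100": "contoso.r1.a1b2c3d4efghij5678.commsgw.azure.example.com",
}

def rewrite_contact(calling_number: str, contact_uri: str) -> str:
    """Replace the host part of a SIP Contact URI with the customer subdomain."""
    subdomain = number_to_subdomain[calling_number]
    user_part, _, _old_host = contact_uri.partition("@")
    return f"{user_part}@{subdomain}"

# 192.0.2.10 is a documentation-range IP standing in for the core network host.
print(rewrite_contact("+14255550100", "sip:+14255550100@192.0.2.10"))
```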
+## SIP signaling
+
+Azure Communications Gateway automatically interworks calls to support requirements for Direct Routing:
+
+- Updating Contact headers to route messages correctly, as described in [Identifying the customer tenant for Microsoft Phone System](#identifying-the-customer-tenant-for-microsoft-phone-system).
+- SIP over TLS
+- X-MS-SBC header (describing the SBC function)
+- Strict rules on a= attribute lines in SDP bodies
+- Strict rules on call transfer handling
+
+These features are part of Azure Communications Gateway's [compliance with Certified SBC specifications](#compliance-with-certified-sbc-specifications) for Microsoft Teams Direct Routing.
+
+You can arrange more interworking function as part of your initial network design or at any time by raising a support request for Azure Communications Gateway. For example, you might need extra interworking configuration for:
+
+- Advanced SIP header or SDP message manipulation
+- Support for reliable provisional messages (100rel)
+- Interworking between early and late media
+- Interworking away from inband DTMF tones
+- Placing the unique tenant ID elsewhere in SIP messages to make it easier for your network to consume, for example in `tgrp` parameters
+++
+## RTP and SRTP media
+
+The Microsoft Phone System typically requires SRTP for media. Azure Communications Gateway supports both RTP and SRTP, and can interwork between them. Azure Communications Gateway offers further media manipulation features to allow your networks to interoperate with the Microsoft Phone System.
+
+### Media handling for calls
+
+You must select the codecs that you want to support when you deploy Azure Communications Gateway. If the Microsoft Phone System doesn't support these codecs, Azure Communications Gateway can perform transcoding (converting between codecs) on your behalf.
+
+Microsoft Teams Direct Routing requires core networks to support ringback tones (ringing tones) during call transfer. Core networks must also support comfort noise. If your core networks can't meet these requirements, Azure Communications Gateway can inject media into calls.
+
+### Media interworking options
+
+Azure Communications Gateway offers multiple media interworking options. For example, you might need to:
+
+- Change handling of RTCP
+- Control bandwidth allocation
+- Prioritize specific media traffic for Quality of Service
+
+For full details of the media interworking features available in Azure Communications Gateway, raise a support request.
+
+## Topology hiding with domain delegation
+
+The domain for your Azure Communications Gateway deployment is visible to customer administrators in their Microsoft 365 admin center. By default, each Azure Communications Gateway deployment receives an automatically generated domain name similar to `a1b2c3d4efghij5678.commsgw.azure.example.com`.
+
+To hide the details of your deployment, you can configure Azure Communications Gateway to use a subdomain of your own base domain. Customer administrators see subdomains of this domain in their Microsoft 365 admin center. This process uses [DNS delegation with Azure DNS](../dns/dns-domain-delegation.md). You must configure DNS delegation as part of deploying Azure Communications Gateway.
+
+## Next steps
+
+- Learn about [monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md).
+- Learn about [requesting changes to Azure Communications Gateway](request-changes.md).
communications-gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/overview.md
Title: What is Azure Communications Gateway?
-description: Azure Communications Gateway provides telecoms operators with the capabilities and network functions required to connect their network to Microsoft Teams through the Operator Connect program.
+description: Azure Communications Gateway provides telecoms operators with the capabilities and network functions required to connect their network to Microsoft Teams.
Previously updated : 09/06/2023 Last updated : 10/09/2023 # What is Azure Communications Gateway?
-Azure Communications Gateway enables Microsoft Teams calling through the Operator Connect and Teams Phone Mobile programs for your telecommunications network. Azure Communications Gateway is certified as part of the Operator Connect Accelerator program. It provides Voice and IT integration with Microsoft Teams across both fixed and mobile networks.
+Azure Communications Gateway enables Microsoft Teams calling through the Operator Connect, Teams Phone Mobile and Microsoft Teams Direct Routing programs. It provides Voice and IT integration with Microsoft Teams across both fixed and mobile networks. It's certified as part of the Operator Connect Accelerator program.
:::image type="complex" source="media/azure-communications-gateway-overview.png" alt-text="Diagram that shows Azure Communications Gateway between Microsoft Phone System and your networks. Your networks can be fixed and/or mobile.":::
    Diagram that shows how Azure Communications Gateway connects to the Microsoft Phone System and to your fixed and mobile networks. Microsoft Teams clients connect to the Microsoft Phone system. Your fixed network connects to PSTN endpoints. Your mobile network connects to Teams Phone Mobile users.
:::image-end:::
-Azure Communications Gateway provides advanced SIP, RTP and HTTP interoperability functions (including Teams Certified SBC function) so that you can integrate with Operator Connect and Teams Phone Mobile quickly, reliably and in a secure manner.
+Azure Communications Gateway provides advanced SIP, RTP and HTTP interoperability functions (including Teams Certified SBC function) so that you can integrate with Operator Connect, Teams Phone Mobile or Microsoft Teams Direct Routing quickly, reliably and in a secure manner.
As part of Microsoft Azure, the network elements in Azure Communications Gateway are fully managed and include an availability SLA. This full management simplifies network operations integration and accelerates the timeline for adding new network functions into production.
To ensure availability, Azure Communications Gateway is deployed into two Azure
For more information about the networking requirements, see [Your network and Azure Communications Gateway](role-in-network.md) and [Reliability in Azure Communications Gateway](reliability-communications-gateway.md).
-Traffic from all enterprises shares a single SIP trunk, using a multi-tenant format. This multi-tenant format ensures the solution is suitable for both the SMB and Enterprise markets.
+Traffic from all enterprises shares a single SIP trunk, using a multitenant format. This multitenant format ensures the solution is suitable for both the SMB and Enterprise markets.
> [!IMPORTANT]
> Azure Communications Gateway doesn't store/process any data outside of the Azure Regions where you deploy it.
Azure Communications Gateway's voice features include:
- **Call control integration for Teams Phone Mobile** - Azure Communications Gateway includes an optional IMS Application Server called Mobile Control Point (MCP). MCP ensures calls are only routed to the Microsoft Phone System when a user is eligible for Teams Phone Mobile services. This process minimizes the changes you need in your mobile network to route calls into Microsoft Teams. For more information, see [Mobile Control Point in Azure Communications Gateway for Teams Phone Mobile](mobile-control-point.md).
- **Optional direct peering to Emergency Routing Service Providers for Operator Connect and Teams Phone Mobile (US only)** - If your network can't transmit Emergency location information in PIDF-LO (Presence Information Data Format Location Object) SIP bodies, Azure Communications Gateway can connect directly to your chosen Teams-certified Emergency Routing Service Provider (ERSP) instead. See [Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway](emergency-calling-operator-connect.md).
-## Number Management Portal for provisioning for Operator Connect and Teams Phone Mobile
+## Provisioning and API integration for Operator Connect and Teams Phone Mobile
Launching Operator Connect or Teams Phone Mobile requires you to use the Operator Connect APIs to provision subscribers (instead of the Operator Connect Portal). Azure Communications Gateway offers a Number Management Portal integrated into the Azure portal. This portal uses the Operator Connect APIs, allowing you to pass the certification process and sell Operator Connect or Teams Phone Mobile services while you carry out a custom API integration project.
The Number Management Portal is available as part of the optional API Bridge fea
> [!TIP]
> The Number Management Portal does not allow your enterprise customers to manage Teams Calling. For example, it does not provide self-service portals.
-## API integration
+Azure Communications Gateway also automatically integrates with Operator Connect APIs to upload call duration data to Microsoft Teams. For more information, see [Providing call duration data to Microsoft Teams](interoperability-operator-connect.md#providing-call-duration-data-to-microsoft-teams).
-Azure Communications Gateway includes API integration features. These features can help you to speed up your rollout and monetization of Teams Calling support.
+## Multitenant support and caller ID screening for Direct Routing
-These features include:
+Microsoft Teams Direct Routing's multitenant model for carrier telecommunications operators requires inbound messages to Microsoft Teams to indicate the Microsoft tenant associated with your customers. Azure Communications Gateway automatically updates the SIP signaling to indicate the correct tenant, using information that you provision onto Azure Communications Gateway. This process removes the need for your core network to map between numbers and customer tenants. For more information, see [Identifying the customer tenant for Microsoft Phone System](interoperability-teams-direct-routing.md#identifying-the-customer-tenant-for-microsoft-phone-system).
-- The Number Management Portal for provisioning (part of the optional API Bridge feature), as described in [Number Management Portal for provisioning for Operator Connect and Teams Phone Mobile](#number-management-portal-for-provisioning-for-operator-connect-and-teams-phone-mobile).
-- For Operator Connect and Teams Phone Mobile programs, upload of call duration data to Microsoft Teams. For more information, see [Providing call duration data to Microsoft Teams](interoperability-operator-connect.md#providing-call-duration-data-to-microsoft-teams).
+Microsoft Teams Direct Routing allows a customer admin to assign any phone number to a user, even if you haven't assigned that number to them. This lack of validation presents a risk of caller ID spoofing. Azure Communications Gateway automatically screens all Direct Routing calls originating from Microsoft Teams. This screening ensures that customers can only place calls from numbers that you have assigned to them. However, you can disable this screening on a per-customer basis if necessary. For more information, see [Support for caller ID screening](interoperability-teams-direct-routing.md#support-for-caller-id-screening).
## Next steps
communications-gateway Plan And Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/plan-and-manage-costs.md
Previously updated : 09/06/2023 Last updated : 10/09/2023 # Plan and manage costs for Azure Communications Gateway
When you deploy Azure Communications Gateway, you're charged for how you use the
- A "Fixed Network Service Fee" or a "Mobile Network Service Fee" meter.
  - This meter is charged hourly and includes the use of 999 users for testing and early adoption.
+ - Operator Connect and Microsoft Teams Direct Routing are fixed networks.
+ - Teams Phone Mobile is a mobile network.
  - If your deployment includes fixed networks and mobile networks, you're charged the Mobile Network Service Fee.
- A series of tiered per-user meters that charge based on the number of users that are assigned to the deployment. These per-user fees are based on the maximum number of users during your billing cycle, excluding the 999 users included in the service availability fee.
For example, if you have 28,000 users assigned to the deployment each month, you
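As a sketch of the arithmetic implied above: the 999-user allowance comes from the service availability fee, so only users beyond it fall onto the per-user meters. Tier boundaries and prices depend on your agreement and aren't modeled here.

```python
INCLUDED_USERS = 999  # covered by the hourly service availability fee

def chargeable_users(max_users_in_cycle: int) -> int:
    """Users billed on the tiered per-user meters for one billing cycle."""
    return max(0, max_users_in_cycle - INCLUDED_USERS)

print(chargeable_users(28000))  # the 28,000-user example from this article
print(chargeable_users(500))    # fully covered by the included allowance
```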
If you choose to deploy the Number Management Portal by selecting the API Bridge option, you'll also be charged for the Number Management Portal. Fees work in the same way as the other meters: a service fee meter and a per-user meter. The number of users charged for the Number Management Portal is always the same as the number of users charged on the other Azure Communications Gateway meters.

> [!NOTE]
-> A user is any telephone number that meets all the following criteria.
+> A Microsoft Teams Direct Routing user is any telephone number configured with Direct Routing on Azure Communications Gateway. Billing for the user starts as soon as you have configured the number.
+>
+> An Operator Connect or Teams Phone Mobile user is any telephone number that meets all the following criteria.
>
> - You have provisioned the number within your Operator Connect or Teams Phone Mobile environment.
> - The number is configured for connectivity through Azure Communications Gateway.
communications-gateway Prepare For Live Traffic Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-for-live-traffic-operator-connect.md
Title: Prepare for live traffic with Azure Communications Gateway
+ Title: Prepare for Operator Connect or Teams Phone Mobile live traffic with Azure Communications Gateway
description: After deploying Azure Communications Gateway, you and your onboarding team must carry out further integration work before you can launch your Teams Phone Mobile or Operator Connect service.
In this article, you learn about the steps you and your onboarding team must tak
|Configuration portal |Required permissions |
|---|---|
- |[Operator Connect portal](https://operatorconnect.microsoft.com/) | `Admin` role or `PartnerSettings.Read` and `NumberManagement.Write` roles (configured on the Project Synergy enterprise application that you set up when [connected to Operator Connect or Teams Phone Mobile](connect-operator-connect.md#1-add-the-project-synergy-application-to-your-azure-tenancy))|
+ |[Operator Connect portal](https://operatorconnect.microsoft.com/) | `Admin` role or `PartnerSettings.Read` and `NumberManagement.Write` roles (configured on the Project Synergy enterprise application that you set up when [you connected to Operator Connect or Teams Phone Mobile](connect-operator-connect.md#add-the-project-synergy-application-to-your-azure-tenancy))|
|[Teams Admin Center](https://admin.teams.microsoft.com/) for your test tenant |User management|
In this article, you learn about the steps you and your onboarding team must tak
In some parts of this article, the steps you must take depend on whether your deployment includes the Number Management Portal. This article provides instructions for both types of deployment. Choose the appropriate instructions.
-## 1. Ask your onboarding team to register your test enterprise tenant
+## Ask your onboarding team to register your test enterprise tenant
Your onboarding team must register the test enterprise tenant that you chose in [Prerequisites](#prerequisites) with Microsoft Teams.
Your onboarding team must register the test enterprise tenant that you chose in
   - The ID of the tenant to use for testing.
1. Wait for your onboarding team to confirm that your test tenant has been registered.
-## 2. Assign numbers to test users in your tenant
+## Assign numbers to test users in your tenant
1. Ask your onboarding team for the name of the Calling Profile that you must use for these test numbers. The name typically has the suffix `commsgw`. This Calling Profile has been created for you during the Azure Communications Gateway deployment process.
1. In your test tenant, request service from your company.
Your onboarding team must register the test enterprise tenant that you chose in
1. Assign the number to a user.
1. Repeat for all your test users.
-## 3. Carry out integration testing and request changes
+## Carry out integration testing and request changes
Network integration includes identifying SIP interoperability requirements and configuring devices to meet these requirements. For example, this process often includes interworking header formats and/or the signaling & media flows used for call hold and session refresh.
You must test typical call flows for your network. Your onboarding team will pro
- If you decide that you need changes to Azure Communications Gateway, ask your onboarding team. Microsoft will make the changes for you.
- If you need changes to the configuration of devices in your core network, you must make those changes.
-## 4. Run a connectivity test and upload proof
+## Run a connectivity test and upload proof
Before you can launch, Microsoft Teams requires proof that your network is properly connected to Microsoft's network.
-1. Provide your onboarding team with proof that BFD is enabled. You should have enabled BFD in [8. Connect Azure Communications Gateway to your networks](deploy.md#8-connect-azure-communications-gateway-to-your-networks) when you deployed Azure Communications Gateway. For example, if you have a Cisco router, you can provide configuration similar to the following.
+1. Provide your onboarding team with proof that BFD is enabled. You should have enabled BFD when you [connected Azure Communications Gateway to your networks](deploy.md#connect-azure-communications-gateway-to-your-networks) as part of deploying. For example, if you have a Cisco router, you can provide configuration similar to the following.
```text
interface TenGigabitEthernet2/0/0.150
Before you can launch, Microsoft Teams requires proof that your network is prope
1. Test failover of the connectivity to your network. Your onboarding team will work with you to plan this testing and gather the required evidence.
1. Work with your onboarding team to validate emergency call handling.
-## 5. Get your go-to-market resources approved
+## Get your go-to-market resources approved
Before you can go live, you must get your customer-facing materials approved by Microsoft Teams. Provide the following to your onboarding team for review.
Before you can go live, you must get your customer-facing materials approved by
- Logo for the Microsoft Teams Operator Directory (200 px by 200 px)
- Logo for the Microsoft Teams Admin Center (170 px by 90 px)
-## 6. Test raising a ticket
+## Test raising a ticket
You must test that you can raise tickets in the Azure portal to report problems with Azure Communications Gateway. See [Get support or request changes for Azure Communications Gateway](request-changes.md).
-## 7. Learn about monitoring Azure Communications Gateway
+## Learn about monitoring Azure Communications Gateway
Your staff can use a selection of key metrics to monitor Azure Communications Gateway. These metrics are available to anyone with the Reader role on the subscription for Azure Communications Gateway. See [Monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md).
-## 8. Verify API integration
+## Verify API integration
Your onboarding team must provide Microsoft with proof that you have integrated with the Microsoft Teams Operator Connect API for provisioning.
If you don't have the Number Management Portal, you must provide your onboarding
-## 9. Arrange synthetic testing
+## Arrange synthetic testing
Your onboarding team must arrange synthetic testing of your deployment. This synthetic testing is a series of automated tests lasting at least seven days. It verifies the most important metrics for quality of service and availability. After launch, synthetic traffic will be sent through your deployment using your test numbers. This traffic is used to continuously check the health of your deployment.
-## 10. Schedule launch
+## Schedule launch
Your launch date is the date that you'll appear to enterprises in the Teams Admin Center. Your onboarding team must arrange this date by making a request to Microsoft Teams.
communications-gateway Prepare For Live Traffic Teams Direct Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-for-live-traffic-teams-direct-routing.md
+
+ Title: Prepare for Microsoft Teams Direct Routing live traffic with Azure Communications Gateway
+description: After deploying Azure Communications Gateway, you and your onboarding team must carry out further integration work before you can launch your Microsoft Teams Direct Routing service.
+Last updated : 10/09/2023
+# Prepare for live traffic with Microsoft Teams Direct Routing and Azure Communications Gateway
+
+Before you can launch your Microsoft Teams Direct Routing service, you and your onboarding team must:
+
+- Test your service.
+- Prepare for launch.
+
+In this article, you learn about the steps you and your onboarding team must take.
+
+> [!TIP]
+> In many cases, your onboarding team is from Microsoft, provided through the [Included Benefits](onboarding.md) or through a separate arrangement.
+
+> [!IMPORTANT]
+> Some steps can require days or weeks to complete. We recommend that you read through these steps in advance to work out a timeline.
+
+## Prerequisites
+
+You must have completed the following procedures.
+
+- [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md)
+- [Deploy Azure Communications Gateway](deploy.md)
+- [Connect Azure Communications Gateway to Microsoft Teams Direct Routing](connect-teams-direct-routing.md)
+- [Configure a test customer for Microsoft Teams Direct Routing](configure-test-customer-teams-direct-routing.md)
+- [Configure test numbers for Microsoft Teams Direct Routing](configure-test-numbers-teams-direct-routing.md)
+
+## Carry out integration testing and request changes
+
+Network integration includes identifying SIP interoperability requirements and configuring devices to meet these requirements. For example, this process often includes interworking header formats and adapting the signaling and media flows used for call hold and session refresh.
+
+You must test typical call flows for your network. Your onboarding team will provide an example test plan that we recommend you follow. Your test plan should include call flow, failover, and connectivity testing.
+
+- If you decide that you need changes to Azure Communications Gateway, ask your onboarding team. Microsoft must make the changes for you.
+- If you need changes to the configuration of devices in your core network, you must make those changes.
+
+## Test raising a ticket
+
+You must test that you can raise tickets in the Azure portal to report problems with Azure Communications Gateway. See [Get support or request changes for Azure Communications Gateway](request-changes.md).
+
+## Learn about monitoring Azure Communications Gateway
+
+Your staff can use a selection of key metrics to monitor Azure Communications Gateway. These metrics are available to anyone with the Reader role on the subscription for Azure Communications Gateway. See [Monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md).
+
+## Next steps
+
+- Learn about [getting support and requesting changes for Azure Communications Gateway](request-changes.md).
+- Learn about [monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md).
communications-gateway Prepare To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md
Previously updated : 05/05/2023 Last updated : 10/09/2023 # Prepare to deploy Azure Communications Gateway
The following sections describe the information you need to collect and the deci
## Prerequisites [!INCLUDE [communications-gateway-deployment-prerequisites](includes/communications-gateway-deployment-prerequisites.md)]
-## 1. Arrange onboarding
+## Arrange onboarding
-For Operator Connect and Teams Phone Mobile, you need an onboarding partner for integrating with Microsoft Phone System. If you're not eligible for onboarding to Microsoft Teams through Azure Communications Gateway's [Included Benefits](onboarding.md) or you haven't arranged alternative onboarding with Microsoft through a separate arrangement, you need to arrange an onboarding partner yourself.
+You need an onboarding partner to deploy Azure Communications Gateway. If you're not eligible for onboarding to Microsoft Teams through Azure Communications Gateway's [Included Benefits](onboarding.md) or you haven't arranged alternative onboarding with Microsoft through a separate arrangement, you need to arrange an onboarding partner yourself.
-## 2. Ensure you have a suitable support plan
+## Ensure you have a suitable support plan
We strongly recommend that you have a support plan that includes technical support, such as [Microsoft Unified Support](https://www.microsoft.com/en-us/unifiedsupport/overview) or [Premier Support](https://www.microsoft.com/en-us/unifiedsupport/premier).
-## 3. Choose the Azure tenant to use
+## Choose the Azure tenant to use
-The Operator Connect and Teams Phone Mobile programs require your Azure Active Directory tenant to contain a Microsoft application called Project Synergy. Operator Connect and Teams Phone Mobile inherit permissions and identities from your Azure Active Directory tenant through the Project Synergy application. The Project Synergy application also allows configuration of Operator Connect or Teams Phone Mobile and assigning users and groups to specific roles.
+We recommend that you use an existing Azure Active Directory tenant for Azure Communications Gateway, because using an existing tenant uses your existing identities for fully integrated authentication. If you need to manage identities separately from the rest of your organization, create a new dedicated tenant first.
-We recommend that you use an existing Azure Active Directory tenant for Azure Communications Gateway, because using an existing tenant uses your existing identities for fully integrated authentication. However, if you need to manage identities for Operator Connect separately from the rest of your organization, create a new dedicated tenant first.
+The Operator Connect and Teams Phone Mobile environments inherit identities and configuration permissions from your Azure Active Directory tenant through a Microsoft application called Project Synergy. You must add this application to your Azure Active Directory tenant as part of [Connect Azure Communications Gateway to Operator Connect or Teams Phone Mobile](connect-operator-connect.md) (if your tenant does not already contain this application).
-## 4. Get access to Azure Communications Gateway for your Azure subscription
+## Get access to Azure Communications Gateway for your Azure subscription
Access to Azure Communications Gateway is restricted. When you've completed the previous steps in this article, contact your onboarding team and ask them to enable your subscription. If you don't already have an onboarding team, contact azcog-enablement@microsoft.com with your Azure subscription ID and contact details. Wait for confirmation that Azure Communications Gateway is enabled before moving on to the next step.
-## 5. Create a network design
+## Create a network design
-You must use Microsoft Azure Peering Service (MAPS) or ExpressRoute Microsoft Peering to connect your on-premises network to Azure Communications Gateway.
+Connectivity between your networks and Azure Communications Gateway must meet any relevant network connectivity specifications.
[!INCLUDE [communications-gateway-maps-or-expressroute](includes/communications-gateway-maps-or-expressroute.md)] If you want to use ExpressRoute Microsoft Peering, consult with your onboarding team and ensure that it's available in your region.
-Ensure your network is set up as shown in the following diagram and has been configured in accordance with the *Network Connectivity Specification* that you've been issued. You must have two Azure Regions with cross-connect functionality. For more information on the reliability design for Azure Communications Gateway, see [Reliability in Azure Communications Gateway](reliability-communications-gateway.md).
+Ensure your network is set up as shown in the following diagram and has been configured in accordance with any network connectivity specifications that you've been issued for your chosen communications services. You must have two Azure Regions with cross-connect functionality. For more information on the reliability design for Azure Communications Gateway, see [Reliability in Azure Communications Gateway](reliability-communications-gateway.md).
:::image type="content" source="media/azure-communications-gateway-redundancy.png" alt-text="Network diagram of an Azure Communications Gateway that uses MAPS as its peering service between Azure and an operator's network.":::
+You must decide whether you want Azure Communications Gateway to have an autogenerated `*.commsgw.azure.com` domain name or a subdomain of a domain you already own, using [domain delegation with Azure DNS](../dns/dns-domain-delegation.md). Domain delegation provides topology hiding and might increase customer trust, but requires giving Microsoft full control over the subdomain that you delegate. For Microsoft Teams Direct Routing, choose domain delegation if you don't want customers to see an `*.commsgw.azure.com` domain name in their Microsoft 365 admin centers.
+For Teams Phone Mobile, you must decide how your network should determine whether a call involves a Teams Phone Mobile subscriber and therefore route the call to Microsoft Phone System. You can:

- Use Azure Communications Gateway's integrated Mobile Control Point (MCP).
For Teams Phone Mobile, you must decide how your network should determine whethe
For more information on these options, see [Call control integration for Teams Phone Mobile](interoperability-operator-connect.md#call-control-integration-for-teams-phone-mobile) and [Mobile Control Point in Azure Communications Gateway](mobile-control-point.md).
-If you plan to route emergency calls through Azure Communications Gateway, read [Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway](emergency-calling-operator-connect.md) to learn about your options.
+If you plan to route emergency calls through Azure Communications Gateway for Operator Connect or Teams Phone Mobile, read [Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway](emergency-calling-operator-connect.md) to learn about your options.
-## 6. Configure MAPS or ExpressRoute
+## Configure MAPS or ExpressRoute
-Connect your network to Azure Communications Gateway:
+Connect your network to Azure:
- To configure MAPS, follow the instructions in [Azure Internet peering for Communications Services walkthrough](../internet-peering/walkthrough-communications-services-partner.md). - To configure ExpressRoute Microsoft Peering, follow the instructions in [Tutorial: Configure peering for ExpressRoute circuit](../../articles/expressroute/expressroute-howto-routing-portal-resource-manager.md).
communications-gateway Reliability Communications Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/reliability-communications-gateway.md
- subject-reliability - references_regions Previously updated : 05/11/2023 Last updated : 10/09/2023 # Reliability in Azure Communications Gateway
communications-gateway Role In Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/role-in-network.md
Previously updated : 09/11/2023 Last updated : 10/09/2023
Azure Communications Gateway sits at the edge of your network. This position allows it to manipulate signaling and media to meet the requirements of your networks and your chosen communications services. Azure Communications Gateway includes many interoperability settings by default, and you can arrange further interoperability configuration. > [!TIP]
-> This section provides a brief overview of Azure Communications Gateway's interoperability features. For detailed information about interoperability with Operator Connect and Teams Phone Mobile, see [Interoperability of Azure Communications Gateway with Operator Connect and Teams Phone Mobile for Microsoft Teams](interoperability-operator-connect.md).
+> This section provides a brief overview of Azure Communications Gateway's interoperability features. For detailed information about interoperability with a specific communications service, see:
+> - [Interoperability of Azure Communications Gateway with Operator Connect and Teams Phone Mobile](interoperability-operator-connect.md).
+> - [Interoperability of Azure Communications Gateway with Microsoft Teams Direct Routing](interoperability-teams-direct-routing.md).
## Role and position in the network
We expect your network to have two geographically redundant sites. You must prov
* The other site in your deployment, as cross-connects. * The two Azure Regions in which you deploy Azure Communications Gateway.
-Connectivity between your networks and Azure Communications Gateway must meet the Microsoft Teams _Network Connectivity Specification_.
+Connectivity between your networks and Azure Communications Gateway must meet any relevant network connectivity specifications.
[!INCLUDE [communications-gateway-maps-or-expressroute](includes/communications-gateway-maps-or-expressroute.md)]
For full details of the media interworking features available in Azure Communica
## Next steps -- Learn about [Interoperability for Operator Connect and Teams Phone Mobile](interoperability-operator-connect.md)
+- Learn about [interoperability for Operator Connect and Teams Phone Mobile](interoperability-operator-connect.md)
+- Learn about [interoperability for Microsoft Teams Direct Routing](interoperability-teams-direct-routing.md)
- Learn about [onboarding and Inclusive Benefits](onboarding.md) - Learn about [planning an Azure Communications Gateway deployment](get-started.md)
communications-gateway Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/security.md
Previously updated : 07/18/2023 Last updated : 10/09/2023
The customer data Azure Communications Gateway handles can be split into:

- Content data, such as media for voice calls.
-- Customer data present in call metadata.
+- Customer data provisioned on Azure Communications Gateway or present in call metadata.
## Data retention, data security and encryption at rest
-Azure Communications Gateway doesn't store content data, but it does store customer data and provide statistics based on it. This data is stored for a maximum of 30 days. After this period, it's no longer accessible to perform diagnostics or analysis of individual calls. Anonymized statistics and logs produced based on customer data are available after the 30 days limit.
+Azure Communications Gateway doesn't store content data, but it does store customer data.
+
+- Customer data provisioned on Azure Communications Gateway includes configuration of numbers for specific communications services. It's needed to match numbers to these communications services and (optionally) make number-specific changes to calls, such as adding custom headers.
+- Temporary customer data from call metadata is stored for a maximum of 30 days and used to provide statistics. After 30 days, data from call metadata is no longer accessible to perform diagnostics or analysis of individual calls. Anonymized statistics and logs produced based on customer data are available after the 30 days limit.
Azure Communications Gateway doesn't support [Customer Lockbox for Microsoft Azure](../security/fundamentals/customer-lockbox-overview.md). However Microsoft engineers can only access data on a just-in-time basis, and only for diagnostic purposes.
-Azure Communications Gateway stores all data at rest securely, including any customer data that has to be temporarily stored, such as call records. It uses standard Azure infrastructure, with platform-managed encryption keys, to provide server-side encryption compliant with a range of security standards including FedRAMP. For more information, see [encryption of data at rest](../security/fundamentals/encryption-overview.md).
+Azure Communications Gateway stores all data at rest securely, including provisioned customer and number configuration and any temporary customer data, such as call records. Azure Communications Gateway uses standard Azure infrastructure, with platform-managed encryption keys, to provide server-side encryption compliant with a range of security standards including FedRAMP. For more information, see [encryption of data at rest](../security/fundamentals/encryption-overview.md).
## Encryption in transit
When encrypting traffic to send to your network, Azure Communications Gateway pr
Azure Communications Gateway uses mutual TLS for SIP, meaning that both the client and the server for the connection verify each other.
-You must manage the certificates that your network presents to Azure Communications Gateway. By default, Azure Communications Gateway supports the DigiCert Global Root G2 certificate and the Baltimore CyberTrust Root certificate as root certificate authority (CA) certificates. If the certificate that your network presents to Azure Communications Gateway uses a different root CA certificate, you must provide this certificate to your onboarding team when you [connect Azure Communications Gateway to your networks](deploy.md#8-connect-azure-communications-gateway-to-your-networks).
+You must manage the certificates that your network presents to Azure Communications Gateway. By default, Azure Communications Gateway supports the DigiCert Global Root G2 certificate and the Baltimore CyberTrust Root certificate as root certificate authority (CA) certificates. If the certificate that your network presents to Azure Communications Gateway uses a different root CA certificate, you must provide this certificate to your onboarding team when you [connect Azure Communications Gateway to your networks](deploy.md#connect-azure-communications-gateway-to-your-networks).
-We manage the certificate that Azure Communications Gateway uses to connect to your network and Microsoft Phone System. Azure Communications Gateway's certificate uses the DigiCert Global Root G2 certificate as the root CA certificate. If your network doesn't already support this certificate as a root CA certificate, you must download and install this certificate when you [connect Azure Communications Gateway to your networks](deploy.md#8-connect-azure-communications-gateway-to-your-networks).
+We manage the certificate that Azure Communications Gateway uses to connect to your network and Microsoft Phone System. Azure Communications Gateway's certificate uses the DigiCert Global Root G2 certificate as the root CA certificate. If your network doesn't already support this certificate as a root CA certificate, you must download and install this certificate when you [connect Azure Communications Gateway to your networks](deploy.md#connect-azure-communications-gateway-to-your-networks).
### Cipher suites for SIP and RTP
communications-gateway Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/whats-new.md
Title: What's new in Azure Communications Gateway?
-description: Discover what's new in Azure Communications Gateway
+description: Discover what's new in Azure Communications Gateway for Operator Connect, Teams Phone Mobile, and Microsoft Teams Direct Routing. Learn how to get started with the latest features.
Last updated 09/06/2023
This article covers new features and improvements for Azure Communications Gateway.
+## October 2023
+
+### Support for multitenant Microsoft Teams Direct Routing
+
+From October 2023, Azure Communications Gateway supports providing PSTN connectivity to Microsoft Teams through Direct Routing. You can provide Microsoft Teams calling services to many customers, each with many users, with minimal disruption to your existing network. Azure Communications Gateway automatically updates the SIP signaling to indicate the correct tenant, without needing changes to your core network to map between numbers and customer tenants.
+
+Azure Communications Gateway can screen Direct Routing calls originating from Microsoft Teams to ensure that the number is enabled for Direct Routing. This screening reduces the risk of caller ID spoofing, because it prevents customer administrators from assigning numbers that you haven't allocated to the customer.
+
+For more information about Direct Routing with Azure Communications Gateway, see [Overview of interoperability of Azure Communications Gateway with Microsoft Teams Direct Routing](interoperability-teams-direct-routing.md). For an overview of deploying and configuring Azure Communications Gateway for Direct Routing, see [Get started with Azure Communications Gateway](get-started.md).
+
## September 2023

### ExpressRoute Microsoft Peering between Azure and operator networks
connectors Connectors Native Recurrence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-recurrence.md
ms.suite: integration
Previously updated : 09/02/2022 Last updated : 10/08/2023 # Schedule and run recurring workflows with the Recurrence trigger in Azure Logic Apps
The Recurrence trigger is part of the built-in Schedule connector and runs nativ
* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* Basic knowledge about [logic app workflows](../logic-apps/logic-apps-overview.md).
+* A [Consumption or Standard logic app resource](../logic-apps/logic-apps-overview.md#resource-environment-differences) with a blank workflow.
+
+ > [!NOTE]
+ >
+ > If you created a Standard logic app workflow, make sure to create a *stateful* workflow.
+ > The Recurrence trigger is currently unavailable for stateless workflows.
<a name="add-recurrence-trigger"></a> ## Add the Recurrence trigger
-1. In the [Azure portal](https://portal.azure.com), create a blank logic app and workflow.
-
- > [!NOTE]
- >
- > If you created a Standard logic app workflow, make sure to create a *stateful* workflow.
- > The Recurrence trigger is currently unavailable for stateless workflows.
-
-1. In the designer, follow the corresponding steps, based on whether your logic app workflow is [Consumption or Standard](../logic-apps/logic-apps-overview.md#resource-environment-differences).
+Based on whether your workflow is [Consumption or Standard](../logic-apps/logic-apps-overview.md#resource-environment-differences), follow the corresponding steps:
### [Consumption](#tab/consumption)
- 1. On the designer, under the search box, select **Built-in**.
- 1. In the search box, enter **recurrence**.
- 1. From the triggers list, select the trigger named **Recurrence**.
-
- ![Screenshot for Consumption logic app workflow designer with "Recurrence" trigger selected.](./media/connectors-native-recurrence/add-recurrence-trigger-consumption.png)
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and blank workflow.
-### [Standard](#tab/standard)
-
- 1. On the designer, select **Choose operation**.
- 1. On the **Add a trigger** pane, under the search box, select **Built-in**.
- 1. In the search box, enter **recurrence**.
- 1. From the triggers list, select the trigger named **Recurrence**.
-
- ![Screenshot for Standard logic app workflow designer with "Recurrence" trigger selected.](./media/connectors-native-recurrence/add-recurrence-trigger-standard.png)
--
+1. [Follow these general steps to add the **Schedule** built-in trigger named **Recurrence**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
1. Set the interval and frequency for the recurrence. For example, set these properties to run your workflow every week:
- **Consumption**
-
- ![Screenshot for Consumption workflow designer with "Recurrence" trigger interval and frequency.](./media/connectors-native-recurrence/recurrence-trigger-details-consumption.png)
-
- **Standard**
-
- ![Screenshot for Standard workflow designer with "Recurrence" trigger interval and frequency.](./media/connectors-native-recurrence/recurrence-trigger-details-standard.png)
+ ![Screenshot for Consumption workflow designer with Recurrence trigger interval and frequency.](./media/connectors-native-recurrence/recurrence-trigger-details-consumption.png)
| Property | JSON name | Required | Type | Description | |-|--|-||-| | **Interval** | `interval` | Yes | Integer | A positive integer that describes how often the workflow runs based on the frequency. Here are the minimum and maximum intervals: <br><br>- Month: 1-16 months <br>- Week: 1-71 weeks <br>- Day: 1-500 days <br>- Hour: 1-12,000 hours <br>- Minute: 1-72,000 minutes <br>- Second: 1-9,999,999 seconds<br><br>For example, if the interval is 6, and the frequency is "Month", then the recurrence is every 6 months. |
- | **Frequency** | `frequency` | Yes | String | The unit of time for the recurrence: **Second**, **Minute**, **Hour**, **Day**, **Week**, or **Month** |
- ||||||
+ | **Frequency** | `frequency` | Yes | String | The unit of time for the recurrence: **Second**, **Minute**, **Hour**, **Day**, **Week**, or **Month** <br><br>**Important**: If you select the **Day**, **Week**, or **Month** frequency, and you specify a future start date and time, make sure that you set up the recurrence in advance. Otherwise, the workflow might skip the first recurrence. <br><br>- **Day**: Set up the daily recurrence at least 24 hours in advance. <br><br>- **Week**: Set up the weekly recurrence at least 7 days in advance. <br><br>- **Month**: Set up the monthly recurrence at least one month in advance. |
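
   The **Interval** and **Frequency** properties map to the `interval` and `frequency` values in the trigger's underlying workflow definition. As a sketch, a Recurrence trigger that runs every week looks similar to the following fragment (the trigger key name is illustrative):

   ```json
   "triggers": {
       "Recurrence": {
           "type": "Recurrence",
           "recurrence": {
               "frequency": "Week",
               "interval": 1
           }
       }
   }
   ```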
- > [!IMPORTANT]
- > If you use the **Day**, **Week**, or **Month** frequency, and you specify a future date and time,
- > make sure that you set up the recurrence in advance. Otherwise, the workflow might skip the first recurrence.
- >
- > * **Day**: Set up the daily recurrence at least 24 hours in advance.
- >
- > * **Week**: Set up the weekly recurrence at least 7 days in advance.
- >
- > * **Month**: Set up the monthly recurrence at least one month in advance.
- >
- > If a recurrence doesn't specify a specific [start date and time](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#start-time),
- > the first recurrence runs immediately when you save or deploy the logic app, despite your trigger's recurrence setup. To avoid this behavior,
- > provide a start date and time for when you want the first recurrence to run.
- >
- > If you deploy a disabled Consumption workflow that has a Recurrence trigger using an ARM template, the trigger
- > instantly fires when you enable the workflow unless you set the **Start time** parameter before deployment.
- >
- > If a recurrence doesn't specify any other advanced scheduling options such as specific times to run future recurrences,
- > those recurrences are based on the last run time. As a result, the start times for those recurrences might drift due to
- > factors such as latency during storage calls. To make sure that your logic app doesn't miss a recurrence, especially when
- > the frequency is in days or longer, try the following options:
- >
- > * Provide a start date and time for the recurrence and the specific times to run subsequent recurrences. You can use the
- > properties named **At these hours** and **At these minutes**, which are available only for the **Day** and **Week** frequencies.
- >
- > * For Consumption logic app workflows, use the [Sliding Window trigger](../connectors/connectors-native-sliding-window.md),
- > rather than the Recurrence trigger.
+1. Review the following considerations when you use the **Recurrence** trigger:
-1. To set advanced scheduling options, open the **Add new parameter** list. Any options that you select appear on the trigger after selection.
+ * If you don't specify a specific [start date and time](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#start-time), the first recurrence runs immediately when you save the workflow or deploy the logic app resource, despite your trigger's recurrence setup. To avoid this behavior, provide a start date and time for when you want the first recurrence to run.
- **Consumption**
+ * If you don't specify any other advanced scheduling options, such as specific times to run future recurrences, those recurrences are based on the last run time. As a result, the start times for those recurrences might drift due to factors such as latency during storage calls.
- ![Screenshot for Consumption workflow designer and "Recurrence" trigger with advanced scheduling options.](./media/connectors-native-recurrence/recurrence-trigger-advanced-consumption.png)
+ * To make sure that your workflow doesn't miss a recurrence, especially when the frequency is in days or longer, try the following options:
+
+ * Provide a start date and time for the recurrence and the specific times to run subsequent recurrences. You can use the properties named **At these hours** and **At these minutes**, which are available only for the **Day** and **Week** frequencies.
- **Standard**
+ * For Consumption logic app workflows, use the [Sliding Window trigger](../connectors/connectors-native-sliding-window.md), rather than the Recurrence trigger.
- ![Screenshot for Standard workflow designer and "Recurrence" trigger with advanced scheduling options.](./media/connectors-native-recurrence/recurrence-trigger-advanced-standard.png)
+ * If you deploy a disabled Consumption workflow that has a Recurrence trigger using an ARM template, the trigger instantly fires when you enable the workflow unless you set the **Start time** parameter before deployment.
+
+1. To set advanced scheduling options, open the **Add new parameter** list. Any options that you select appear on the trigger after selection.
| Property | JSON name | Required | Type | Description | |-|--|-||-|
The Recurrence trigger is part of the built-in Schedule connector and runs nativ
| **On these days** | `weekDays` | No | String or string array | If you select "Week", you can select one or more days when you want to run the workflow: **Monday**, **Tuesday**, **Wednesday**, **Thursday**, **Friday**, **Saturday**, and **Sunday** | | **At these hours** | `hours` | No | Integer or integer array | If you select "Day" or "Week", you can select one or more integers from 0 to 23 as the hours of the day for when you want to run the workflow. <br><br>For example, if you specify "10", "12" and "14", you get 10 AM, 12 PM, and 2 PM for the hours of the day, but the minutes of the day are calculated based on when the recurrence starts. To set specific minutes of the day, for example, 10:00 AM, 12:00 PM, and 2:00 PM, specify those values by using the property named **At these minutes**. | | **At these minutes** | `minutes` | No | Integer or integer array | If you select "Day" or "Week", you can select one or more integers from 0 to 59 as the minutes of the hour when you want to run the workflow. <br><br>For example, you can specify "30" as the minute mark and using the previous example for hours of the day, you get 10:30 AM, 12:30 PM, and 2:30 PM. <br><br>**Note**: Sometimes, the timestamp for the triggered run might vary up to 1 minute from the scheduled time. If you need to pass the timestamp exactly as scheduled to subsequent actions, you can use template expressions to change the timestamp accordingly. For more information, see [Date and time functions for expressions](../logic-apps/workflow-definition-language-functions-reference.md#date-time-functions). |
- |||||
+
+ ![Screenshot for Consumption workflow designer and Recurrence trigger with advanced scheduling options.](./media/connectors-native-recurrence/recurrence-trigger-advanced-consumption.png)
For example, suppose that today is Friday, September 4, 2020. The following Recurrence trigger doesn't fire *any sooner* than the specified start date and time, which is Friday, September 18, 2020 at 8:00 AM Pacific Time. However, the recurrence schedule is set for 10:30 AM, 12:30 PM, and 2:30 PM on Mondays only. The first time that the trigger fires and creates a workflow instance is on Monday at 10:30 AM. To learn more about how start times work, see these [start time examples](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#start-time).

Future runs happen at 12:30 PM and 2:30 PM on the same day. Each recurrence creates its own workflow instance. After that, the entire schedule repeats all over again next Monday. [*What are some other example occurrences?*](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#example-recurrences)
+ ![Screenshot showing Consumption workflow and Recurrence trigger with advanced scheduling example.](./media/connectors-native-recurrence/recurrence-trigger-advanced-example-consumption.png)
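The start-time behavior in the example above can be sketched in code. This Python model is an illustration only (not part of Azure Logic Apps): it assumes the trigger never fires before the start time, and then fires at every listed hour/minute combination on each matching weekday.

```python
from datetime import datetime, timedelta

def weekly_occurrences(start, week_days, hours, minutes, count):
    """Return the first `count` fire times: never before `start`, then at
    every hour/minute combination on each matching weekday."""
    fires = []
    day = start.replace(hour=0, minute=0, second=0, microsecond=0)
    while len(fires) < count:
        if day.strftime("%A") in week_days:
            for h in sorted(hours):
                for m in sorted(minutes):
                    t = day.replace(hour=h, minute=m)
                    if t >= start and len(fires) < count:
                        fires.append(t)
        day += timedelta(days=1)
    return fires

# Friday, September 18, 2020 at 8:00 AM; Mondays at 10:30 AM, 12:30 PM, 2:30 PM
start = datetime(2020, 9, 18, 8, 0)
runs = weekly_occurrences(start, {"Monday"}, hours=[10, 12, 14], minutes=[30], count=3)
print([r.strftime("%a %b %d %H:%M") for r in runs])
# → ['Mon Sep 21 10:30', 'Mon Sep 21 12:30', 'Mon Sep 21 14:30']
```

As described above, the first instance lands on Monday, September 21 at 10:30 AM, with the remaining runs later the same day.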
+ > [!NOTE]
+ >
- > The trigger shows a preview for your specified recurrence only when you select "Day" or "Week" as the frequency.
+ > The trigger shows a preview for your specified recurrence only when you select **Day** or **Week** as the frequency.
- **Consumption**
+1. Now continue building your workflow with other actions.
- ![Screenshot showing Consumption workflow and "Recurrence" trigger with advanced scheduling example.](./media/connectors-native-recurrence/recurrence-trigger-advanced-example-consumption.png)
+### [Standard](#tab/standard)
- **Standard**
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and blank workflow.
- ![Screenshot showing Standard workflow and "Recurrence" trigger with advanced scheduling example.](./media/connectors-native-recurrence/recurrence-trigger-advanced-example-standard.png)
+1. [Follow these general steps to add the **Schedule** built-in trigger named **Recurrence**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
+
+1. Set the interval and frequency for the recurrence. In this example, set these properties to run your workflow every week, for example:
+
+ ![Screenshot for Standard workflow designer with Recurrence trigger interval and frequency.](./media/connectors-native-recurrence/recurrence-trigger-details-standard.png)
+
+ | Property | JSON name | Required | Type | Description |
+ |-|--|-|-|-|
+ | **Interval** | `interval` | Yes | Integer | A positive integer that describes how often the workflow runs based on the frequency. Here are the minimum and maximum intervals: <br><br>- Month: 1-16 months <br>- Week: 1-71 weeks <br>- Day: 1-500 days <br>- Hour: 1-12,000 hours <br>- Minute: 1-72,000 minutes <br>- Second: 1-9,999,999 seconds<br><br>For example, if the interval is 6, and the frequency is "Month", then the recurrence is every 6 months. |
+ | **Frequency** | `frequency` | Yes | String | The unit of time for the recurrence: **Second**, **Minute**, **Hour**, **Day**, **Week**, or **Month** <br><br>**Important**: If you select the **Day**, **Week**, or **Month** frequency, and you specify a future start date and time, make sure that you set up the recurrence in advance. Otherwise, the workflow might skip the first recurrence. <br><br>- **Day**: Set up the daily recurrence at least 24 hours in advance. <br><br>- **Week**: Set up the weekly recurrence at least 7 days in advance. <br><br>- **Month**: Set up the monthly recurrence at least one month in advance. |
+ | **Time Zone** | `timeZone` | No | String | Applies only when you specify a start time because this trigger doesn't accept [UTC offset](https://en.wikipedia.org/wiki/UTC_offset). Select the time zone that you want to apply. |
+ | **Start Time** | `startTime` | No | String | Provide a start date and time, which has a maximum of 49 years in the future and must follow the [ISO 8601 date time specification](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations) in [UTC date time format](https://en.wikipedia.org/wiki/Coordinated_Universal_Time), but without a [UTC offset](https://en.wikipedia.org/wiki/UTC_offset): <br><br>YYYY-MM-DDThh:mm:ss if you select a time zone <br><br>-or- <br><br>YYYY-MM-DDThh:mm:ssZ if you don't select a time zone <br><br>So for example, if you want September 18, 2020 at 2:00 PM, then specify "2020-09-18T14:00:00" and select a time zone such as Pacific Standard Time. Or, specify "2020-09-18T14:00:00Z" without a time zone. <br><br>**Important:** If you don't select a time zone, you must add the letter "Z" at the end without any spaces. This "Z" refers to the equivalent [nautical time](https://en.wikipedia.org/wiki/Nautical_time). If you select a time zone value, you don't need to add a "Z" to the end of your **Start time** value. If you do, Logic Apps ignores the time zone value because the "Z" signifies a UTC time format. <br><br>For simple schedules, the start time is the first occurrence, while for complex schedules, the trigger doesn't fire any sooner than the start time. [*What are the ways that I can use the start date and time?*](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#start-time) |
+ | **On These Days** | `weekDays` | No | String or string array | If you select "Week", you can select one or more days when you want to run the workflow: **Monday**, **Tuesday**, **Wednesday**, **Thursday**, **Friday**, **Saturday**, and **Sunday** |
+ | **At These Hours** | `hours` | No | Integer or integer array | If you select "Day" or "Week", you can select one or more integers from 0 to 23 as the hours of the day for when you want to run the workflow. <br><br>For example, if you specify "10", "12" and "14", you get 10 AM, 12 PM, and 2 PM for the hours of the day, but the minutes of the day are calculated based on when the recurrence starts. To set specific minutes of the day, for example, 10:00 AM, 12:00 PM, and 2:00 PM, specify those values by using the property named **At these minutes**. |
+ | **At These Minutes** | `minutes` | No | Integer or integer array | If you select "Day" or "Week", you can select one or more integers from 0 to 59 as the minutes of the hour when you want to run the workflow. <br><br>For example, you can specify "30" as the minute mark and using the previous example for hours of the day, you get 10:30 AM, 12:30 PM, and 2:30 PM. <br><br>**Note**: Sometimes, the timestamp for the triggered run might vary up to 1 minute from the scheduled time. If you need to pass the timestamp exactly as scheduled to subsequent actions, you can use template expressions to change the timestamp accordingly. For more information, see [Date and time functions for expressions](../logic-apps/workflow-definition-language-functions-reference.md#date-time-functions). |
+
+ ![Screenshot for Standard workflow designer and Recurrence trigger with advanced scheduling options.](./media/connectors-native-recurrence/recurrence-trigger-advanced-standard.png)
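The interval limits from the table above can be checked programmatically. This validator is a hypothetical illustration, not part of any Azure SDK:

```python
# Minimum and maximum interval per frequency, from the property table above.
INTERVAL_LIMITS = {
    "Month": (1, 16),
    "Week": (1, 71),
    "Day": (1, 500),
    "Hour": (1, 12_000),
    "Minute": (1, 72_000),
    "Second": (1, 9_999_999),
}

def valid_recurrence(frequency, interval):
    """Return True when the interval falls inside the allowed range."""
    low, high = INTERVAL_LIMITS[frequency]
    return low <= interval <= high

print(valid_recurrence("Month", 6), valid_recurrence("Day", 501))
# → True False
```

For example, an interval of 6 with the "Month" frequency (every 6 months) is valid, while 501 days exceeds the Day limit.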
+
+ > [!NOTE]
+ >
+ > The trigger shows a preview for your specified recurrence only when you select **Day** or **Week** as the frequency.
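The two **Start Time** formats described in the table (with and without a selected time zone) can be produced with standard date formatting. A small sketch, for illustration only:

```python
from datetime import datetime

start = datetime(2020, 9, 18, 14, 0)  # September 18, 2020 at 2:00 PM

# If you also select a time zone in the trigger, omit the trailing "Z"...
with_time_zone = start.strftime("%Y-%m-%dT%H:%M:%S")

# ...otherwise append "Z" (no spaces) to mark the value as UTC.
without_time_zone = start.strftime("%Y-%m-%dT%H:%M:%SZ")

print(with_time_zone, without_time_zone)
# → 2020-09-18T14:00:00 2020-09-18T14:00:00Z
```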
+
+1. Review the following considerations when you use the **Recurrence** trigger:
+
+ * If you don't specify a specific [start date and time](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#start-time), the first recurrence runs immediately when you save the workflow or deploy the logic app resource, despite your trigger's recurrence setup. To avoid this behavior, provide a start date and time for when you want the first recurrence to run.
+
+ * If you don't specify any other advanced scheduling options, such as specific times to run future recurrences, those recurrences are based on the last run time. As a result, the start times for those recurrences might drift due to factors such as latency during storage calls.
+
+ * To make sure that your workflow doesn't miss a recurrence, especially when the frequency is in days or longer, try providing a start date and time for the recurrence and the specific times to run subsequent recurrences. You can use the properties named **At These Hours** and **At These Minutes**, which are available only for the **Day** and **Week** frequencies.
+
+ For example, suppose that today is Friday, September 4, 2020. The following Recurrence trigger doesn't fire *any sooner* than the specified start date and time, which is Friday, September 18, 2020 at 8:00 AM Pacific Time. However, the recurrence schedule is set for 10:30 AM, 12:30 PM, and 2:30 PM on Mondays only. The first time that the trigger fires and creates a workflow instance is on Monday at 10:30 AM. To learn more about how start times work, see these [start time examples](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#start-time).
+
+ Future runs happen at 12:30 PM and 2:30 PM on the same day. Each recurrence creates its own workflow instance. After that, the entire schedule repeats all over again next Monday. [*What are some other example occurrences?*](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#example-recurrences)
+
+ ![Screenshot showing Standard workflow and Recurrence trigger with advanced scheduling example.](./media/connectors-native-recurrence/recurrence-trigger-advanced-example-standard.png)
1. Now continue building your workflow with other actions.

+
+
## Workflow definition - Recurrence

You can view how the [Recurrence trigger definition](../logic-apps/logic-apps-workflow-actions-triggers.md#recurrence-trigger) appears with your chosen options by reviewing the underlying JSON definition for your workflow in Consumption logic apps and Standard logic apps (stateful only).
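The JSON names from the property tables (`frequency`, `interval`, `startTime`, `timeZone`, and the `schedule` values `weekDays`, `hours`, `minutes`) map onto the trigger's underlying definition. The following sketch builds that shape in Python for illustration, using the values from the earlier weekly example; treat it as a sketch of the structure rather than a complete workflow definition:

```python
import json

# Illustrative Recurrence trigger shape using the JSON names from the tables.
trigger = {
    "Recurrence": {
        "type": "Recurrence",
        "recurrence": {
            "frequency": "Week",
            "interval": 1,
            "startTime": "2020-09-18T08:00:00",
            "timeZone": "Pacific Standard Time",
            "schedule": {
                "weekDays": ["Monday"],
                "hours": [10, 12, 14],
                "minutes": [30],
            },
        },
    }
}
print(json.dumps(trigger, indent=2))
```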
Otherwise, if you don't select a time zone, daylight saving time (DST) events mi
* [Pause workflows with delay actions](../connectors/connectors-native-delay.md)
* [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
-* [Built-in connectors for Azure Logic Apps](built-in.md)
+* [Built-in connectors for Azure Logic Apps](built-in.md)
container-registry Container Registry Firewall Access Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-firewall-access-rules.md
To pull or push images or other artifacts to an Azure container registry, a clie
* **Registry REST API endpoint** - Authentication and registry management operations are handled through the registry's public REST API endpoint. This endpoint is the login server name of the registry. Example: `myregistry.azurecr.io`
-* **Registry REST API endpoint for certificates** - Azure container registry uses a wildcard SSL certificate for all subdomains. When connecting to the Azure container registry using SSL, the client must be able to download the certificate for the TLS handshake. In such cases, `azurecr.io` must also be accessible.
+ * **Registry REST API endpoint for certificates** - Azure container registry uses a wildcard SSL certificate for all subdomains. When connecting to the Azure container registry using SSL, the client must be able to download the certificate for the TLS handshake. In such cases, `azurecr.io` must also be accessible.
* **Storage (data) endpoint** - Azure [allocates blob storage](container-registry-storage.md) in Azure Storage accounts on behalf of each registry to manage the data for container images and other artifacts. When a client accesses image layers in an Azure container registry, it makes requests using a storage account endpoint provided by the registry.
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
To verify the container image, add the root certificate that signs the leaf cert
## Next steps
-See [Ratify on Azure: Allow only signed images to be deployed on AKS with Notation and Ratify](https://github.com/deislabs/ratify/blob/main/docs/quickstarts/ratify-on-azure.md).
+See [Use Image Integrity to validate signed images before deploying them to your Azure Kubernetes Service (AKS) clusters (Preview)](/azure/aks/image-integrity?tabs=azure-cli) and [Ratify on Azure](https://ratify.dev/docs/1.0/quickstarts/ratify-on-azure/) to get started with verifying and auditing signed images before deploying them to AKS.
-[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
+[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
cost-management-billing Create Free Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-free-services.md
Previously updated : 12/07/2022
Last updated : 10/02/2022
During the first 30 days after you've created an Azure free account, you have $2
If you don't use all of your credit by the end of the first 30 days, it's lost. After the first 30 days and up to 12 months after sign-up, you can only use a limited quantity of *some services*—not all Azure services are free. If you upgrade before 30 days and have remaining credit, you can use the rest of your credit with a pay-as-you-go subscription for the remaining days. For example, if you sign up for the free account on November 1 and upgrade on November 5, you have until November 30 to use your credit in the new pay-as-you-go subscription.
-Your Azure free account includes a *specified quantity* of free services for 12 months and a set of services that are always free. Only some tiers of services are available for free within certain quantities. For example, Azure has many virtual machines intended for different needs. The free account only includes access to one type of VM for free—the B1S Burstable B series that's usable for up to 750 hours per month. By staying in the free account limits, you can use the free services in various configurations. For more information about the Azure free account and the products that are available for free, see [Azure free account FAQ](https://azure.microsoft.com/free/free-account-faq/).
+Your Azure free account includes a *specified quantity* of free services for 12 months and a set of services that are always free. Only some tiers of services are available for free within certain quantities. For example, Azure has many virtual machines intended for different needs. The free account includes access to three types of VMs for free—the B1S, B2pts v2 (ARM-based), and B2ats v2 (AMD-based) burstable VMs that are usable for up to 750 hours per month. By staying in the free account limits, you can use the free services in various configurations. For more information about the Azure free account and the products that are available for free, see [Azure free account FAQ](https://azure.microsoft.com/free/free-account-faq/).
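To put the 750-hour allowance in perspective, a quick check (my illustration, not from the article) shows it covers one always-on VM even in a 31-day month:

```python
# Even the longest month has fewer hours than the free allowance.
hours_in_longest_month = 31 * 24
free_hours_per_month = 750
print(hours_in_longest_month, free_hours_per_month >= hours_in_longest_month)
# → 744 True
```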
## Create free services in the Azure portal
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-apis-sdks.md
You can also exercise the control plane APIs by interacting with Azure Digital T
You can also exercise the data plane APIs by interacting with Azure Digital Twins through the [CLI](/cli/azure/dt).
-## Usage notes
+## Usage and authentication notes
This section contains more detailed information about using the APIs and SDKs.
-Here's some general information:
-* The underlying SDK is `Azure.Core`. See the [Azure namespace documentation](/dotnet/api/azure?view=azure-dotnet&preserve-view=true) for reference on the SDK infrastructure and types.
+### API notes
+
+Here's some general information for calling the Azure Digital Twins APIs directly.
* You can use an HTTP REST-testing tool like Postman to make direct calls to the Azure Digital Twins APIs. For more information about this process, see [Call the Azure Digital Twins APIs with Postman](how-to-use-postman-with-digital-twins.md).
* Azure Digital Twins doesn't currently support Cross-Origin Resource Sharing (CORS). For more info about the impact and resolution strategies, see [Cross-Origin Resource Sharing (CORS) for Azure Digital Twins](concepts-security.md#cross-origin-resource-sharing-cors).
-Here are some details about authentication:
-* To use the SDK, instantiate the `DigitalTwinsClient` class. The constructor requires credentials that can be obtained with different kinds of authentication methods in the `Azure.Identity` package. For more on `Azure.Identity`, see its [namespace documentation](/dotnet/api/azure.identity?view=azure-dotnet&preserve-view=true).
-* You may find the `InteractiveBrowserCredential` useful while getting started, but there are several other options, including credentials for [managed identity](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true), which you'll likely use to authenticate [Azure functions set up with MSI](../app-service/overview-managed-identity.md?tabs=dotnet) against Azure Digital Twins. For more about `InteractiveBrowserCredential`, see its [class documentation](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true).
+Here's some more information about authentication for API requests.
+* One way to generate a bearer token for Azure Digital Twins API requests is with the [az account get-access-token](/cli/azure/account#az-account-get-access-token()) CLI command. For detailed instructions, see [Get bearer token](how-to-use-postman-with-digital-twins.md#get-bearer-token).
* Requests to the Azure Digital Twins APIs require a user or service principal that is a part of the same [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) tenant where the Azure Digital Twins instance exists. To prevent malicious scanning of Azure Digital Twins endpoints, requests with access tokens from outside the originating tenant receive a "404 Sub-Domain not found" error message. This error is returned even if the user or service principal was given an Azure Digital Twins Data Owner or Azure Digital Twins Data Reader role through [Azure AD B2B](../active-directory/external-identities/what-is-b2b.md) collaboration. For information on how to achieve access across multiple tenants, see [Write app authentication code](how-to-authenticate-client.md#authenticate-across-tenants).
-Here are some details about functions and returned data:
+### SDK notes
+
+The underlying SDK for Azure Digital Twins is `Azure.Core`. See the [Azure namespace documentation](/dotnet/api/azure) for reference on the SDK infrastructure and types.
+
+Here's some more information about authentication with the SDKs.
+* Start by instantiating the `DigitalTwinsClient` class. The constructor requires credentials that can be obtained with different kinds of authentication methods in the `Azure.Identity` package. For more on `Azure.Identity`, see its [namespace documentation](/dotnet/api/azure.identity).
+* You may find the `InteractiveBrowserCredential` useful while getting started, but there are several other options, including credentials for [managed identity](/dotnet/api/azure.identity.interactivebrowsercredential), which you'll likely use to authenticate [Azure functions set up with MSI](../app-service/overview-managed-identity.md) against Azure Digital Twins. For more about `InteractiveBrowserCredential`, see its [class documentation](/dotnet/api/azure.identity.interactivebrowsercredential).
+
+Here's some more information about functions and returned data.
* All service API calls are exposed as member functions on the `DigitalTwinsClient` class.
* All service functions exist in synchronous and asynchronous versions.
-* All service functions throw an exception for any return status of 400 or above. Make sure you wrap calls into a `try` section, and catch at least `RequestFailedExceptions`. For more about this type of exception, see its [reference documentation](/dotnet/api/azure.requestfailedexception?view=azure-dotnet&preserve-view=true).
-* Most service methods return `Response<T>` or (`Task<Response<T>>` for the asynchronous calls), where `T` is the class of return object for the service call. The [Response](/dotnet/api/azure.response-1?view=azure-dotnet&preserve-view=true) class encapsulates the service return and presents return values in its `Value` field.
-* Service methods with paged results return `Pageable<T>` or `AsyncPageable<T>` as results. For more about the `Pageable<T>` class, see its [reference documentation](/dotnet/api/azure.pageable-1?view=azure-dotnet&preserve-view=true); for more about `AsyncPageable<T>`, see its [reference documentation](/dotnet/api/azure.asyncpageable-1?view=azure-dotnet&preserve-view=true).
+* All service functions throw an exception for any return status of 400 or above. Make sure you wrap calls into a `try` section, and catch at least `RequestFailedExceptions`. For more about this type of exception, see its [reference documentation](/dotnet/api/azure.requestfailedexception).
+* Most service methods return `Response<T>` or (`Task<Response<T>>` for the asynchronous calls), where `T` is the class of return object for the service call. The [Response](/dotnet/api/azure.response-1) class encapsulates the service return and presents return values in its `Value` field.
+* Service methods with paged results return `Pageable<T>` or `AsyncPageable<T>` as results. For more about the `Pageable<T>` class, see its [reference documentation](/dotnet/api/azure.pageable-1); for more about `AsyncPageable<T>`, see its [reference documentation](/dotnet/api/azure.asyncpageable-1).
* You can iterate over paged results using an `await foreach` loop. For more about this process, see [Iterating with Async Enumerables in C# 8](/archive/msdn-magazine/2019/november/csharp-iterating-with-async-enumerables-in-csharp-8).
* Service methods return strongly typed objects wherever possible. However, because Azure Digital Twins is based on models custom-configured by the user at runtime (via DTDL models uploaded to the service), many service APIs take and return twin data in JSON format.
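The paged-result pattern behind `Pageable<T>` and `AsyncPageable<T>` can be illustrated with a plain generator. This Python sketch is a stand-in for the pattern only, not the actual Azure SDK types: a page-fetching function returns items plus a continuation token, and the generator flattens the pages so callers can iterate items directly.

```python
def paged(fetch_page):
    """Flatten a paged API: fetch_page(token) -> (items, next_token)."""
    token = None
    while True:
        items, token = fetch_page(token)
        for item in items:
            yield item
        if token is None:
            return

# Hypothetical three-page result set standing in for a service response.
pages = {None: ([1, 2], "p2"), "p2": ([3], "p3"), "p3": ([4, 5], None)}
print(list(paged(lambda t: pages[t])))
# → [1, 2, 3, 4, 5]
```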
The built-in role that provides all of these permissions is *Azure Digital Twins
>[!NOTE]
> If you attempt a Jobs API call and you're missing write permissions to one of the graph element types you're trying to import, the job will skip that type and import the others. For example, if you have write access to models and twins, but not relationships, an attempt to bulk import all three types of element will only succeed in importing the models and twins. The job status will reflect a failure and the message will indicate which permissions are missing.
-Lastly, you'll need to grant the following **RBAC permissions** to the system-assigned managed identity of your Azure Digital Twins instance so that it can access input and output files in the Azure Blob Storage container:
+You'll also need to grant the following **RBAC permissions** to the system-assigned managed identity of your Azure Digital Twins instance so that it can access input and output files in the Azure Blob Storage container:
* [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) for the Azure Storage input blob container * [Storage Blob Data Contributor](../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) for the Azure Storage output blob container
+Finally, generate a bearer token that can be used in your requests to the Jobs API. For instructions, see [Get bearer token](how-to-use-postman-with-digital-twins.md#get-bearer-token).
+
### Format data

The API accepts graph information input from an *NDJSON* file, which must be uploaded to an [Azure blob storage](../storage/blobs/storage-blobs-introduction.md) container. The file starts with a `Header` section, followed by the optional sections `Models`, `Twins`, and `Relationships`. You don't have to include all three types of graph data in the file, but any sections that are present must follow that order. Twins and relationships created with this API can optionally include initialization of their properties.
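As a sketch of that layout, the snippet below assembles a minimal NDJSON payload: one JSON object per line, a `Header` section first, and then an optional section with its records. The section-marker shape, file-version value, model ID, and twin are illustrative assumptions here; consult the format reference for the exact contents each section expects.

```python
import json

# Illustrative records only: a Header section first, then one optional
# section. The twin ID and model ID below are hypothetical placeholders.
records = [
    {"Section": "Header"},
    {"fileVersion": "1.0.0", "author": "contoso", "organization": "contoso"},
    {"Section": "Twins"},
    {"$dtId": "thermostat-1",
     "$metadata": {"$model": "dtmi:example:Thermostat;1"},
     "temperature": 21.5},
]
ndjson = "\n".join(json.dumps(r) for r in records)
print(ndjson)
```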
As the import job executes, a structured output log is generated by the service
When the job is complete, you can see the total number of ingested entities using the [BulkOperationEntityCount metric](how-to-monitor.md#bulk-operation-metrics-from-the-jobs-api).
-It's also possible to cancel a running import job with the [Cancel operation](/rest/api/digital-twins/dataplane/jobs/import-jobs-cancel?tabs=HTTP) from the Jobs API. Once the job has been canceled and is no longer running, you can delete it.
+It's also possible to cancel a running import job with the [Cancel operation](/rest/api/digital-twins/dataplane/jobs/import-jobs-cancel) from the Jobs API. Once the job has been canceled and is no longer running, you can delete it.
### Limits and considerations

Keep the following considerations in mind while working with the Jobs API:

* Currently, the Jobs API only supports "create" operations.
-* Import Jobs are not atomic operations. There is no rollback in the case of failure, partial job completion, or usage of the [Cancel operation](/rest/api/digital-twins/dataplane/jobs/import-jobs-cancel?tabs=HTTP).
+* Import Jobs are not atomic operations. There is no rollback in the case of failure, partial job completion, or usage of the [Cancel operation](/rest/api/digital-twins/dataplane/jobs/import-jobs-cancel).
* Only one bulk import job is supported at a time within an Azure Digital Twins instance. You can view this information and other numerical limits of the Jobs API in [Azure Digital Twins limits](reference-service-limits.md).

## Monitor API metrics
digital-twins How To Use Postman With Digital Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-postman-with-digital-twins.md
Otherwise, you can open an [Azure Cloud Shell](https://shell.azure.com) window i
az login
```
-2. Next, use the [az account get-access-token](/cli/azure/account#az-account-get-access-token) command to get a bearer token with access to the Azure Digital Twins service. In this command, you'll pass in the resource ID for the Azure Digital Twins service endpoint, in order to get an access token that can access Azure Digital Twins resources.
+2. Next, use the [az account get-access-token](/cli/azure/account#az-account-get-access-token()) command to get a bearer token with access to the Azure Digital Twins service. In this command, you'll pass in the resource ID for the Azure Digital Twins service endpoint, in order to get an access token that can access Azure Digital Twins resources.
The required context for the token depends on which set of APIs you're using, so use the following tabs to select between [data plane](concepts-apis-sdks.md#data-plane-apis) and [control plane](concepts-apis-sdks.md#control-plane-apis) APIs.
education-hub Enroll Renew Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/azure-dev-tools-teaching/enroll-renew-subscription.md
This article describes the process for enrolling in Azure Dev Tools for Teaching
## Renew an existing subscription

Your subscription doesn't renew automatically. To see if it's time to renew, go to the
-[Azure Dev Tools for Teaching Management portal](https://portal.azureforeducation.microsoft.com/)
+[Azure Dev Tools for Teaching Management portal](https://azureforeducation.microsoft.com/Order)
and look under **Subscriptions**.
-Sixty days before your membership expires, you'll receive email reminders to renew your subscription. In a renewal email, you can select the [renewal link](https://portal.azureforeducation.microsoft.com/).
+Sixty days before your membership expires, you'll receive email reminders to renew your subscription. In a renewal email, you can select the [renewal link](https://azureforeducation.microsoft.com/Order).
You can complete the renewal process as early as 90 days before the expiration date:
-1. Navigate to the [Azure Dev Tools for Teaching Management portal](https://portal.azureforeducation.microsoft.com/).
+1. Navigate to the [Azure Dev Tools for Teaching Management portal](https://azureforeducation.microsoft.com/Order).
1. Select **Enroll or Renew** on the Azure Dev Tools for Teaching banner.
energy-data-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md
This page is updated with the details about the upcoming release approximately a
<hr width = 100%>
+## September 2023
+
### Azure Data Manager for Energy available in Brazil South
+Azure Data Manager for Energy is now available in the Brazil South region. Both the developer tier and standard tier are available in Brazil South. You can now select Brazil South as your preferred region when creating an Azure Data Manager for Energy resource, using the [Azure portal](https://ms.portal.azure.com/#create/Microsoft.AzureDataManagerforEnergy).
+
+
## August 2023

### General Availability Fixed Pricing for Azure Data Manager for Energy
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md
ExpressRoute Traffic Collector uses a sampling rate of 1:4096, which means 1 out
### How many flows can ExpressRoute Traffic Collector handle?
-ExpressRoute Traffic Collector can handle up to 30,000 flows a minute. In the event this limit is reached, excess flows are dropped. For more information, see [count of flows metric](expressroute-monitoring-metrics-alerts.md#count-of-flow-records-processedsplit-by-instances-or-expressroute-circuit) on a circuit.
+ExpressRoute Traffic Collector can handle up to 300,000 flows a minute. In the event this limit is reached, excess flows are dropped. For more information, see [count of flows metric](expressroute-monitoring-metrics-alerts.md#count-of-flow-records-processedsplit-by-instances-or-expressroute-circuit) on a circuit.
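The 300,000-flows-per-minute ceiling behaves as a hard cap on ingestion, with excess flows dropped. A tiny illustrative model (not Azure code):

```python
MAX_FLOWS_PER_MINUTE = 300_000  # flow records beyond this limit are dropped

def ingested_flows(flow_records_per_minute):
    """Flow records the collector actually processes in one minute."""
    return min(flow_records_per_minute, MAX_FLOWS_PER_MINUTE)

print(ingested_flows(250_000), ingested_flows(450_000))
# → 250000 300000
```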
### Does ExpressRoute Traffic Collector support Virtual WAN?

Yes, you can use ExpressRoute Traffic Collector with ExpressRoute Direct circuits used in a Virtual WAN deployment. However, deploying ExpressRoute Traffic Collector within a Virtual WAN hub isn't supported. You can deploy ExpressRoute Traffic Collector in a spoke virtual network and ingest flow logs to a Log Analytics workspace.
+### Does ExpressRoute Traffic Collector support ExpressRoute provider ports?
+
+For supported ExpressRoute provider ports, contact ErTCasks@microsoft.com.
+
### What is the effect of maintenance on flow logging?

You should experience minimal to no disruption during maintenance on your ExpressRoute Traffic Collector. ExpressRoute Traffic Collector has multiple instances on different update domains; during an upgrade, instances are taken offline one at a time. While you may experience lower ingestion of sampled flows into the Log Analytics workspace, the ExpressRoute Traffic Collector itself doesn't experience any downtime. Loss of sampled flows during maintenance shouldn't affect network traffic analysis when sampled data is aggregated over a longer time frame.
ExpressRoute Traffic Collector deployment by default has availability zones enab
### How should I incorporate ExpressRoute Traffic Collector in my disaster recovery plan?
-You can associate a single ExpressRoute Direct circuit with multiple ExpressRoute Traffic Collectors deployed in different Azure region within a given geo-political region. It's recommended that you associate your ExpressRoute Direct circuit with multiple ExpressRoute Traffic Collectors as part of your disaster recovery and high availability plan.
+You can associate a single ExpressRoute Direct circuit with multiple ExpressRoute Traffic Collectors deployed in different Azure regions within a given geo-political region. It's recommended that you associate your ExpressRoute Direct circuit with multiple ExpressRoute Traffic Collectors as part of your disaster recovery and high availability plan.
## Privacy
expressroute How To Configure Traffic Collector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-configure-traffic-collector.md
Title: Configure Traffic Collector for ExpressRoute Direct (Preview)
+ Title: Configure Traffic Collector for ExpressRoute Direct
description: This article shows you how to create an ExpressRoute Traffic Collector resource and import logs into a Log Analytics workspace. Previously updated : 08/09/2023 Last updated : 10/09/2023 #Customer intent: As a network engineer, I want to configure ExpressRoute Traffic Collector to import flow logs into a Log Analytics workspace.
-# Configure Traffic Collector for ExpressRoute Direct (Preview)
+# Configure Traffic Collector for ExpressRoute Direct
This article helps you deploy an ExpressRoute Traffic Collector using the Azure portal. You learn how to add and remove an ExpressRoute Traffic Collector, associate it to an ExpressRoute Direct circuit and Log Analytics workspace. Once the ExpressRoute Traffic Collector is deployed, sampled flow logs get imported into a Log Analytics workspace. For more information, see [About ExpressRoute Traffic Collector](traffic-collector.md).
This article helps you deploy an ExpressRoute Traffic Collector using the Azure
- An ExpressRoute Direct circuit with Private or Microsoft peering configured. - A Log Analytics workspace (Create new or use existing workspace).
+- For ExpressRoute provider support, contact ErTCasks@microsoft.com.
## Limitations

- ExpressRoute Traffic Collector supports a maximum ExpressRoute Direct circuit size of 100 Gbps.
- You can associate up to 20 ExpressRoute Direct circuits with ExpressRoute Traffic Collector. The total circuit bandwidth can't exceed 100 Gbps.
- The ExpressRoute Direct circuit, Traffic Collector, and the Log Analytics workspace must be in the same geo-political region. Cross geo-political resource association isn't supported.
-- The ExpressRoute Direct circuit and Traffic Collector must be deployed in the same subscription. Cross subscription deployments aren't available.

> [!NOTE]
> - Log Analytics and ExpressRoute Traffic Collector can be deployed in a different subscription.
For more information, see [Identity and access management](../active-directory/f
1. On the **Select ExpressRoute circuit** tab, select **+ Add ExpressRoute Circuits**.
-1. On the **Add Circuits** page, select the checkbox next to the circuit you would like Traffic Collector to monitor and then select **Add**. Select **Next** to configure where logs gets forwarded to.
+1. On the **Add Circuits** page, select the checkbox next to the circuit you would like Traffic Collector to monitor and then select **Add**. Select **Next** to configure where logs get forwarded to.
:::image type="content" source="./media/how-to-configure-traffic-collector/select-circuits.png" alt-text="Screenshot of the select ExpressRoute circuits tab and add circuits page.":::
expressroute Traffic Collector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/traffic-collector.md
Title: Azure ExpressRoute Traffic Collector (Preview)
+ Title: Azure ExpressRoute Traffic Collector
description: Learn about ExpressRoute Traffic Collector and the different use cases where this feature is helpful. Previously updated : 08/21/2023 Last updated : 10/09/2023
-# Azure ExpressRoute Traffic Collector (Preview)
+# Azure ExpressRoute Traffic Collector
ExpressRoute Traffic Collector enables sampling of network flows sent over your ExpressRoute Direct circuits. Flow logs get sent to a [Log Analytics workspace](../azure-monitor/logs/log-analytics-overview.md) where you can create your own log queries for further analysis. You can also export the data to any visualization tool or SIEM (Security Information and Event Management) of your choice. Flow logs can be enabled for both private peering and Microsoft peering with ExpressRoute Traffic Collector.
Flow logs can help you look into various traffic insights. Some common use cases
## Flow log collection and sampling
-Flow logs are collected at an interval of every 1 minute. All packets collected for a given flow get aggregated and imported into a Log Analytics workspace for further analysis. During flow collection, not every packet is captured into its own flow record. ExpressRoute Traffic Collector uses a sampling rate of 1:4096, meaning 1 out of every 4096 packets gets captured. Therefore, sampling rate short flows (in total bytes) may not get collected. This sampling size doesn't affect network traffic analysis when sampled data is aggregated over a longer period of time. Flow collection time and sampling rate are fixed and can't be changed.
+Flow logs are collected at an interval of every 1 minute. All packets collected for a given flow get aggregated and imported into a Log Analytics workspace for further analysis. During flow collection, not every packet is captured into its own flow record. ExpressRoute Traffic Collector uses a sampling rate of 1:4096, meaning 1 out of every 4096 packets gets captured. Therefore, because of the sampling rate, short flows (in total bytes) might not get collected. This sampling size doesn't affect network traffic analysis when sampled data is aggregated over a longer period of time. Flow collection time and sampling rate are fixed and can't be changed.
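The arithmetic behind the fixed 1:4096 sampling rate can be sketched as follows. This is an illustrative calculation only, not part of the product; the helper names are invented for this example.

```python
# Illustrative sketch of the fixed 1:4096 sampling rate described above.
# Helper names are invented for this example; they aren't part of any Azure SDK.

SAMPLING_RATE = 4096  # 1 out of every 4096 packets is captured

def expected_samples(flow_packets: int) -> float:
    """Expected number of sampled packets for a flow of a given packet count."""
    return flow_packets / SAMPLING_RATE

def estimated_total_packets(sampled_packets: int) -> int:
    """Scale a sampled count back to an estimate of the true packet count."""
    return sampled_packets * SAMPLING_RATE

# A short flow of 1,000 packets yields fewer than one expected sample,
# which is why short flows might not get collected at all.
print(expected_samples(1_000))        # ~0.24
print(estimated_total_packets(10))    # 40960
```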
## Flow log schema
-| Column | Type | Description |
-| - | -- | |
-| ATCRegion | string | ExpressRoute Traffic Collector (ATC) deployment region. |
-| ATCResourceId | string | Azure resource ID of ExpressRoute Traffic Collector (ATC). |
-| BgpNextHop | string | Border Gateway Protocol (BGP) next hop as defined in the routing table. |
-| DestinationIp | string | Destination IP address. |
-| DestinationPort | int | TCP destination port. |
-| Dot1qCustomerVlanId | int | Dot1q Customer VlanId. |
-| Dot1qVlanId | int | Dot1q VlanId. |
-| DstAsn | int | Destination Autonomous System Number (ASN). |
-| DstMask | int | Mask of destination subnet. |
-| DstSubnet | string | Destination subnet of destination IP. |
-| ExRCircuitDirectPortId | string | Azure resource ID of Express Route Circuit's direct port. |
-| ExRCircuitId | string | Azure resource ID of Express Route Circuit. |
-| ExRCircuitServiceKey | string | Service key of Express Route Circuit. |
-| FlowRecordTime | datetime | Timestamp (UTC) when Express Route Circuit emitted this flow record. |
-| Flowsequence | long | Flow sequence of this flow. |
-| IcmpType | int | Protocol type as specified in IP header. |
-| IpClassOfService | int | IP Class of service as specified in IP header. |
-| IpProtocolIdentifier | int | Protocol type as specified in IP header. |
-| IpVerCode | int | IP version as defined in the IP header. |
-| MaxTtl | int | Maximum time to live (TTL) as defined in the IP header. |
-| MinTtl | int | Minimum time to live (TTL) as defined in the IP header. |
-| NextHop | string | Next hop as per forwarding table. |
-| NumberOfBytes | long | Total number of bytes of packets captured in this flow. |
-| NumberOfPackets | long | Total number of packets captured in this flow. |
-| OperationName | string | The specific ExpressRoute Traffic Collector operation that emitted this flow record. |
-| PeeringType | string | Express Route Circuit peering type. |
-| Protocol | int | Protocol type as specified in IP header. |
-| \_ResourceId | string | A unique identifier for the resource that the record is associated with |
-| SchemaVersion | string | Flow record schema version. |
-| SourceIp | string | Source IP address. |
-| SourcePort | int | TCP source port. |
-| SourceSystem | string | |
-| SrcAsn | int | Source Autonomous System Number (ASN). |
-| SrcMask | int | Mask of source subnet. |
-| SrcSubnet | string | Source subnet of source IP. |
-| \_SubscriptionId | string | A unique identifier for the subscription that the record is associated with |
-| TcpFlag | int | TCP flag as defined in the TCP header. |
-| TenantId | string | |
-| TimeGenerated | datetime | Timestamp (UTC) when the ExpressRoute Traffic Collector emitted this flow record. |
-| Type | string | The name of the table |
+| Column | Type | Description |
+|--|--|--|
+| ATCRegion | string | ExpressRoute Traffic Collector (ATC) deployment region. |
+| ATCResourceId | string | Azure resource ID of ExpressRoute Traffic Collector (ATC). |
+| BgpNextHop | string | Border Gateway Protocol (BGP) next hop as defined in the routing table. |
+| DestinationIp | string | Destination IP address. |
+| DestinationPort | int | TCP destination port. |
+| Dot1qCustomerVlanId | int | Dot1q Customer VlanId. |
+| Dot1qVlanId | int | Dot1q VlanId. |
+| DstAsn | int | Destination Autonomous System Number (ASN). |
+| DstMask | int | Mask of destination subnet. |
+| DstSubnet | string | Destination subnet of destination IP. |
+| ExRCircuitDirectPortId | string | Azure resource ID of Express Route Circuit's direct port. |
+| ExRCircuitId | string | Azure resource ID of Express Route Circuit. |
+| ExRCircuitServiceKey | string | Service key of Express Route Circuit. |
+| FlowRecordTime | datetime | Timestamp (UTC) when Express Route Circuit emitted this flow record. |
+| Flowsequence | long | Flow sequence of this flow. |
+| IcmpType | int | ICMP type as specified in the ICMP header. |
+| IpClassOfService | int | IP Class of service as specified in IP header. |
+| IpProtocolIdentifier | int | Protocol type as specified in IP header. |
+| IpVerCode | int | IP version as defined in the IP header. |
+| MaxTtl | int | Maximum time to live (TTL) as defined in the IP header. |
+| MinTtl | int | Minimum time to live (TTL) as defined in the IP header. |
+| NextHop | string | Next hop as per forwarding table. |
+| NumberOfBytes | long | Total number of bytes of packets captured in this flow. |
+| NumberOfPackets | long | Total number of packets captured in this flow. |
+| OperationName | string | The specific ExpressRoute Traffic Collector operation that emitted this flow record. |
+| PeeringType | string | Express Route Circuit peering type. |
+| Protocol | int | Protocol type as specified in IP header. |
+| \_ResourceId | string | A unique identifier for the resource that the record is associated with. |
+| SchemaVersion | string | Flow record schema version. |
+| SourceIp | string | Source IP address. |
+| SourcePort | int | TCP source port. |
+| SourceSystem | string | |
+| SrcAsn | int | Source Autonomous System Number (ASN). |
+| SrcMask | int | Mask of source subnet. |
+| SrcSubnet | string | Source subnet of source IP. |
+| \_SubscriptionId | string | A unique identifier for the subscription that the record is associated with. |
+| TcpFlag | int | TCP flag as defined in the TCP header. |
+| TenantId | string | |
+| TimeGenerated | datetime | Timestamp (UTC) when the ExpressRoute Traffic Collector emitted this flow record. |
+| Type | string | The name of the table. |
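As a rough illustration of how records with this schema might be consumed once they land in a Log Analytics workspace, the sketch below aggregates flow records by source IP. The field names come from the table above; the values and the aggregation itself are invented for this example.

```python
# Hypothetical flow records using column names from the schema above;
# the values are invented for illustration only.
records = [
    {"SourceIp": "10.0.0.4", "DestinationIp": "10.1.0.8",
     "DestinationPort": 443, "NumberOfBytes": 120_000, "PeeringType": "AzurePrivatePeering"},
    {"SourceIp": "10.0.0.4", "DestinationIp": "10.1.0.9",
     "DestinationPort": 443, "NumberOfBytes": 80_000, "PeeringType": "AzurePrivatePeering"},
    {"SourceIp": "10.0.0.7", "DestinationIp": "10.1.0.8",
     "DestinationPort": 22, "NumberOfBytes": 5_000, "PeeringType": "MicrosoftPeering"},
]

# Total bytes per source IP, the kind of rollup a log query might produce.
bytes_by_source: dict[str, int] = {}
for r in records:
    bytes_by_source[r["SourceIp"]] = bytes_by_source.get(r["SourceIp"], 0) + r["NumberOfBytes"]

print(bytes_by_source)  # {'10.0.0.4': 200000, '10.0.0.7': 5000}
```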
## Region availability ExpressRoute Traffic Collector is supported in the following regions:
-### North America
-- Canada East-- Canada Central-- Central US-- Central US EUAP-- North Central US -- South Central US -- West Central US-- East US-- East US 2-- West US -- West US 2 -- West US 3-
-### South America
-- Brazil South-- Brazil Southeast-
-### Europe
-- West Europe-- North Europe-- UK South-- UK West-- France Central-- France South-- Germany North-- Sweden Central-- Sweden South-- Switzerland North-- Switzerland West-- Norway East-- Norway West-
-### Asia
-- East Asia-- Central India-- South India-- Japan West-- Korea South-- UAE North-
-### Africa
-- South Africa North-- South Africa West-
-### Pacific
-- Australia Central-- Australia Central 2-- Australia East-- Australia Southeast
+| Geography | Regions |
+| | -- |
+| North America | <ul><li>Canada East</li><li>Canada Central</li><li>Central US</li><li>Central US EUAP</li><li>North Central US</li><li>South Central US</li><li>West Central US</li><li>East US</li><li>East US 2</li><li>West US</li><li>West US 2</li><li>West US 3</li></ul> |
+| South America | <ul><li>Brazil South</li><li>Brazil Southeast</li></ul> |
+| Europe | <ul><li>West Europe</li><li>North Europe</li><li>UK South</li><li>UK West</li><li>France Central</li><li>France South</li><li>Germany North</li><li>Sweden Central</li><li>Sweden South</li><li>Switzerland North</li><li>Switzerland West</li><li>Norway East</li><li>Norway West</li></ul> |
+| Asia | <ul><li>East Asia</li><li>Central India</li><li>South India</li><li>Japan West</li><li>Korea South</li><li>UAE North</li></ul> |
+| Africa | <ul><li>South Africa North</li><li>South Africa West</li></ul> |
+| Pacific | <ul><li>Australia Central</li><li>Australia Central 2</li><li>Australia East</li><li>Australia Southeast</li></ul> |
+
+## Pricing
+
+| Zone | Gateway per hour | Data processed per GB |
+| - | - | |
+| Zone 1 | $0.60/hour | $0.10/GB |
+| Zone 2 | $0.80/hour | $0.20/GB |
+| Zone 3 | $0.80/hour | $0.20/GB |
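As a quick sanity check on the rates above, a minimal cost estimate might look like the following. The rates are hard-coded from the table for illustration, and `monthly_cost` is an invented helper; consult the official Azure pricing page for current values.

```python
# Minimal sketch: estimating an ExpressRoute Traffic Collector bill from the
# zone rates in the table above. Rates are hard-coded for illustration;
# check the official pricing page for current values.
RATES = {  # zone -> (gateway per hour in USD, data processed per GB in USD)
    "Zone 1": (0.60, 0.10),
    "Zone 2": (0.80, 0.20),
    "Zone 3": (0.80, 0.20),
}

def monthly_cost(zone: str, hours: float, gb_processed: float) -> float:
    per_hour, per_gb = RATES[zone]
    return round(per_hour * hours + per_gb * gb_processed, 2)

# A collector running a full month (730 hours) in Zone 1, processing 500 GB:
print(monthly_cost("Zone 1", 730, 500))  # 488.0
```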
## Next steps
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md
Previously updated : 07/05/2023 Last updated : 10/10/2023
The following Azure Firewall preview features are available publicly for you to
## Feature flags
-As new features are released to preview, some of them will be behind a feature flag. To enable the functionality in your environment, you must enable the feature flag on your subscription. These features are applied at the subscription level for all firewalls (VNet firewalls and SecureHub firewalls).
+As new features are released to preview, some of them are behind a feature flag. To enable the functionality in your environment, you must enable the feature flag on your subscription. These features are applied at the subscription level for all firewalls (virtual network firewalls and SecureHub firewalls).
-This article will be updated to reflect the features that are currently in preview with instructions to enable them. When the features move to General Availability (GA), they're available to all customers without the need to enable a feature flag.
+This article is updated to reflect the features that are currently in preview with instructions to enable them. When the features move to General Availability (GA), they're available to all customers without the need to enable a feature flag.
## Preview features
For more information, see [Azure Firewall Explicit proxy (preview)](explicit-pro
### Resource Health (preview) With the Azure Firewall Resource Health check, you can now diagnose and get support for service problems that affect your Azure Firewall resource. Resource Health allows IT teams to receive proactive notifications on potential health degradations, and recommended mitigation actions per each health event type. The resource health is also available in a dedicated page in the Azure portal resource page.
-Starting in August 2023, this preview will be automatically enabled on all firewalls and no action will be required to enable this functionality.
+Starting in August 2023, this preview is automatically enabled on all firewalls and no action is required to enable this functionality.
For more information, see [Resource Health overview](../service-health/resource-health-overview.md). ### Top flows (preview) and Flow trace logs (preview)
You can configure Azure Firewall to auto-learn both registered and private range
### Embedded Firewall Workbooks (preview)
-Azure Firewall predefined workbooks are two clicks away and fully available from the **Monitoring** section in the Azure Firewall portal UI.
+Azure Firewall predefined workbooks are two selections away and fully available from the **Monitoring** section in the Azure Firewall portal UI.
For more information, see [Azure Firewall: New Monitoring and Logging Updates](https://techcommunity.microsoft.com/t5/azure-network-security-blog/azure-firewall-new-monitoring-and-logging-updates/ba-p/3897897#:~:text=Embedded%20Firewall%20Workbooks%20are%20now%20in%20public%20preview)
+### Parallel IP Group updates (preview)
+
+You can now update multiple IP Groups in parallel. This is useful for administrators who want to make configuration changes more quickly and at scale, especially when making those changes using a DevOps approach (ARM templates, CLI, and PowerShell).
+
+For more information, see [IP Groups in Azure Firewall](ip-groups.md#parallel-ip-group-updates-preview).
+ ## Next steps To learn more about Azure Firewall, see [What is Azure Firewall?](overview.md).
firewall Firewall Sentinel Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-sentinel-overview.md
+
+ Title: Azure Firewall with Microsoft Sentinel overview
+description: This article shows you how you can optimize security using the Azure Firewall solution for Microsoft Sentinel.
++++ Last updated : 10/09/2023++
+# Azure Firewall with Microsoft Sentinel overview
+
+You can now get both detection and prevention in the form of an easy-to-deploy Azure Firewall solution for Microsoft Sentinel.
+
+Security is a constant balance between proactive and reactive defenses. They're both equally important, and neither can be neglected. Effectively protecting your organization means constantly optimizing both prevention and detection.
+
+Combining prevention and detection allows you to ensure that you both prevent sophisticated threats when you can, while also maintaining an *assume breach mentality* to detect and quickly respond to cyber attacks.
++
+## Prerequisites
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
++
+## Key capabilities
+
+When you integrate Azure Firewall with Microsoft Sentinel, you enable the following capabilities:
+
+- Monitor and visualize Azure Firewall activities
+- Detect threats and apply AI-assisted investigation capabilities
+- Automate responses and correlation to other sources
+
+The entire experience is packaged as a solution in the Microsoft Sentinel marketplace, which means it can be deployed relatively easily.
+
+## Deploy and enable the Azure Firewall solution for Microsoft Sentinel
+
+You can quickly deploy the solution from the Content hub. From your Microsoft Sentinel workspace, select **Analytics** and then **More content at Content hub**. Search for and select **Azure Firewall** and then select **Install**.
+
+Once installed, select **Manage**, follow all the steps in the wizard, pass validation, and create the solution. With just a few selections, all content, including connectors, detections, workbooks, and playbooks, is deployed in your Microsoft Sentinel workspace.
+
+## Monitor and visualize Azure Firewall activities
+
+The Azure Firewall workbook allows you to visualize Azure Firewall events. With this workbook, you can:
+
+- Learn about your application and network rules
+- See statistics for firewall activities across URLs, ports, and addresses
+- Filter by firewall and resource group
+- Dynamically filter per category with easy-to-read data sets when investigating an issue in the logs
+
+The workbook provides a single dashboard for ongoing monitoring of your firewall activity. When it comes to threat detection, investigation, and response, the Azure Firewall solution also provides built-in detection and hunting capabilities.
+
+## Detect threats and use AI-assisted investigation capabilities
+
+The solution's detection rules provide Microsoft Sentinel with a powerful method for analyzing Azure Firewall signals to detect traffic representing malicious activity patterns traversing the network. This allows rapid response and remediation of threats.
+
+The attack stages an adversary pursues within the firewall solution are segmented based on the [MITRE ATT&CK](https://attack.mitre.org/) framework. The MITRE framework is a series of steps that trace stages of a cyber attack from the early reconnaissance stages to the exfiltration of data. The framework helps defenders understand and combat ransomware, security breaches, and advanced attacks.
+
+The solution includes detections for common scenarios an adversary might use as part of the attack, spanning from the discovery stage (gaining knowledge about the system and internal network) through the command-and-control (C2) stage (communicating with compromised systems to control them) to the exfiltration stage (adversary trying to steal data from the organization).
+
+| Detection rule | What does it do? | What does it indicate? |
+| | | |
+| Port scan | Identifies a source IP scanning multiple open ports on the Azure Firewall. | Malicious scanning of ports by an attacker, trying to reveal open ports in the organization that can be compromised for initial access. |
+| Port sweep | Identifies a source IP scanning the same open ports on the Azure Firewall across different IPs. | Malicious scanning of a port by an attacker trying to reveal IPs with specific vulnerable ports open in the organization. |
+| Abnormal deny rate for source IP | Identifies an abnormal deny rate for a specific source IP to a destination IP based on machine learning done during a configured period. | Potential exfiltration, initial access, or C2, where an attacker tries to exploit the same vulnerability on machines in the organization but Azure Firewall rules block it. |
+| Abnormal Port to protocol | Identifies communication for a well-known protocol over a nonstandard port based on machine learning done during an activity period. | Malicious communication (C2) or exfiltration by attackers trying to communicate over known ports (SSH, HTTP) without using the known protocol headers that match the port number. |
+| Multiple sources affected by the same TI destination | Identifies multiple machines that are trying to reach out to the same destination blocked by threat intelligence (TI) in the Azure Firewall. | An attack on the organization by the same attack group trying to exfiltrate data from the organization. |
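To make the port-scan row concrete, here's a toy version of the idea in plain Python: flag a source IP that touches many distinct destination ports in a window. The threshold, data shapes, and function name are invented; the actual analytics rule runs in Microsoft Sentinel and is not implemented like this.

```python
# Toy illustration of the "Port scan" detection idea above; not the actual
# Microsoft Sentinel rule. The threshold is an assumed value.
from collections import defaultdict

PORT_SCAN_THRESHOLD = 50  # distinct ports per source IP before flagging

def find_port_scanners(events):
    """events: iterable of (source_ip, destination_port) pairs from firewall logs."""
    ports_by_source = defaultdict(set)
    for src, port in events:
        ports_by_source[src].add(port)
    return {src for src, ports in ports_by_source.items()
            if len(ports) >= PORT_SCAN_THRESHOLD}

# A host probing ports 1-100 trips the rule; a normal HTTPS client doesn't.
events = [("203.0.113.5", p) for p in range(1, 101)] + [("10.0.0.4", 443)]
print(find_port_scanners(events))  # {'203.0.113.5'}
```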
+
+### Hunting queries
+
+Hunting queries are a tool for the security researcher to look for threats in the network of an organization, either after an incident has occurred or proactively to discover new or unknown attacks. To do this, security researchers look at several indicators of compromise (IOCs). The built-in Microsoft Sentinel hunting queries in the Azure Firewall solution give security researchers the tools they need to find high-impact activities from the firewall logs. Several examples include:
+
+| Hunting query | What does it do? | What does it indicate? |
+| | | |
+| First time a source IP connects to destination port | Helps to identify a common indicator of attack (IOA) when a new host or IP tries to communicate with a destination using a specific port. | Based on learning the regular traffic during a specified period. |
+| First time source IP connects to a destination | Helps to identify an IOA when malicious communication is done for the first time from machines that never accessed the destination before. | Based on learning the regular traffic during a specified period. |
+| Source IP abnormally connects to multiple destinations | Identifies a source IP that abnormally connects to multiple destinations. | Indicates initial access attempts by attackers trying to jump between different machines in the organization, exploiting lateral movement path or the same vulnerability on different machines to find vulnerable machines to access. |
+| Uncommon port for the organization | Identifies abnormal ports used in the organization network. | An attacker can bypass monitored ports and send data through uncommon ports. This allows the attackers to evade detection from routine detection systems. |
+| Uncommon port connection to destination IP | Identifies abnormal ports used by machines to connect to a destination IP. | An attacker can bypass monitored ports and send data through uncommon ports. This can also indicate an exfiltration attack from machines in the organization by using a port that has never been used on the machine for communication. |
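The "first time" hunting queries boil down to comparing current activity against a baseline learned over an earlier period. A minimal sketch of that comparison, with invented data (the real queries run in KQL against firewall logs):

```python
# Minimal sketch of the "first time source IP connects to a destination"
# hunting idea above; illustrative only, not the actual hunting query.
baseline = {  # (source, destination) pairs learned during a baseline period
    ("10.0.0.4", "10.1.0.8"),
    ("10.0.0.4", "10.1.0.9"),
}

def first_time_connections(current_pairs, baseline_pairs):
    """Return connection pairs never seen during the baseline period."""
    return set(current_pairs) - set(baseline_pairs)

current = [("10.0.0.4", "10.1.0.8"), ("10.0.0.4", "192.0.2.50")]
print(first_time_connections(current, baseline))  # {('10.0.0.4', '192.0.2.50')}
```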
+
+## Automate response and correlation to other sources
+
+Lastly, the Azure Firewall solution also includes Microsoft Sentinel playbooks, which enable you to automate response to threats. For example, say the firewall logs an event where a particular device on the network tries to communicate with the Internet via the HTTP protocol over a nonstandard TCP port. This action triggers a detection in Microsoft Sentinel. The playbook automates a notification to the security operations team via Microsoft Teams, and the security analysts can block the source IP address of the device with a single selection. This prevents it from accessing the Internet until an investigation can be completed. Playbooks allow this process to be much more efficient and streamlined.
+
+## Real world example
+
+Let's look at what the fully integrated solution looks like in a real-world scenario.
+
+### The attack and initial prevention by Azure Firewall
+
+A sales representative in the company accidentally opens a phishing email and a PDF attachment containing malware. The malware immediately tries to connect to a malicious website, but Azure Firewall blocks it. The firewall detects the domain using the Microsoft threat intelligence feed it consumes.
+
+### The response
+
+The connection attempt triggers a detection in Microsoft Sentinel and starts the playbook automation process to notify the security operations team via a Teams channel. There, the analyst can block the computer from communicating with the Internet. The security operations team then notifies the IT department, which removes the malware from the sales representative's computer. However, taking a proactive approach and looking deeper, the security researcher applies the Azure Firewall hunting queries and runs the **Source IP abnormally connects to multiple destinations** query. The query reveals that the malware on the infected computer tried to communicate with several other devices on the broader network and tried to access several of them. One of those access attempts succeeded, because there was no proper network segmentation to prevent lateral movement in the network, and the new device had a known vulnerability that the malware exploited to infect it.
+
+### The result
+
+The security researcher removed the malware from the new device, completed mitigating the attack, and discovered a network weakness in the process.
+
+## Next step
++
+> [!div class="nextstepaction"]
+> [Learn more about Microsoft Sentinel](../sentinel/overview.md)
+>
+> [Microsoft security](https://www.microsoft.com/en-us/security/business)
firewall Ip Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/ip-groups.md
Previously updated : 01/10/2023 Last updated : 10/10/2023
You can now select **IP Group** as a **Source type** or **Destination type** for
![IP Groups in Firewall](media/ip-groups/fw-ipgroup.png)
+## Parallel IP Group updates (preview)
+
+You can now update multiple IP Groups in parallel. This is particularly useful for administrators who want to make configuration changes more quickly and at scale, especially when making those changes using a DevOps approach (ARM templates, CLI, and Azure PowerShell).
+
+With this support, you can now:
+
+- Update 20 IP Groups at a time
+- Update the firewall and firewall policy during IP Group updates
+- Use the same IP Group in parent and child policy
+- Update multiple IP Groups referenced by firewall policy or classic firewall simultaneously
+- Receive new and improved error messages
+ - Fail and succeed states
+
+ For example, if there is an error with one IP Group update out of 20 parallel updates, the other updates proceed, and the errored IP Group fails. In addition, if the IP Group update fails, and the firewall is still healthy, the firewall remains in a *Succeeded* state. To check if the IP Group update has failed or succeeded, you can view the status on the IP Group resource.
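The failure-isolation behavior described above, where one errored update doesn't stop the other parallel updates and each IP Group ends in its own Succeeded or Failed state, can be sketched with plain Python concurrency. This is a conceptual illustration under invented names, not the Azure SDK.

```python
# Conceptual sketch of parallel IP Group updates with per-group states;
# plain Python, not the Azure SDK. update_ip_group is a stand-in.
from concurrent.futures import ThreadPoolExecutor

def update_ip_group(name: str) -> str:
    if name == "bad-group":  # simulate one update failing validation
        raise ValueError(f"invalid address range in {name}")
    return "Succeeded"

groups = ["group-1", "group-2", "bad-group", "group-4"]
states = {}
with ThreadPoolExecutor(max_workers=20) as pool:  # up to 20 updates at a time
    futures = {pool.submit(update_ip_group, g): g for g in groups}
    for fut, name in futures.items():
        try:
            states[name] = fut.result()
        except ValueError:
            states[name] = "Failed"  # this group fails; the others proceed

print(states)
# {'group-1': 'Succeeded', 'group-2': 'Succeeded', 'bad-group': 'Failed', 'group-4': 'Succeeded'}
```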
+ ## Region availability IP Groups are available in all public cloud regions.
frontdoor Front Door Url Rewrite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-url-rewrite.md
Preserve unmatched path allows you to append the remaining path after the source
| Preserve unmatched path | Source pattern | Destination | Incoming request | Content served from origin | |--|--|--|--|--| | Yes | / | /foo/ | contoso.com/sub/1.jpg | /foo/sub/1.jpg |
-| Yes | /sub/ | /foo/ | contoso.com/sub/image/1.jpg | /foo/image/1.jpg` |
+| Yes | /sub/ | /foo/ | contoso.com/sub/image/1.jpg | /foo/image/1.jpg |
| No | /sub/ | /foo/2.jpg | contoso.com/sub/image/1.jpg | /foo/2.jpg | ::: zone-end
governance Compliance States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/compliance-states.md
The compliance percentage is determined by dividing **Compliant**, **Exempt**, a
**Exempt**, **Conflicting**, and **Error** states. ```text
-overall compliance % = (compliant + exempt + unknown) / (compliant + exempt + unknown + non-compliant + conflicting + error)
+overall compliance % = (compliant + exempt + unknown + protected) / (compliant + exempt + unknown + non-compliant + conflicting + error + protected)
``` In the image shown, there are 20 distinct resources that are applicable and only one is **Non-compliant**.
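The formula can be checked against that example: with 20 applicable resources and one non-compliant, overall compliance is 95%. A quick sketch (the helper function is invented for illustration):

```python
# Quick check of the overall compliance formula above; the helper is invented.
def overall_compliance_pct(compliant=0, exempt=0, unknown=0, protected=0,
                           non_compliant=0, conflicting=0, error=0) -> float:
    numerator = compliant + exempt + unknown + protected
    denominator = numerator + non_compliant + conflicting + error
    return round(100 * numerator / denominator, 2)

# 20 applicable resources, one non-compliant -> 95%
print(overall_compliance_pct(compliant=19, non_compliant=1))  # 95.0
```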
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
If not, then a deployment to enable is executed.
"equals": "Microsoft.Sql/servers/databases" }, "then": {
- "effect": "DeployIfNotExists",
+ "effect": "deployIfNotExists",
"details": { "type": "Microsoft.Sql/servers/databases/transparentDataEncryption", "name": "current",
hdinsight-aks Hdinsight On Aks Autoscale Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/hdinsight-on-aks-autoscale-clusters.md
Schedule-based scaling changes the number of nodes in your cluster based on a sc
The following table describes the cluster types that are compatible with the Auto scale feature, and what's available or planned.
-|Workload |Load Based |Schedule Base|
+|Workload |Load Based |Schedule Based|
|-|-|-| |Flink |Planned |Yes| |Trino |Planned |Yes**|
hdinsight-aks Prerequisites Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/prerequisites-subscription.md
Last updated 08/29/2023
If you're using an Azure subscription with HDInsight on AKS for the first time, the following features might need to be enabled.
+## Tenant registration
+
+If you're onboarding a new tenant to HDInsight on AKS, you need to grant consent to the HDInsight on AKS first-party app so that it can access the API. This app tries to provision the application used to authenticate cluster users and groups.
+
+> [!NOTE]
+> The resource owner can run the command to provision the first-party service principal on the given tenant.
+
+**Commands**:
+
+```azurecli
+az ad sp create --id d3d1a4fe-edb2-4b09-bc39-e41d342323d6
+```
+
+```azurepowershell
+New-AzureADServicePrincipal -AppId d3d1a4fe-edb2-4b09-bc39-e41d342323d6
+```
+ ## Enable features 1. Sign in to [Azure portal](https://portal.azure.com).
hdinsight-aks Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/whats-new.md
Title: What's new in HDInsight on AKS? (Preview)
description: An introduction to new concepts in HDInsight on AKS that aren't in HDInsight. Previously updated : 08/31/2023 Last updated : 10/10/2023 # What's new in HDInsight on AKS? (Preview)
The following table list shows the features of HDInsight on AKS that are current
| Logging and Monitoring | Log aggregation in Azure [log analytics](./how-to-azure-monitor-integration.md), for server logs, Cluster and Service metrics via [Managed Prometheus and Grafana](./monitor-with-prometheus-grafana.md), Support server metrics in [Azure monitor](/azure/azure-monitor/overview), Service Status page for monitoring the [Service health](./service-health.md) |
| Auto Scale | Load based [Auto Scale](hdinsight-on-aks-autoscale-clusters.md#create-a-cluster-with-load-based-auto-scale), and Schedule based [Auto Scale](hdinsight-on-aks-autoscale-clusters.md#create-a-cluster-with-schedule-based-auto-scale) |
| Customize and Configure Clusters | Support for [script actions](./manage-script-actions.md) during cluster creation, Support for [library management](./spark/library-management.md), [Service configuration](./service-configuration.md) settings after cluster creation |
-| Trino | Support for [Trino catalogs](./trino/trino-add-catalogs.md), [Trino CLI Support](./trino/trino-ui-command-line-interface.md), [DBeaver](./trino/trino-ui-dbeaver.md) support for query submission, Add or remove plugins and [connectors](./trino/trino-connectors.md), Support for [logging query](./trino/trino-query-logging.md) events, Support for [scan query statistics](./trino/trino-scan-stats.md) for any [Connector](./trino/trino-connectors.md) in Trino dashboard, Support for Trino dashboard to monitor queries, [Query Caching](./trino/trino-caching.md), Integration with PowerBI, Integration with [Apache Superset](./trino/trino-superset.md), Redash, Support for multiple [connectors](./trino/trino-connectors.md) |
+| Trino | Support for [Trino catalogs](./trino/trino-add-catalogs.md), [Trino CLI Support](./trino/trino-ui-command-line-interface.md), [DBeaver](./trino/trino-ui-dbeaver.md) support for query submission, Add or remove [plugins](./trino/trino-custom-plugins.md) and [connectors](./trino/trino-connectors.md), Support for [logging query](./trino/trino-query-logging.md) events, Support for [scan query statistics](./trino/trino-scan-stats.md) for any [Connector](./trino/trino-connectors.md) in Trino dashboard, Support for Trino [dashboard](./trino/trino-ui.md) to monitor queries, [Query Caching](./trino/trino-caching.md), Integration with PowerBI, Integration with [Apache Superset](./trino/trino-superset.md), Redash, Support for multiple [connectors](./trino/trino-connectors.md) |
| Flink | Support for Flink native web UI, Flink support with HMS for [DStream](./flink/use-hive-metastore-datastream.md), Submit jobs to the cluster using [REST API and Azure Portal](./flink/flink-job-management.md), Run programs packaged as JAR files via the [Flink CLI](./flink/use-flink-cli-to-submit-jobs.md), Support for persistent Savepoints, Support for updating the configuration options when the job is running, Connecting to multiple Azure |
| Spark | [Jupyter Notebook](./spark/submit-manage-jobs.md), Support for [Delta lake](./spark/azure-hdinsight-spark-on-aks-delta-lake.md) 2.0, Zeppelin Support, Support ATS, Support for Yarn History server interface, Job submission using SSH, Job submission using SDK and [Machine Learning Notebook](./spark/azure-hdinsight-spark-on-aks-delta-lake.md) |
The following table list shows the features of HDInsight on AKS that are current
| Ranger Support for Spark SQL | Q2 2024 |
| Ranger ACLs on Storage Layer | Q2 2024 |
| Support for One lake as primary container | Q2 2024 |
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
description: Archived release notes for Azure HDInsight. Get development tips an
Previously updated : 07/28/2023 Last updated : 7/28/2023 # Archived release notes
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
description: Latest release notes for Azure HDInsight. Get development tips and
Previously updated : 09/13/2023 Last updated : 10/10/2023 # Azure HDInsight release notes
Subscribe to the [HDInsight Release Notes](./subscribe-to-hdi-release-notes-rep
To subscribe, click the "watch" button in the banner and watch out for [HDInsight Releases](https://github.com/Azure/HDInsight/releases).
-## Release date: September 7th, 2023
+## Release date: September 7, 2023
This release applies to HDInsight 4.x and 5.x. HDInsight release will be available to all regions over several days.

This release is applicable for image number **2308221128**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
For workload specific versions, see
> [!IMPORTANT]
> This release addresses the following CVEs released by [MSRC](https://msrc.microsoft.com/update-guide/vulnerability) on September 12, 2023. The action is to update to the latest image **2308221128**. Customers are advised to plan accordingly.
-|CVE | Severity| CVE Title|
-|-|-|-|
-|[CVE-2023-38156](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-38156)| Important | Azure HDInsight Apache Ambari Elevation of Privilege Vulnerability |
+|CVE | Severity| CVE Title| Remark |
+|-|-|-|-|
+|[CVE-2023-38156](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-38156)| Important | Azure HDInsight Apache Ambari Elevation of Privilege Vulnerability |Included on 2308221128 image |
+|[CVE-2023-36419](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-36419) | Important | Azure HDInsight Apache Oozie Workflow Scheduler Elevation of Privilege Vulnerability | Apply [Script action](https://hdiconfigactions2.blob.core.windows.net/msrc-script/script_action.sh) on your clusters |
## ![Icon showing coming soon.](./media/hdinsight-release-notes/clock.svg) Coming soon
-* The max length of cluster name will be changed to 45 from 59 characters, to improve the security posture of clusters. This change will be implemented by September 30th, 2023.
+* The max length of cluster name will be changed to 45 from 59 characters, to improve the security posture of clusters. This change will be implemented by September 30, 2023.
* Cluster permissions for secure storage
  * Customers can specify (during cluster creation) whether a secure channel should be used for HDInsight cluster nodes to contact the storage account.
* In-line quota update.
- * Request quotas increase directly from the My Quota page, which will be a direct API call, which is faster. If the API call fails, then customers need to create a new support request for quota increase.
+ * Request quota increases directly from the My Quota page, which will be a direct API call, which is faster. If the API call fails, then customers need to create a new support request for the quota increase.
* HDInsight Cluster Creation with Custom VNets.
  * To improve the overall security posture of the HDInsight clusters, HDInsight clusters using custom VNETs must ensure that the user has the `Microsoft Network/virtualNetworks/subnets/join/action` permission to perform create operations. Customers would need to plan accordingly as this change would be a mandatory check to avoid cluster creation failures before September 30, 2023.
* Basic and Standard A-series VMs Retirement.
  * On August 31, 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before August 31, 2024.
* Non-ESP ABFS clusters [Cluster Permissions for World Readable]
- * Plan to introduce a change in non-ESP ABFS clusters, which restricts non-Hadoop group users from executing Hadoop commands for storage operations. This change to improve cluster security posture. Customers need to plan for the updates before 30 September, 2023.ΓÇ»
+ * Plan to introduce a change in non-ESP ABFS clusters, which restricts non-Hadoop group users from executing Hadoop commands for storage operations. This change improves cluster security posture. Customers need to plan for the updates before September 30, 2023.
If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
key-vault Soft Delete Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/soft-delete-overview.md
You can't reuse the name of a key vault that has been soft-deleted until the ret
### Purge protection
-Purge protection is an optional Key Vault behavior and is **not enabled by default**. Purge protection can only be enabled once soft-delete is enabled. It can be turned on, for example, via [CLI](./key-vault-recovery.md?tabs=azure-cli) or [PowerShell](./key-vault-recovery.md?tabs=azure-powershell). Purge protection is recommended when using keys for encryption to prevent data loss. Most Azure services that integrate with Azure Key Vault, such as Storage, require purge protection to prevent data loss.
+Purge protection is an optional Key Vault behavior and is **not enabled by default**. Purge protection can only be enabled once soft-delete is enabled. Purge protection is recommended when using keys for encryption to prevent data loss. Most Azure services that integrate with Azure Key Vault, such as Storage, require purge protection to prevent data loss.
When purge protection is on, a vault or an object in the deleted state can't be purged until the retention period has passed. Soft-deleted vaults and objects can still be recovered, ensuring that the retention policy will be followed. The default retention period is 90 days, but it's possible to set the retention policy interval to a value from 7 to 90 days through the Azure portal. Once the retention policy interval is set and saved it can't be changed for that vault.
+Purge protection can be turned on via [CLI](./key-vault-recovery.md?tabs=azure-cli), [PowerShell](./key-vault-recovery.md?tabs=azure-powershell), or the [Azure portal](./key-vault-recovery.md?tabs=azure-portal).
+
### Permitted purge

Permanently deleting (purging) a key vault is possible via a POST operation on the proxy resource and requires special privileges. Generally, only the subscription owner will be able to purge a key vault. The POST operation triggers the immediate and irrecoverable deletion of that vault.
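The soft-delete, retention, and purge-protection rules above can be pictured as a small state machine. The following Python sketch is an illustration only, not the Key Vault API; the class and method names are invented for the example.

```python
# Minimal sketch of the soft-delete / purge-protection rules described above.
# Illustration only -- not the Key Vault API; names are invented for the example.

class Vault:
    def __init__(self, purge_protection=False, retention_days=90):
        if not 7 <= retention_days <= 90:
            raise ValueError("retention period must be 7-90 days")
        self.purge_protection = purge_protection  # can't be turned off once on
        self.retention_days = retention_days
        self.deleted = False
        self.purged = False

    def delete(self):
        # Soft delete: the vault is recoverable and its name stays reserved.
        self.deleted = True

    def recover(self):
        if not self.deleted:
            raise RuntimeError("vault is not deleted")
        self.deleted = False

    def purge(self):
        if not self.deleted:
            raise RuntimeError("only a soft-deleted vault can be purged")
        if self.purge_protection:
            raise RuntimeError("purge blocked while purge protection is on")
        self.purged = True  # immediate and irrecoverable

v = Vault(purge_protection=True)
v.delete()
try:
    v.purge()
except RuntimeError as err:
    print(err)  # purge blocked while purge protection is on
```

The key design point the sketch captures: with purge protection on, deletion is always soft during the retention period, so an accidental (or malicious) delete can be reversed.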
load-balancer Load Balancer Basic Upgrade Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-basic-upgrade-guidance.md
Suggested order of operations for manually upgrading a Basic Load Balancer in co
1. For Virtual Machine Scale Set backends, remove the Load Balancer association in the Networking settings and [update the instances](../virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-policy.md#performing-manual-upgrades)
1. Delete the Basic Load Balancer

> [!NOTE]
- > For Virtual Machine Scale Set backends, you will need to remove the load balancer association in the Networking settings and update the instances prior to deletion of the Basic Load Balancer. Once removed, you will also need to [**update the instances**](../virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-policy.md#performing-manual-upgrades)
+ > For Virtual Machine Scale Set backends, you will need to remove the load balancer association in the Networking settings. Once removed, you will also need to [**update the instances**](../virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-policy.md#performing-manual-upgrades)
1. [Upgrade all Public IPs](../virtual-network/ip-services/public-ip-upgrade-portal.md) previously associated with the Basic Load Balancer and backend Virtual Machines to Standard SKU. For Virtual Machine Scale Sets, remove any instance-level public IP configuration, update the instances, then add a new one with Standard SKU and update the instances again.
1. Recreate the frontend configurations from the Basic Load Balancer on the newly created Standard Load Balancer, using the same public or private IP addresses as on the Basic Load Balancer
1. Update the load balancing and NAT rules to use the appropriate frontend configurations
logic-apps Deploy Single Tenant Logic Apps Private Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/deploy-single-tenant-logic-apps-private-storage-account.md
ms.suite: integration
Previously updated : 10/18/2022 Last updated : 10/09/2023 # As a developer, I want to deploy Standard logic apps to Azure storage accounts that use private endpoints.
As the logic app isn't running when these errors occur, you can't use the Kudu c
`C:\psping {storage-account-host-name}.table.core.windows.net:443`
- `C:\psping {storage-account-host-name}.file.core.windows.net:445`
+ `C:\psping {storage-account-host-name}.file.core.windows.net:443`
1. If the queries resolve from the VM, continue with the following steps:
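If psping isn't available on the VM, a similar TCP reachability check can be sketched in Python using only the standard library. This is an illustration, not a replacement for the documented troubleshooting steps; the commented host name is a placeholder for your storage account endpoint.

```python
# Sketch of a TCP reachability check similar to psping.
# Returns True if a TCP connection to host:port succeeds within the timeout.
import socket

def can_connect(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder host -- substitute your storage account endpoint):
# can_connect("mystorageaccount.table.core.windows.net", 443)
```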
logic-apps Logic Apps Enterprise Integration X12 Message Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-message-settings.md
Previously updated : 08/15/2023 Last updated : 10/09/2023 # Reference for X12 message settings in agreements for Azure Logic Apps
You also need to disable EDI validation when you use these document version numb
To specify these document version numbers and message types, follow these steps:
+> [!NOTE]
+>
+> Each message with 837_P, 837_I, or 837_D type requires a separate agreement.
+ 1. In your HIPAA schema, replace the current message type with the variant message type for the document version number that you want to use. For example, suppose you want to use document version number `005010X222A1` with the `837` message type. In your schema, replace each `"X12_00501_837"` value with the `"X12_00501_837_P"` value instead.
To specify these document version numbers and message types, follow these steps:
]
```
- In this `schemaReferences` section, add another entry that has these values:
-
- * `"messageId": "837_P"`
- * `"schemaVersion": "00501"`
- * `"schemaName": "X12_00501_837_P"`
-
- When you're done, your `schemaReferences` section looks like this:
+ Edit your `schemaReferences` section to look like the following example:
```json
"schemaReferences": [
  {
    "messageId": "837",
    "schemaVersion": "00501",
- "schemaName": "X12_00501_837"
- },
- {
- "messageId": "837_P",
- "schemaVersion": "00501",
    "schemaName": "X12_00501_837_P"
  }
]
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-instance.md
A compute instance:
* Has a job queue.
* Runs jobs securely in a virtual network environment, without requiring enterprises to open up the SSH port. The job executes in a containerized environment and packages your model dependencies in a Docker container.
-* Can run multiple small jobs in parallel. One job per core can run in parallel while the rest of the jobs are queued.
+* Can run multiple small jobs in parallel. One job per vCPU can run in parallel while the rest of the jobs are queued.
* Supports single-node multi-GPU [distributed training](how-to-train-distributed-gpu.md) jobs

You can use a compute instance as a local inferencing deployment target for test/debug scenarios.
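The "one job per vCPU" behavior can be pictured with a tiny scheduling sketch. This only illustrates the queueing idea, not the Azure Machine Learning scheduler; job names and the vCPU count are made up.

```python
# Tiny sketch of "one job per vCPU": with N vCPUs, at most N jobs run
# concurrently and the rest wait in a queue. Illustration of the idea only.
from collections import deque

def schedule(jobs, vcpus):
    queue = deque(jobs)
    running = [queue.popleft() for _ in range(min(vcpus, len(queue)))]
    return running, list(queue)

running, queued = schedule(["job1", "job2", "job3", "job4", "job5"], vcpus=4)
print(running)  # ['job1', 'job2', 'job3', 'job4']
print(queued)   # ['job5']
```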
migrate Migrate Support Matrix Physical Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical-migration.md
The table summarizes support for physical servers, AWS VMs, and GCP VMs that you
## Replication appliance requirements
-If you set up the replication appliance manually, then make sure that it complies with the requirements summarized in the table. When you set up the Azure Migrate replication appliance as an VMware VM using the OVA template provided in the Azure Migrate hub, the appliance is set up with Windows Server 2022, and complies with the support requirements.
+If you set up the replication appliance manually, then make sure that it complies with the requirements summarized in the table. When you set up the Azure Migrate replication appliance as a VMware VM using the OVA template provided in the Azure Migrate hub, the appliance is set up with Windows Server 2016, and complies with the support requirements.
- Learn about [replication appliance requirements](migrate-replication-appliance.md#appliance-requirements). - Install MySQL on the appliance. Learn about [installation options](migrate-replication-appliance.md#mysql-installation).
openshift Howto Restrict Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-restrict-egress.md
Previously updated : 09/07/2023 Last updated : 10/10/2023 # Control egress traffic for your Azure Red Hat OpenShift (ARO) cluster
You can opt out of telemetry, but make sure you understand this feature before d
- **`infogw.api.openshift.com`**: Used for Red Hat telemetry.
- **`https://cloud.redhat.com/api/ingress`**: Used in the cluster for the insights operator that integrates with Red Hat Insights (required in 4.10 and earlier only).
- **`https://console.redhat.com/api/ingress`**: Used in the cluster for the insights operator that integrates with Red Hat Insights.
- In OpenShift Container Platform, customers can opt out of reporting health and usage information. However, connected clusters allow Red Hat to react more quickly to problems, better support our customers, and better understand how product upgrades affect clusters. Check details here: https://docs.openshift.com/container-platform/4.12/support/remote_health_monitoring/opting-out-of-remote-health-reporting.html.
private-link Configure Asg Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/configure-asg-private-endpoint.md
# Configure an application security group (ASG) with a private endpoint
-Azure Private endpoints support application security groups for network security. Private endpoints can be associated with an existing ASG in your current infrastructure along side virtual machines and other network resources.
+Azure Private endpoints support application security groups for network security. Private endpoints can be associated with an existing ASG in your current infrastructure alongside virtual machines and other network resources.
## Prerequisites
search Cognitive Search Tutorial Debug Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-debug-sessions.md
Previously updated : 07/20/2023 Last updated : 10/09/2023 # Tutorial: Debug a skillset using Debug Sessions
All requests require an api-key on every request sent to your service. Having a
## Create data source, skillset, index, and indexer
-In this section, import a Postman collection containing a "buggy" workflow that you fix in this tutorial.
+In this section, you will import a Postman collection containing a "buggy" workflow that you will fix in this tutorial.
1. Start Postman and import the [DebugSessions.postman_collection.json](https://github.com/Azure-Samples/azure-search-postman-samples/tree/master/Debug-sessions) collection. If you're unfamiliar with Postman, see [this quickstart](search-get-started-rest.md).
search Hybrid Search Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/hybrid-search-ranking.md
Last updated 09/27/2023
> [!IMPORTANT]
> Hybrid search uses the [vector features](vector-search-overview.md) currently in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Reciprocal Rank Fusion (RRF) is an algorithm that evaluates the search scores from multiple, previously ranked results to produce a unified result set. In Azure Cognitive Search, RRF is used whenever there are two or more queries that execute in parallel. Each query produces a ranked result set, and RRF is used to merge and homogenize the rankings into a single result set, returned in the query response. Examples of scenarios where RRF is required include [*hybrid search*](hybrid-search-overview.md) and multiple vector queries executing concurrently.
+Reciprocal Rank Fusion (RRF) is an algorithm that evaluates the search scores from multiple, previously ranked results to produce a unified result set. In Azure Cognitive Search, RRF is used whenever there are two or more queries that execute in parallel. Each query produces a ranked result set, and RRF is used to merge and homogenize the rankings into a single result set, returned in the query response. Examples of scenarios where RRF is always used include [*hybrid search*](hybrid-search-overview.md) and multiple vector queries executing concurrently.
RRF is based on the concept of *reciprocal rank*, which is the inverse of the rank of the first relevant document in a list of search results. The goal of the technique is to take into account the position of the items in the original rankings, and give higher importance to items that are ranked higher in multiple lists. This can help improve the overall quality and reliability of the final ranking, making it more useful for the task of fusing multiple ordered search results.
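The fusion step described above can be sketched in a few lines of Python. This is only an illustration of the RRF formula (each document scores the sum of 1 / (k + rank) across the input rankings), using the commonly cited constant k = 60; the constant and tie-breaking details inside Cognitive Search may differ, and the document IDs are hypothetical.

```python
# Minimal sketch of Reciprocal Rank Fusion (RRF).
# Each input is a ranked list of document IDs, best first.
# score(d) = sum over lists of 1 / (k + rank(d)), with rank starting at 1.

def rrf(ranked_lists, k=60):
    scores = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical inputs: one ranking from a text query, one from a vector query.
text_results = ["doc1", "doc2", "doc3"]
vector_results = ["doc1", "doc4", "doc2"]

fused = rrf([text_results, vector_results])
print([doc for doc, score in fused])
# ['doc1', 'doc2', 'doc4', 'doc3'] -- doc1 ranks first in both lists, so it wins
```

Note how doc2, which appears in both rankings, outranks doc4, which appears in only one: documents ranked high in multiple lists accumulate more reciprocal-rank credit.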
search Search How To Create Search Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-create-search-index.md
Use this checklist to assist the design decisions for your search index.
+ Filterable fields are returned in arbitrary order, so consider making them sortable as well.
-1. Determine whether to use the default analyzer (`"analyzer": null`) or a different analyzer. [Analyzers](search-analyzers.md) are used to tokenize text fields during indexing and query execution. If strings are descriptive and semantically rich, or if you have translated strings, consider overriding the default with a [language analyzer](index-add-language-analyzers.md).
+1. Determine whether to use the default analyzer (`"analyzer": null`) or a different analyzer. [Analyzers](search-analyzers.md) are used to tokenize text fields during indexing and query execution.
+
+ For multi-lingual strings, consider a [language analyzer](index-add-language-analyzers.md).
+
+ For hyphenated strings or special characters, consider [specialized analyzers](index-add-custom-analyzers.md#built-in-analyzers). One example is [keyword](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/KeywordAnalyzer.html) that treats the entire contents of a field as a single token. This behavior is useful for data like zip codes, IDs, and some product names. For more information, see [Partial term search and patterns with special characters](search-query-partial-matching.md).
> [!NOTE]
> Full text search is conducted over terms that are tokenized during indexing. If your queries fail to return the results you expect, [test for tokenization](/rest/api/searchservice/test-analyzer) to verify the string actually exists. You can try different analyzers on strings to see how tokens are produced for various analyzers.
search Search Lucene Query Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-lucene-query-architecture.md
Previously updated : 09/27/2023 Last updated : 10/09/2023 # Full text search in Azure Cognitive Search
-This article is for developers who need a deeper understanding of how full text search works in Azure Cognitive Search. For text queries, Azure Cognitive Search seamlessly delivers expected results in most scenarios, but occasionally you might get a result that seems "off" somehow. In these situations, having a background in the four stages of Lucene query execution (query parsing, lexical analysis, document matching, scoring) can help you identify specific changes to query parameters or index configuration that produce the desired outcome.
+Full text search is an approach in information retrieval that matches on plain text content stored in an index. For example, given a query string "hotels in San Diego on the beach", the search engine looks for content containing those terms. To make scans more efficient, query strings undergo lexical analysis: lower-casing all terms, removing stop words like "the", and reducing terms to primitive root forms. When matching terms are found, the search engine retrieves documents, ranks them in order of relevance, and returns the top results.
+
+Query execution can be complex. This article is for developers who need a deeper understanding of how full text search works in Azure Cognitive Search. For text queries, Azure Cognitive Search seamlessly delivers expected results in most scenarios, but occasionally you might get a result that seems "off" somehow. In these situations, having a background in the four stages of Lucene query execution (query parsing, lexical analysis, document matching, scoring) can help you identify specific changes to query parameters or index configuration that produce the desired outcome.
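As an illustration of the lexical analysis step described above, here is a deliberately naive Python analyzer (lower-casing, stop-word removal, and crude suffix stripping). It is not the Standard Lucene analyzer, which is far more sophisticated; the stop-word list and the stemming rule are simplifications for the example.

```python
# Toy sketch of lexical analysis: lowercase, drop stop words, and crudely
# strip a plural "s". Real Lucene analyzers are far more sophisticated.
import re

STOP_WORDS = {"the", "on", "in", "a", "an", "of", "and"}

def analyze(text):
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    tokens = [t for t in tokens if t not in STOP_WORDS]
    # naive stemming: strip a trailing "s" from longer tokens
    return [t[:-1] if t.endswith("s") and len(t) > 3 else t for t in tokens]

print(analyze("Hotels in San Diego on the beach"))
# ['hotel', 'san', 'diego', 'beach']
```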
> [!NOTE]
> Azure Cognitive Search uses [Apache Lucene](https://lucene.apache.org/) for full text search, but Lucene integration is not exhaustive. We selectively expose and extend Lucene functionality to enable the scenarios important to Azure Cognitive Search.
search Search Query Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-create.md
Previously updated : 09/25/2023 Last updated : 10/09/2023 # How to create a full-text query in Azure Cognitive Search
If you're building a query for [full text search](search-lucene-query-architectu
## Example of a full text query request
-In Azure Cognitive Search, a query is a read-only request against the docs collection of a single search index.
+In Azure Cognitive Search, a query is a read-only request against the docs collection of a single search index, with parameters that both inform query execution and shape the response coming back.
A full text query is specified in a `search` parameter and consists of terms, quote-enclosed phrases, and operators. Other parameters add more definition to the request. For example, `searchFields` scopes query execution to specific fields, `select` specifies which fields are returned in results, and `count` returns the number of matches found in the index.
POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/
"searchMode": "all",
"searchFields": "HotelName, Description, Address/City, Address/StateProvince, Tags",
"select": "HotelName, Description, Address/City, Address/StateProvince, Tags",
+ "top": "10",
"count": "true"
}
```
+**Key points:**
+
++ **`search`** provides the match criteria, usually whole terms or phrases, with or without operators. Any field that is attributed as "searchable" in the index schema is a candidate for this parameter.
+
++ **`queryType`** sets the parser: `simple`, `full`. The [default simple query parser](search-query-simple-examples.md) is optimal for full text search. The [full Lucene query parser](search-query-lucene-examples.md) is for advanced query constructs like regular expressions, proximity search, fuzzy and wildcard search. This parameter can also be set to `semantic` for [semantic ranking](semantic-search-overview.md) for advanced semantic modeling on the query response.
+
++ **`searchMode`** specifies whether matches are based on "all" criteria (favors precision) or "any" criteria (favors recall) in the expression. The default is "any". If you anticipate heavy use of Boolean operators, which is more likely in indexes that contain large text blocks (a content field or long descriptions), be sure to test queries with the **`searchMode=Any|All`** parameter to evaluate the impact of that setting on boolean search.
+
++ **`searchFields`** constrains query execution to specific searchable fields. During development, it's helpful to use the same field list for select and search. Otherwise a match might be based on field values that you can't see in the results, creating uncertainty as to why the document was returned.
+
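The precision-versus-recall effect of `searchMode` can be sketched over a few hypothetical pre-tokenized documents. This is an illustration of the "any"/"all" semantics only, not the service's actual matching engine.

```python
# Sketch of searchMode semantics: "any" matches a document containing at
# least one query term; "all" requires every term. Documents here are
# hypothetical, pre-tokenized term sets.
docs = {
    "doc1": {"restaurant", "view", "pool"},
    "doc2": {"restaurant", "bar"},
    "doc3": {"view", "garden"},
}

def match(query_terms, search_mode="any"):
    pred = all if search_mode == "all" else any
    return [d for d, terms in docs.items()
            if pred(t in terms for t in query_terms)]

print(match(["restaurant", "view"], "any"))  # ['doc1', 'doc2', 'doc3']
print(match(["restaurant", "view"], "all"))  # ['doc1']
```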
+Parameters used to shape the response:
+
++ **`select`** specifies which fields to return in the response. Only fields marked as "retrievable" in the index can be used in a select statement.
+
++ **`top`** returns the specified number of best-matching documents. In this example, only 10 hits are returned. You can use top and skip (not shown) to page the results.
+
++ **`count`** tells you how many documents in the entire index match overall, which can be more than what are returned.
+
++ **`orderby`** is used if you want to sort results by a value, such as a rating or location. Otherwise, the default is to use the relevance score to rank results. A field must be attributed as "sortable" to be a candidate for this parameter.
+

## Choose a client

For early development and proof-of-concept testing, start with Azure portal or the Postman app for making REST API calls. These approaches are interactive, useful for targeted testing, and help you assess the effects of different properties without having to write any code.
search Search Query Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-overview.md
Previously updated : 09/25/2023 Last updated : 10/09/2023 # Querying in Azure Cognitive Search
-Azure Cognitive Search offers a rich query language to support a broad range of scenarios, from free text search, to highly specified query patterns. This article describes query requests and the kinds of queries you can create.
-
-In Cognitive Search, a query is a full specification of a round-trip **`search`** operation, with parameters that both inform query execution and shape the response coming back. To illustrate, the following query example calls the [Search Documents (REST API)](/rest/api/searchservice/search-documents). It's a parameterized, free text query with a boolean operator, targeting the [hotels-sample-index](search-get-started-portal.md) documents collection. It also selects which fields are returned in results.
-
-```http
-POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30
-{
- "queryType": "simple",
- "searchMode": "all",
- "search": "restaurant +view",
- "searchFields": "HotelName, Description, Address/City, Address/StateProvince, Tags",
- "select": "HotelName, Description, Address/City, Address/StateProvince, Tags",
- "top": "10",
- "count": "true",
- "orderby": "Rating desc"
-}
-```
-
-Parameters used during query execution include:
-
-+ **`queryType`** sets the parser: `simple`, `full`. The [default simple query parser](search-query-simple-examples.md) is optimal for full text search. The [full Lucene query parser](search-query-lucene-examples.md) is for advanced query constructs like regular expressions, proximity search, fuzzy and wildcard search. This parameter can also be set to `semantic` for [semantic ranking](semantic-search-overview.md) for advanced semantic modeling on the query response.
-
-+ **`searchMode`** specifies whether matches are based on "all" criteria (favors precision) or "any" criteria (favors recall) in the expression. The default is "any".
-
-+ **`search`** provides the match criteria, usually whole terms or phrases, with or without operators. Any field that is attributed as "searchable" in the index schema is a candidate for this parameter.
-
-+ **`searchFields`** constrains query execution to specific searchable fields. During development, it's helpful to use the same field list for select and search. Otherwise a match might be based on field values that you can't see in the results, creating uncertainty as to why the document was returned.
-
-Parameters used to shape the response:
-
-+ **`select`** specifies which fields to return in the response. Only fields marked as "retrievable" in the index can be used in a select statement.
-
-+ **`top`** returns the specified number of best-matching documents. In this example, only 10 hits are returned. You can use top and skip (not shown) to page the results.
-
-+ **`count`** tells you how many documents in the entire index match overall, which can be more than what are returned.
-
-+ **`orderby`** is used if you want to sort results by a value, such as a rating or location. Otherwise, the default is to use the relevance score to rank results. A field must be attributed as "sortable" to be a candidate for this parameter.
-
-The above list is representative but not exhaustive. For the full list of parameters on a query request, see [Search Documents (REST API)](/rest/api/searchservice/search-documents).
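As a sketch of how the request above could be assembled from client code, the following Python snippet builds the same URL and JSON body. The service name here is a placeholder, not a real endpoint, and the request is only constructed, not sent:

```python
import json

# Placeholder values -- substitute your own search service name.
SERVICE_NAME = "my-search-service"   # hypothetical
INDEX_NAME = "hotels-sample-index"
API_VERSION = "2020-06-30"

def build_search_request(service_name: str, index_name: str, api_version: str):
    """Assemble the URL and JSON body for the Search Documents REST call shown above."""
    url = (
        f"https://{service_name}.search.windows.net"
        f"/indexes/{index_name}/docs/search?api-version={api_version}"
    )
    body = {
        "queryType": "simple",
        "searchMode": "all",
        "search": "restaurant +view",
        "searchFields": "HotelName, Description, Address/City, Address/StateProvince, Tags",
        "select": "HotelName, Description, Address/City, Address/StateProvince, Tags",
        "top": "10",
        "count": "true",
        "orderby": "Rating desc",
    }
    return url, json.dumps(body)

url, payload = build_search_request(SERVICE_NAME, INDEX_NAME, API_VERSION)
```

In a real client you would POST `payload` to `url` with an `api-key` header; that step is omitted here.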
+Azure Cognitive Search supports query constructs for a broad range of scenarios, from free-form text search, to highly specified query patterns, to vector search. All queries execute over a search index that stores searchable content.
<a name="types-of-queries"></a> ## Types of queries
-With a few notable exceptions, a full text query request iterates over inverted indexes that are structured for fast scans, where a match can be found in potentially any field, within any number of search documents. In Cognitive Search, the primary methodology for finding matches is either full text search or filters, but you can also implement other well-known search experiences like autocomplete, or geo-location search. The rest of this article summarizes queries in Cognitive Search and provides links to more information and examples.
-
-## Full text search
-
-Full text search accepts terms or phrases passed in a **`search`** parameter in all "searchable" fields in your index. Optional boolean operators in the query string can specify inclusion or exclusion criteria. Both the simple parser and full parser support full text search.
-
-In Cognitive Search, full text search is built on the Apache Lucene query engine. Query strings in full text search undergo lexical analysis to make scans more efficient. Analysis includes lower-casing all terms, removing stop words like "the" and reducing terms to primitive root forms. The default analyzer is Standard Lucene.
-
-When matching terms are found, the query engine reconstitutes a search document containing the match using the document key or ID to assemble field values, ranks the documents in order of relevance, and returns the top 50 (by default) in the response or a different number if you specified **`top`**.
-
-If you're implementing full text search, understanding how your content is tokenized will help you debug any query anomalies. Queries over hyphenated strings or special characters could necessitate using an analyzer other than the default standard Lucene to ensure the index contains the right tokens. You can override the default with [language analyzers](index-add-language-analyzers.md#language-analyzer-list) or [specialized analyzers](index-add-custom-analyzers.md#built-in-analyzers) that modify lexical analysis. One example is [keyword](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/KeywordAnalyzer.html) that treats the entire contents of a field as a single token. This is useful for data like zip codes, IDs, and some product names. For more information, see [Partial term search and patterns with special characters](search-query-partial-matching.md).
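To see why analyzer choice matters, here's an illustrative sketch (not the Lucene implementation) contrasting standard-style analysis, which lowercases and splits on punctuation, with keyword-style analysis, which keeps the whole field value as one token:

```python
import re

def standard_style_tokens(text: str):
    """Rough approximation of standard analysis: lowercase, split on non-alphanumerics."""
    return [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]

def keyword_style_tokens(text: str):
    """Keyword-style analysis: the entire field value becomes a single token."""
    return [text]

# A hyphenated product code splits into fragments under standard-style analysis...
assert standard_style_tokens("AB-123-XY") == ["ab", "123", "xy"]
# ...but stays intact as one searchable token under keyword-style analysis.
assert keyword_style_tokens("AB-123-XY") == ["AB-123-XY"]
```

A query for the exact string `AB-123-XY` only matches the index built with keyword-style tokens, which is why IDs and zip codes often need a non-default analyzer.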
+| Query form | Parameter | Searchable content | Description |
+|---|---|---|---|
+| [Full text search](search-lucene-query-architecture.md) | `search` | Inverted indexes of tokenized terms. | Full text queries iterate over inverted indexes that are structured for fast scans, where a match can be found in potentially any field, within any number of search documents. Text is analyzed and tokenized for full text search.|
+| [Vector search](vector-search-overview.md) | `vectors` | Vector indexes of generated embeddings. | Vector queries iterate over vector fields in a search index. |
+| [Hybrid search](hybrid-search-overview.md) | `search`, `vectors` | All of the above, in a single search index. | Combines text search and vector search in a single query request. Text search works on plain text content in "searchable" and "filterable" fields. Vector search works on content in vector fields. |
+| Others | `filters`, `facets`, `search=''&queryType=full` | Plain text and alphanumeric content.| Raw content, extracted verbatim from source documents, supporting filters and pattern matching queries like geo-spatial search, fuzzy search, and fielded search. |
-> [!TIP]
-> If you anticipate heavy use of Boolean operators, which is more likely in indexes that contain large text blocks (a content field or long descriptions), be sure to test queries with the **`searchMode=Any|All`** parameter to evaluate the impact of that setting on boolean search.
+This article focuses on queries that work on plain text and alphanumeric content, extracted intact from the original source and used for filters and other specialized query forms.
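A filter query of this kind can be sketched as a request body that matches all documents and then narrows them with an OData filter expression. The field names below are from the hotels sample and are illustrative only; the request is built but not sent:

```python
import json

# Illustrative filter query: match everything, then filter with OData syntax.
body = {
    "search": "*",                                          # match all documents...
    "filter": "Rating gt 4 and Address/City eq 'Seattle'",  # ...then narrow the set
    "queryType": "simple",
    "select": "HotelName, Rating",
    "count": "true",
}
payload = json.dumps(body)
```

Only fields attributed as "filterable" in the index schema can appear in the `filter` expression.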
## Autocomplete and suggested queries
-[Autocomplete or suggested results](search-add-autocomplete-suggestions.md) are alternatives to **`search`** that fire successive query requests based on partial string inputs (after each character) in a search-as-you-type experience. You can use **`autocomplete`** and **`suggestions`** parameter together or separately, as described in [this tutorial](tutorial-csharp-type-ahead-and-suggestions.md), but you can't use them with **`search`**. Both completed terms and suggested queries are derived from index contents. The engine never returns a string or suggestion that is nonexistent in your index. For more information, see [Autocomplete (REST API)](/rest/api/searchservice/autocomplete) and [Suggestions (REST API)](/rest/api/searchservice/suggestions).
+[Autocomplete or suggested results](search-add-autocomplete-suggestions.md) are alternatives to **`search`** that fire successive query requests based on partial string inputs (after each character) in a search-as-you-type experience. You can use the **`autocomplete`** and **`suggestions`** parameters together or separately, as described in [this walkthrough](tutorial-csharp-type-ahead-and-suggestions.md), but you can't use them with **`search`**. Both completed terms and suggested queries are derived from index contents. The engine never returns a string or suggestion that doesn't exist in your index. For more information, see [Autocomplete (REST API)](/rest/api/searchservice/autocomplete) and [Suggestions (REST API)](/rest/api/searchservice/suggestions).
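An autocomplete call can be sketched as a GET request with the partial input and a suggester name. The service name and the suggester name `sg` are hypothetical placeholders (a suggester must be defined in the index); the URL is only assembled here, not requested:

```python
from urllib.parse import urlencode

# Hypothetical service, index, and suggester names.
params = {
    "api-version": "2020-06-30",
    "search": "rest",             # partial input typed so far, e.g. toward "restaurant"
    "suggesterName": "sg",        # suggester defined in the index schema
    "autocompleteMode": "oneTerm",
}
url = (
    "https://my-search-service.search.windows.net"
    "/indexes/hotels-sample-index/docs/autocomplete?" + urlencode(params)
)
```

Each keystroke in a search-as-you-type experience re-issues this request with a longer `search` value.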
## Filter search
search Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md
Last updated 09/27/2023
# Vector search in Azure Cognitive Search > [!IMPORTANT]
-> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme).
+> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, [**2023-07-01-Preview REST APIs**](/rest/api/searchservice/index-preview), and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme).
+
+Vector search is an approach in information retrieval that uses numeric representations of content for search scenarios. Because the content is numeric rather than plain text, the search engine matches on vectors that are the most similar to the query, with no requirement for matching on exact terms.
This article is a high-level introduction to vector support in Azure Cognitive Search. It also explains integration with other Azure services and covers [terminology and concepts](#vector-search-concepts) related to vector search development.
We recommend this article for background, but if you'd rather get started, follo
> + [Load vector data](search-what-is-data-import.md) into an index using push or pull methodologies.
> + [Query vector data](vector-search-how-to-query.md) using the Azure portal, preview REST APIs, or beta SDK packages.
-You could also begin with the [vector quickstart](search-get-started-vector.md) or the [code samples on GitHub](https://github.com/Azure/cognitive-search-vector-pr).
+You could also begin with the [vector quickstart](search-get-started-vector.md) or the [code samples on GitHub](https://github.com/Azure/cognitive-search-vector-pr).
-Support for vector search is in public preview and available through the [**2023-07-01-Preview REST APIs**](/rest/api/searchservice/index-preview), Azure portal, and the more recent beta packages of the Azure SDKs for [.NET](https://www.nuget.org/packages/Azure.Search.Documents/11.5.0-beta.4), [Python](https://pypi.org/project/azure-search-documents/11.4.0b8/), and [JavaScript](https://www.npmjs.com/package/@azure/search-documents/v/12.0.0-beta.2).
+Vector support is in the Azure SDKs for [.NET](https://www.nuget.org/packages/Azure.Search.Documents/11.5.0-beta.4), [Python](https://pypi.org/project/azure-search-documents/11.4.0b8/), and [JavaScript](https://www.npmjs.com/package/@azure/search-documents/v/12.0.0-beta.2).
## What's vector search in Cognitive Search?
Popular vector similarity metrics include the following, which are all supported
### Approximate Nearest Neighbors
-Approximate Nearest Neighbor search (ANN) is a class of algorithms for finding matches in vector space. This class of algorithms employs different data structures or data partitioning methods to significantly reduce the search space to accelerate query processing. The specific approach depends on the algorithm. While this approach sacrifices some accuracy, these algorithms offer scalable and faster retrieval of approximate nearest neighbors, which makes them ideal for balancing accuracy and efficiency in modern information retrieval applications. You may adjust the parameters of your algorithm to fine-tune the recall, latency, memory, and disk footprint requirements of your search application.
+Approximate Nearest Neighbor search (ANN) is a class of algorithms for finding matches in vector space. This class of algorithms employs different data structures or data partitioning methods to significantly reduce the search space to accelerate query processing. The specific approach depends on the algorithm. While this approach sacrifices some accuracy, these algorithms offer scalable and faster retrieval of approximate nearest neighbors, which makes them ideal for balancing accuracy and efficiency in modern information retrieval applications. You can adjust the parameters of your algorithm to fine-tune the recall, latency, memory, and disk footprint requirements of your search application.
Azure Cognitive Search uses Hierarchical Navigable Small Worlds (HNSW), which is a leading ANN algorithm optimized for high-recall, low-latency applications where data distribution is unknown or can change frequently.
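HNSW itself is beyond a short sketch, but the exhaustive k-nearest-neighbor search that ANN algorithms approximate fits in a few lines. ANN methods trade a little recall to avoid scoring every vector the way this brute-force version does (toy 3-dimensional vectors stand in for real embeddings, which have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: dot product divided by the product of vector magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def exact_knn(query, vectors, k=1):
    """Exhaustive (exact) nearest-neighbor search: score every vector, keep the top k."""
    scored = sorted(
        vectors.items(),
        key=lambda kv: cosine_similarity(query, kv[1]),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

# Toy "embeddings" keyed by document ID.
vectors = {
    "doc1": [1.0, 0.0, 0.0],
    "doc2": [0.9, 0.1, 0.0],
    "doc3": [0.0, 1.0, 0.0],
}
assert exact_knn([1.0, 0.05, 0.0], vectors, k=2) == ["doc1", "doc2"]
```

Because this scans the whole collection per query, its cost grows linearly with index size; ANN structures like HNSW visit only a small neighborhood of candidates instead.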
static-web-apps Publish Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-azure-resource-manager.md
One of the parameters in the ARM template is `repositoryToken`, which allows the
1. Select **Generate New Token**.
-1. Provide a name for this token in the _Note_ field, for example *myfirstswadeployment*.
+1. Provide a name for this token in the _Name_ field, for example *myfirstswadeployment*.
+
+1. Select an _Expiration_ for the token; the default is 30 days.
1. Specify the following *scopes*: **repo, workflow, write:packages**
storage Storage Files Quick Create Use Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-linux.md
Previously updated : 10/21/2022 Last updated : 10/10/2023 #Customer intent: As an IT admin new to Azure Files, I want to try out Azure file share using NFS and Linux so I can determine whether I want to subscribe to the service.
Now that you've created an NFS share, to use it you have to mount it on your Lin
1. Select **File shares** from the storage account pane and select the NFS file share you created.
-1. You should see **Connect to this NFS share from Linux** along with sample commands to use NFS on your Linux distribution and a provided mounting script.
+1. You should see **Connect to this NFS share from Linux** along with sample commands to use NFS on your Linux distribution and a mounting script that contains the required mount options. For other recommended mount options, see [Mount NFS Azure file share on Linux](storage-files-how-to-mount-nfs-shares.md#mount-options).
> [!IMPORTANT] > The provided mounting script will mount the NFS share only until the Linux machine is rebooted. To automatically mount the share every time the machine reboots, see [Mount an NFS share using /etc/fstab](storage-files-how-to-mount-nfs-shares.md#mount-an-nfs-share-using-etcfstab).
storage Storage Files Quick Create Use Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-windows.md
Next, create an SMB Azure file share.
:::image type="content" source="media/storage-files-quick-create-use-windows/create-file-share.png" alt-text="Screenshot showing how to create a new file share."::: 1. Name the new file share *qsfileshare* and leave **Transaction optimized** selected for **Tier**.
-1. Select the **Backup** tab. By default, backup is enabled when you create an Azure file share using the Azure portal. If you want to disable backup for the file share, uncheck the **Enable backup** checkbox. If you want backup enabled, you can either leave the defaults or create a new Recovery Services Vault. To create a new backup policy, select **Create a new policy**.
+1. Select the **Backup** tab. By default, [backup is enabled](../../backup/backup-azure-files.md) when you create an Azure file share using the Azure portal. If you want to disable backup for the file share, uncheck the **Enable backup** checkbox. If you want backup enabled, you can either leave the defaults or create a new Recovery Services Vault in the same region and subscription as the storage account. To create a new backup policy, select **Create a new policy**.
:::image type="content" source="media/storage-files-quick-create-use-windows/create-file-share-backup.png" alt-text="Screenshot showing how to enable or disable file share backup." border="true":::
storage Storage How To Use Files Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-portal.md
To create an Azure file share:
1. On the menu at the top of the **File shares** page, select **+ File share**. The **New file share** page drops down. 1. In **Name**, type *myshare*. Leave **Transaction optimized** selected for **Tier**.
-1. Select the **Backup** tab. If you want to enable backup for this file share, leave the defaults selected. If you don't want to enable backup, uncheck the **Enable backup** checkbox.
+1. Select the **Backup** tab. By default, [backup is enabled](../../backup/backup-azure-files.md) when you create an Azure file share using the Azure portal. If you want to disable backup for the file share, uncheck the **Enable backup** checkbox. If you want backup enabled, you can either leave the defaults or create a new Recovery Services Vault in the same region and subscription as the storage account. To create a new backup policy, select **Create a new policy**.
1. Select **Review + create** and then **Create** to create the Azure file share. File share names must be all lower-case letters, numbers, and single hyphens, and must begin and end with a lower-case letter or number. The name can't contain two consecutive hyphens. For details about naming file shares and files, see [Naming and Referencing Shares, Directories, Files, and Metadata](/rest/api/storageservices/Naming-and-Referencing-Shares--Directories--Files--and-Metadata).
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Yes. This option is currently available via PowerShell only. The Virtual WAN por
The recommended Virtual WAN hub address space is /23. Virtual WAN hub assigns subnets to various gateways (ExpressRoute, site-to-site VPN, point-to-site VPN, Azure Firewall, Virtual hub Router). For scenarios where NVAs are deployed inside a virtual hub, a /28 is typically carved out for the NVA instances. However, if the user were to provision multiple NVAs, a /27 subnet may be assigned. Therefore, keeping future architecture in mind, while Virtual WAN hubs are deployed with a minimum size of /24, the recommended hub address space for the user to input at creation time is /23.
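The arithmetic behind the /23 recommendation can be checked with Python's `ipaddress` module. The hub prefix below is a hypothetical example address space:

```python
import ipaddress

hub = ipaddress.ip_network("10.1.0.0/23")   # hypothetical hub address space

# A /23 contains two /24s, so it doubles the /24 minimum hub size.
assert len(list(hub.subnets(new_prefix=24))) == 2

# Carving /28 subnets (typical for NVA instances): a /23 yields 2**(28-23) = 32 slots.
nva_slots = list(hub.subnets(new_prefix=28))
assert len(nva_slots) == 32

# A /27 (used when provisioning multiple NVAs) is exactly two /28s.
wide_nva = ipaddress.ip_network("10.1.0.0/27")
assert len(list(wide_nva.subnets(new_prefix=28))) == 2
```

The headroom between /24 and /23 is what leaves room for gateway subnets plus NVA subnets as the hub grows.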
-### Can you resize or change the address prefixes of a spoke virtual network connected to the Virtual WAN hub?
-
-No. This is currently not possible. To change the address prefixes of a spoke virtual network, remove the connection between the spoke virtual network and the Virtual WAN hub, modify the address spaces of the spoke virtual network, and then re-create the connection between the spoke virtual network and the Virtual WAN hub. Also, connecting 2 virtual networks with overlapping address spaces to the virtual hub is currently not supported.
- ### Is there support for IPv6 in Virtual WAN? IPv6 isn't supported in the Virtual WAN hub and its gateways. If you have a VNet that has IPv4 and IPv6 support and you would like to connect the VNet to Virtual WAN, this scenario isn't currently supported.