Updates from: 09/23/2023 01:13:48
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Administration Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/administration-concepts.md
# Management concepts for user accounts, passwords, and administration in Microsoft Entra Domain Services
-When you create and run a Microsoft Entra Domain Services (AD DS) managed domain, there are some differences in behavior compared to a traditional on-premises AD DS environment. You use the same administrative tools in Microsoft Entra DS as a self-managed domain, but you can't directly access the domain controllers (DC). There's also some differences in behavior for password policies and password hashes depending on the source of the user account creation.
+When you create and run a Microsoft Entra Domain Services managed domain, there are some differences in behavior compared to a traditional on-premises AD DS environment. You use the same administrative tools in Domain Services as in a self-managed domain, but you can't directly access the domain controllers (DCs). There are also some differences in behavior for password policies and password hashes depending on the source of the user account creation.
This conceptual article details how to administer a managed domain and the different behavior of user accounts depending on the way they're created.
User accounts can be created in a managed domain in multiple ways. Most user acc
## Password policy
-Microsoft Entra DS includes a default password policy that defines settings for things like account lockout, maximum password age, and password complexity. Settings like account lockout policy apply to all users in a managed domain, regardless of how the user was created as outlined in the previous section. A few settings, like minimum password length and password complexity, only apply to users created directly in a managed domain.
+Domain Services includes a default password policy that defines settings for things like account lockout, maximum password age, and password complexity. Settings like account lockout policy apply to all users in a managed domain, regardless of how the user was created as outlined in the previous section. A few settings, like minimum password length and password complexity, only apply to users created directly in a managed domain.
You can create your own custom password policies to override the default policy in a managed domain. These custom policies can then be applied to specific groups of users as needed.
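The way a custom policy overrides the default for specific groups can be sketched as follows — a minimal model assuming an AD-style fine-grained policy scheme where the matching policy with the lowest precedence value wins. The policy names and settings are illustrative, not the actual Domain Services defaults:

```python
# Toy model of password policy resolution. Assumption: like AD fine-grained
# password policies, the custom policy with the lowest precedence value that
# targets one of the user's groups applies; otherwise the default policy does.

DEFAULT_POLICY = {"name": "Default", "min_length": 8, "lockout_threshold": 5}

def effective_policy(user_groups, custom_policies):
    """Return the policy that applies to a user, falling back to the default."""
    matches = [p for p in custom_policies if p["applies_to"] & set(user_groups)]
    if not matches:
        return DEFAULT_POLICY
    return min(matches, key=lambda p: p["precedence"])

# Hypothetical custom policy scoped to the administrators group.
admin_policy = {"name": "AdminPolicy", "precedence": 10,
                "applies_to": {"AAD DC Administrators"},
                "min_length": 14, "lockout_threshold": 3}

print(effective_policy(["AAD DC Administrators"], [admin_policy])["name"])  # AdminPolicy
print(effective_policy(["Sales"], [admin_policy])["name"])                  # Default
```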
For more information on the differences in how password policies are applied dep
## Password hashes
-To authenticate users on the managed domain, Microsoft Entra DS needs password hashes in a format that's suitable for NT LAN Manager (NTLM) and Kerberos authentication. Microsoft Entra ID doesn't generate or store password hashes in the format that's required for NTLM or Kerberos authentication until you enable Microsoft Entra DS for your tenant. For security reasons, Microsoft Entra ID also doesn't store any password credentials in clear-text form. Therefore, Microsoft Entra ID can't automatically generate these NTLM or Kerberos password hashes based on users' existing credentials.
+To authenticate users on the managed domain, Domain Services needs password hashes in a format that's suitable for NT LAN Manager (NTLM) and Kerberos authentication. Microsoft Entra ID doesn't generate or store password hashes in the format that's required for NTLM or Kerberos authentication until you enable Domain Services for your tenant. For security reasons, Microsoft Entra ID also doesn't store any password credentials in clear-text form. Therefore, Microsoft Entra ID can't automatically generate these NTLM or Kerberos password hashes based on users' existing credentials.
-For cloud-only user accounts, users must change their passwords before they can use the managed domain. This password change process causes the password hashes for Kerberos and NTLM authentication to be generated and stored in Microsoft Entra ID. The account isn't synchronized from Microsoft Entra ID to Microsoft Entra DS until the password is changed.
+For cloud-only user accounts, users must change their passwords before they can use the managed domain. This password change process causes the password hashes for Kerberos and NTLM authentication to be generated and stored in Microsoft Entra ID. The account isn't synchronized from Microsoft Entra ID to Domain Services until the password is changed.
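The ordering described above — legacy hashes exist only after a password change — can be sketched with a toy model. The hash formats here are illustrative stand-ins (not the real NTLM or Kerberos derivations), and the class is hypothetical:

```python
import hashlib

# Toy model: the directory keeps only a non-reversible hash of the password,
# so NTLM/Kerberos-compatible hashes can only be produced while the clear-text
# password is in hand, i.e. during a password change. Illustrative only.

class CloudAccount:
    def __init__(self, password):
        # Only a one-way hash is stored; the clear text is discarded.
        self._auth_hash = hashlib.sha256(password.encode()).hexdigest()
        self.legacy_hashes = None  # nothing usable by the managed domain yet

    def change_password(self, new_password):
        self._auth_hash = hashlib.sha256(new_password.encode()).hexdigest()
        # The clear text is available during the change, so the legacy
        # hashes can now be generated and stored for the managed domain.
        self.legacy_hashes = {
            "ntlm": hashlib.sha256(b"ntlm:" + new_password.encode()).hexdigest(),
            "kerberos": hashlib.sha256(b"krb:" + new_password.encode()).hexdigest(),
        }

    def can_use_managed_domain(self):
        return self.legacy_hashes is not None

user = CloudAccount("initial-password")
print(user.can_use_managed_domain())   # False until the password is changed
user.change_password("new-password")
print(user.can_use_managed_domain())   # True
```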
For users synchronized from an on-premises AD DS environment using Microsoft Entra Connect, [enable synchronization of password hashes][hybrid-phs].

> [!IMPORTANT]
-> Microsoft Entra Connect only synchronizes legacy password hashes when you enable Microsoft Entra DS for your Microsoft Entra tenant. Legacy password hashes aren't used if you only use Microsoft Entra Connect to synchronize an on-premises AD DS environment with Microsoft Entra ID.
+> Microsoft Entra Connect only synchronizes legacy password hashes when you enable Domain Services for your Microsoft Entra tenant. Legacy password hashes aren't used if you only use Microsoft Entra Connect to synchronize an on-premises AD DS environment with Microsoft Entra ID.
>
-> If your legacy applications don't use NTLM authentication or LDAP simple binds, we recommend that you disable NTLM password hash synchronization for Microsoft Entra DS. For more information, see [Disable weak cipher suites and NTLM credential hash synchronization][secure-domain].
+> If your legacy applications don't use NTLM authentication or LDAP simple binds, we recommend that you disable NTLM password hash synchronization for Domain Services. For more information, see [Disable weak cipher suites and NTLM credential hash synchronization][secure-domain].
-Once appropriately configured, the usable password hashes are stored in the managed domain. If you delete the managed domain, any password hashes stored at that point are also deleted. Synchronized credential information in Microsoft Entra ID can't be reused if you later create another managed domain - you must reconfigure the password hash synchronization to store the password hashes again. Previously domain-joined VMs or users won't be able to immediately authenticate - Microsoft Entra ID needs to generate and store the password hashes in the new managed domain. For more information, see [Password hash sync process for Microsoft Entra DS and Microsoft Entra Connect][azure-ad-password-sync].
+Once appropriately configured, the usable password hashes are stored in the managed domain. If you delete the managed domain, any password hashes stored at that point are also deleted. Synchronized credential information in Microsoft Entra ID can't be reused if you later create another managed domain - you must reconfigure the password hash synchronization to store the password hashes again. Previously domain-joined VMs or users won't be able to immediately authenticate - Microsoft Entra ID needs to generate and store the password hashes in the new managed domain. For more information, see [Password hash sync process for Domain Services and Microsoft Entra Connect][azure-ad-password-sync].
> [!IMPORTANT]
> Microsoft Entra Connect should only be installed and configured for synchronization with on-premises AD DS environments. It's not supported to install Microsoft Entra Connect in a managed domain to synchronize objects back to Microsoft Entra ID.
Once appropriately configured, the usable password hashes are stored in the mana
A *forest* is a logical construct used by Active Directory Domain Services (AD DS) to group one or more *domains*. The domains then store objects such as users or groups, and provide authentication services.
-In Microsoft Entra DS, the forest only contains one domain. On-premises AD DS forests often contain many domains. In large organizations, especially after mergers and acquisitions, you may end up with multiple on-premises forests that each then contain multiple domains.
+In Domain Services, the forest only contains one domain. On-premises AD DS forests often contain many domains. In large organizations, especially after mergers and acquisitions, you may end up with multiple on-premises forests that each then contain multiple domains.
-By default, a managed domain is created as a *user* forest. This type of forest synchronizes all objects from Microsoft Entra ID, including any user accounts created in an on-premises AD DS environment. User accounts can directly authenticate against the managed domain, such as to sign in to a domain-joined VM. A user forest works when the password hashes can be synchronized and users aren't using exclusive sign-in methods like smart card authentication.
+By default, a managed domain synchronizes all objects from Microsoft Entra ID, including any user accounts created in an on-premises AD DS environment. User accounts can directly authenticate against the managed domain, such as to sign in to a domain-joined VM. This approach works when the password hashes can be synchronized and users aren't using exclusive sign-in methods like smart card authentication.
-In a Microsoft Entra DS *resource* forest, users authenticate over a one-way forest *trust* from their on-premises AD DS. With this approach, the user objects and password hashes aren't synchronized to Microsoft Entra DS. The user objects and credentials only exist in the on-premises AD DS. This approach lets enterprises host resources and application platforms in Azure that depend on classic authentication such LDAPS, Kerberos, or NTLM, but any authentication issues or concerns are removed.
+In Domain Services, you can also create a one-way forest *trust* to let users sign in from their on-premises AD DS. With this approach, the user objects and password hashes aren't synchronized to Domain Services. The user objects and credentials only exist in the on-premises AD DS. This approach lets enterprises host resources and application platforms in Azure that depend on classic authentication such as LDAPS, Kerberos, or NTLM, without synchronizing user credentials to the managed domain.
<a name='azure-ad-ds-skus'></a>
-## Microsoft Entra DS SKUs
+## Domain Services SKUs
-In Microsoft Entra DS, the available performance and features are based on the SKU. You select a SKU when you create the managed domain, and you can switch SKUs as your business requirements change after the managed domain has been deployed. The following table outlines the available SKUs and the differences between them:
+In Domain Services, the available performance and features are based on the SKU. You select a SKU when you create the managed domain, and you can switch SKUs as your business requirements change after the managed domain has been deployed. The following table outlines the available SKUs and the differences between them:
| SKU name | Maximum object count | Backup frequency |
|-|-|-|
In Microsoft Entra DS, the available performance and features are based on the S
| Enterprise | Unlimited | Every 3 days |
| Premium | Unlimited | Daily |
-Before these Microsoft Entra DS SKUs, a billing model based on the number of objects (user and computer accounts) in the managed domain was used. There is no longer variable pricing based on the number of objects in the managed domain.
+Before these Domain Services SKUs were introduced, billing was based on the number of objects (user and computer accounts) in the managed domain. Pricing no longer varies based on the number of objects in the managed domain.
-For more information, see the [Microsoft Entra DS pricing page][pricing].
+For more information, see the [Domain Services pricing page][pricing].
### Managed domain performance
As the SKU level increases, the frequency of those backup snapshots increases. R
## Next steps
-To get started, [create a Microsoft Entra DS managed domain][create-instance].
+To get started, [create a Domain Services managed domain][create-instance].
<!-- INTERNAL LINKS -->
[password-policy]: password-policy.md
active-directory-domain-services Alert Ldaps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-ldaps.md
# Known issues: Secure LDAP alerts in Microsoft Entra Domain Services
-Applications and services that use lightweight directory access protocol (LDAP) to communicate with Microsoft Entra Domain Services (Microsoft Entra DS) can be [configured to use secure LDAP](tutorial-configure-ldaps.md). An appropriate certificate and required network ports must be open for secure LDAP to work correctly.
+Applications and services that use lightweight directory access protocol (LDAP) to communicate with Microsoft Entra Domain Services can be [configured to use secure LDAP](tutorial-configure-ldaps.md). An appropriate certificate and required network ports must be open for secure LDAP to work correctly.
-This article helps you understand and resolve common alerts with secure LDAP access in Microsoft Entra DS.
+This article helps you understand and resolve common alerts with secure LDAP access in Domain Services.
## AADDS101: Secure LDAP network configuration
This article helps you understand and resolve common alerts with secure LDAP acc
### Resolution
-When you enable secure LDAP, it's recommended to create additional rules that restrict inbound LDAPS access to specific IP addresses. These rules protect the managed domain from brute force attacks. To update the network security group to restrict TCP port 636 access for secure LDAP, complete the following steps:
+When you enable secure LDAP, it's recommended to create extra rules that restrict inbound LDAPS access to specific IP addresses. These rules protect the managed domain from brute force attacks. To update the network security group to restrict TCP port 636 access for secure LDAP, complete the following steps:
1. In the [Microsoft Entra admin center](https://entra.microsoft.com), search for and select **Network security groups**.
1. Choose the network security group associated with your managed domain, such as *AADDS-contoso.com-NSG*, then select **Inbound security rules**.
When you enable secure LDAP, it's recommended to create additional rules that re
The managed domain's health automatically updates itself within two hours and removes the alert.

> [!TIP]
-> TCP port 636 isn't the only rule needed for Microsoft Entra DS to run smoothly. To learn more, see the [Microsoft Entra DS Network security groups and required ports](network-considerations.md#network-security-groups-and-required-ports).
+> TCP port 636 isn't the only rule needed for Domain Services to run smoothly. To learn more, see the [Domain Services Network security groups and required ports](network-considerations.md#network-security-groups-and-required-ports).
## AADDS502: Secure LDAP certificate expiring
The managed domain's health automatically updates itself within two hours and re
### Resolution
-Create a replacement secure LDAP certificate by following the steps to [create a certificate for secure LDAP](tutorial-configure-ldaps.md#create-a-certificate-for-secure-ldap). Apply the replacement certificate to Microsoft Entra DS, and distribute the certificate to any clients that connect using secure LDAP.
+Create a replacement secure LDAP certificate by following the steps to [create a certificate for secure LDAP](tutorial-configure-ldaps.md#create-a-certificate-for-secure-ldap). Apply the replacement certificate to Domain Services, and distribute the certificate to any clients that connect using secure LDAP.
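For testing, a replacement self-signed certificate can be generated from the command line — a sketch using OpenSSL rather than the PowerShell `New-SelfSignedCertificate` approach the tutorial uses; the file names and the `aaddscontoso.com` domain are illustrative, and production domains should use a certificate from a trusted CA:

```shell
# Sketch: self-signed wildcard certificate for secure LDAP testing.
# Assumes OpenSSL 1.1.1+ (for -addext); domain and file names are examples.
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout ldaps.key -out ldaps.crt \
  -subj "/CN=*.aaddscontoso.com" \
  -addext "subjectAltName=DNS:*.aaddscontoso.com"
```

The certificate must then be exported with its private key (PFX format) before it can be applied to the managed domain.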
## Next steps
-If you still have issues, [open an Azure support request][azure-support] for additional troubleshooting assistance.
+If you still have issues, [open an Azure support request][azure-support] for more troubleshooting help.
<!-- INTERNAL LINKS -->
[azure-support]: ../active-directory/fundamentals/how-to-get-support.md
active-directory-domain-services Alert Nsg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-nsg.md
Title: Resolve network security group alerts in Microsoft Entra DS | Microsoft Docs
+ Title: Resolve network security group alerts in Microsoft Entra Domain Services | Microsoft Docs
description: Learn how to troubleshoot and resolve network security group configuration alerts for Microsoft Entra Domain Services
# Known issues: Network configuration alerts in Microsoft Entra Domain Services
-To let applications and services correctly communicate with a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain, specific network ports must be open to allow traffic to flow. In Azure, you control the flow of traffic using network security groups. The health status of a Microsoft Entra DS managed domain shows an alert if the required network security group rules aren't in place.
+To let applications and services correctly communicate with a Microsoft Entra Domain Services managed domain, specific network ports must be open to allow traffic to flow. In Azure, you control the flow of traffic using network security groups. The health status of a Domain Services managed domain shows an alert if the required network security group rules aren't in place.
This article helps you understand and resolve common alerts for network security group configuration issues.
This article helps you understand and resolve common alerts for network security
*Microsoft is unable to reach the domain controllers for this managed domain. This may happen if a network security group (NSG) configured on your virtual network blocks access to the managed domain. Another possible reason is if there is a user-defined route that blocks incoming traffic from the internet.*
-Invalid network security group rules are the most common cause of network errors for Microsoft Entra DS. The network security group for the virtual network must allow access to specific ports and protocols. If these ports are blocked, the Azure platform can't monitor or update the managed domain. The synchronization between the Microsoft Entra directory and Microsoft Entra DS is also impacted. Make sure you keep the default ports open to avoid interruption in service.
+Invalid network security group rules are the most common cause of network errors for Domain Services. The network security group for the virtual network must allow access to specific ports and protocols. If these ports are blocked, the Azure platform can't monitor or update the managed domain. The synchronization between the Microsoft Entra directory and Domain Services is also impacted. Make sure you keep the default ports open to avoid interruption in service.
## Default security rules
-The following default inbound and outbound security rules are applied to the network security group for a managed domain. These rules keep Microsoft Entra DS secure and allow the Azure platform to monitor, manage, and update the managed domain.
+The following default inbound and outbound security rules are applied to the network security group for a managed domain. These rules keep Domain Services secure and allow the Azure platform to monitor, manage, and update the managed domain.
### Inbound security rules
The following default inbound and outbound security rules are applied to the net
| 65500 | DenyAllOutBound | Any | Any | Any | Any | Deny |

>[!NOTE]
-> Microsoft Entra DS needs unrestricted outbound access from the virtual network. We don't recommend that you create any additional rules that restrict outbound access for the virtual network.
+> Domain Services needs unrestricted outbound access from the virtual network. We don't recommend that you create any additional rules that restrict outbound access for the virtual network.
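NSG rules are evaluated in ascending priority order and the first matching rule decides, which is why the low-priority catch-all deny at 65500 still lets the higher-priority allow rules through. A minimal sketch of that evaluation — the rule set below is illustrative, not the exact Domain Services defaults:

```python
# Sketch of NSG inbound rule evaluation: rules are checked in ascending
# priority order (lower number first); the first match wins. Illustrative
# rule set, not the real default rules for a managed domain.

def inbound_allowed(rules, port):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["port"] == "Any" or rule["port"] == port:
            return rule["access"] == "Allow"
    return False

rules = [
    {"priority": 201, "port": 443, "access": "Allow"},     # platform management
    {"priority": 401, "port": 636, "access": "Allow"},     # secure LDAP, if enabled
    {"priority": 65500, "port": "Any", "access": "Deny"},  # DenyAllInBound catch-all
]

print(inbound_allowed(rules, 636))  # True
print(inbound_allowed(rules, 80))   # False: only the catch-all deny matches
```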
## Verify and edit existing security rules
active-directory-domain-services Alert Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-service-principal.md
# Known issues: Service principal alerts in Microsoft Entra Domain Services
-[Service principals](../active-directory/develop/app-objects-and-service-principals.md) are applications that the Azure platform uses to manage, update, and maintain a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain. If a service principal is deleted, functionality in the managed domain is impacted.
+[Service principals](../active-directory/develop/app-objects-and-service-principals.md) are applications that the Azure platform uses to manage, update, and maintain a Microsoft Entra Domain Services managed domain. If a service principal is deleted, functionality in the managed domain is impacted.
This article helps you troubleshoot and resolve service principal-related configuration alerts.
The managed domain's health automatically updates itself within two hours and re
*The service principal with the application ID "d87dcbc6-a371-462e-88e3-28ad15ec4e64" was deleted and then recreated. The recreation leaves behind inconsistent permissions on Microsoft Entra Domain Services resources needed to service your managed domain. Synchronization of passwords on your managed domain could be affected.*
-Microsoft Entra DS automatically synchronizes user accounts and credentials from Microsoft Entra ID. If there's a problem with the Microsoft Entra application used for this process, credential synchronization between Microsoft Entra DS and Microsoft Entra ID fails.
+Domain Services automatically synchronizes user accounts and credentials from Microsoft Entra ID. If there's a problem with the Microsoft Entra application used for this process, credential synchronization between Domain Services and Microsoft Entra ID fails.
### Resolution
active-directory-domain-services Change Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/change-sku.md
# Change the SKU for an existing Microsoft Entra Domain Services managed domain
-In Microsoft Entra Domain Services (Microsoft Entra DS), the available performance and features are based on the SKU type. These feature differences include the backup frequency or maximum number of one-way outbound forest trusts.
+In Microsoft Entra Domain Services, the available performance and features are based on the SKU type. These feature differences include the backup frequency or maximum number of one-way outbound forest trusts.
-You select a SKU when you create the managed domain, and you can switch SKUs up or down as your business needs change after the managed domain has been deployed. Changes in business requirements could include the need for more frequent backups or to create additional forest trusts. For more information on the limits and pricing of the different SKUs, see [Microsoft Entra DS SKU concepts][concepts-sku] and [Microsoft Entra DS pricing][pricing] pages.
+You select a SKU when you create the managed domain, and you can switch SKUs up or down as your business needs change after the managed domain has been deployed. Changes in business requirements could include the need for more frequent backups or to create additional forest trusts. For more information on the limits and pricing of the different SKUs, see the [Domain Services SKU concepts][concepts-sku] and [Domain Services pricing][pricing] pages.
-This article shows you how to change the SKU for an existing Microsoft Entra DS managed domain using the [Microsoft Entra admin center](https://entra.microsoft.com).
+This article shows you how to change the SKU for an existing Domain Services managed domain using the [Microsoft Entra admin center](https://entra.microsoft.com).
## Before you begin
You can change SKUs up or down after the managed domain has been deployed. Howev
For example, if you have created seven trusts on the *Premium* SKU, you can't change down to the *Enterprise* SKU. The *Enterprise* SKU supports a maximum of five trusts.
-For more information on these limits, see [Microsoft Entra DS SKU features and limits][concepts-sku].
+For more information on these limits, see [Domain Services SKU features and limits][concepts-sku].
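The downgrade constraint above can be expressed as a simple check: the existing trust count must fit within the target SKU's limit. The Enterprise limit of five comes from this article; the Standard and Premium values below are assumptions for illustration:

```python
# Sketch: whether a SKU change is allowed given existing forest trusts.
# Enterprise's limit of five trusts is stated in the article; the Standard
# and Premium limits here are assumed values for illustration only.

TRUST_LIMITS = {"Standard": 0, "Enterprise": 5, "Premium": 10}

def can_change_sku(target_sku, current_trust_count):
    """A managed domain can move to a SKU only if its existing trusts
    fit within the target SKU's trust limit."""
    return current_trust_count <= TRUST_LIMITS[target_sku]

print(can_change_sku("Enterprise", 7))  # False: seven trusts exceed the limit of five
print(can_change_sku("Premium", 7))     # True
```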
## Select a new SKU

To change the SKU for a managed domain using the [Microsoft Entra admin center](https://entra.microsoft.com), complete the following steps:

1. In the [Microsoft Entra admin center](https://entra.microsoft.com), search for and select **Microsoft Entra Domain Services**. Choose your managed domain from the list, such as *aaddscontoso.com*.
-1. In the menu on the left-hand side of the Microsoft Entra DS page, select **Settings > SKU**.
+1. In the menu on the left-hand side of the Domain Services page, select **Settings > SKU**.
- ![Select the SKU menu option for your Microsoft Entra DS managed domain in the Microsoft Entra admin center](media/change-sku/overview-change-sku.png)
+ ![Select the SKU menu option for your Domain Services managed domain in the Microsoft Entra admin center](media/change-sku/overview-change-sku.png)
1. From the drop-down menu, select the SKU you want for your managed domain. If you have a resource forest, you can't select the *Standard* SKU because forest trusts are only available on the *Enterprise* SKU or higher.
It can take a minute or two to change the SKU type.
## Next steps
-If you have a resource forest and want to create additional trusts after the SKU change, see [Create an outbound forest trust to an on-premises domain in Microsoft Entra DS][create-trust].
+If you have a resource forest and want to create additional trusts after the SKU change, see [Create an outbound forest trust to an on-premises domain in Domain Services][create-trust].
<!-- INTERNAL LINKS -->
[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
active-directory-domain-services Check Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/check-health.md
Title: Check the health of Microsoft Entra Domain Services | Microsoft Docs
-description: Learn how to check the health of a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain and understand status messages.
+description: Learn how to check the health of a Microsoft Entra Domain Services managed domain and understand status messages.
# Check the health of a Microsoft Entra Domain Services managed domain
-Microsoft Entra Domain Services (Microsoft Entra DS) runs some background tasks to keep the managed domain healthy and up-to-date. These tasks include taking backups, applying security updates, and synchronizing data from Microsoft Entra ID. If there are issues with the Microsoft Entra DS managed domain, these tasks may not successfully complete. To review and resolve any issues, you can check the health status of a managed domain using the Microsoft Entra admin center.
+Microsoft Entra Domain Services runs some background tasks to keep the managed domain healthy and up-to-date. These tasks include taking backups, applying security updates, and synchronizing data from Microsoft Entra ID. If there are issues with the Domain Services managed domain, these tasks may not successfully complete. To review and resolve any issues, you can check the health status of a managed domain using the Microsoft Entra admin center.
-This article shows you how to view the Microsoft Entra DS health status and understand the information or alerts shown.
+This article shows you how to view the Domain Services health status and understand the information or alerts shown.
## View the health status
The health status for a managed domain is viewed using the Microsoft Entra admin
1. Sign in to [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator).
1. Search for and select **Microsoft Entra Domain Services**.
1. Select your managed domain, such as *aaddscontoso.com*.
-1. On the left-hand side of the Microsoft Entra DS resource window, select **Health**. The following example screenshot shows a healthy managed domain and the status of the last backup and Azure AD synchronization:
+1. On the left-hand side of the Domain Services resource window, select **Health**. The following example screenshot shows a healthy managed domain and the status of the last backup and Azure AD synchronization:
![Health page overview showing the Microsoft Entra Domain Services status](./media/check-health/health-page.png)
The health status for a managed domain shows two types of information - *monitors
### Monitors
-Monitors are areas of a managed domain that are checked on a regular basis. If there are any active alerts for the managed domain, it may cause one of the monitors to report an issue. Microsoft Entra DS currently has monitors for the following areas:
+Monitors are areas of a managed domain that are checked on a regular basis. If there are any active alerts for the managed domain, it may cause one of the monitors to report an issue. Domain Services currently has monitors for the following areas:
* Backup * Synchronization with Microsoft Entra ID
active-directory-domain-services Compare Identity Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/compare-identity-solutions.md
Title: Compare Active Directory-based services in Azure | Microsoft Docs
+ Title: Compare Microsoft directory-based services | Microsoft Docs
description: In this overview, you compare the different identity offerings for Active Directory Domain Services, Microsoft Entra ID, and Microsoft Entra Domain Services.
Last updated 09/13/2023
-#Customer intent: As an IT administrator or decision maker, I want to understand the differences between Active Directory Domain Services (AD DS), Microsoft Entra ID, and Microsoft Entra DS so I can choose the most appropriate identity solution for my organization.
+#Customer intent: As an IT administrator or decision maker, I want to understand the differences between Active Directory Domain Services (AD DS), Microsoft Entra ID, and Domain Services so I can choose the most appropriate identity solution for my organization.
# Compare self-managed Active Directory Domain Services, Microsoft Entra ID, and managed Microsoft Entra Domain Services
Although the three Active Directory-based identity solutions share a common name
* **Microsoft Entra ID** - Cloud-based identity and mobile device management that provides user account and authentication services for resources such as Microsoft 365, the Microsoft Entra admin center, or SaaS applications.
  * Microsoft Entra ID can be synchronized with an on-premises AD DS environment to provide a single identity to users that works natively in the cloud.
  * For more information about Microsoft Entra ID, see [What is Microsoft Entra ID?][whatis-azuread]
-* **Microsoft Entra Domain Services (Microsoft Entra DS)** - Provides managed domain services with a subset of fully compatible traditional AD DS features such as domain join, group policy, LDAP, and Kerberos / NTLM authentication.
- * Microsoft Entra DS integrates with Microsoft Entra ID, which itself can synchronize with an on-premises AD DS environment. This ability extends central identity use cases to traditional web applications that run in Azure as part of a lift-and-shift strategy.
+* **Microsoft Entra Domain Services** - Provides managed domain services with a subset of fully compatible traditional AD DS features such as domain join, group policy, LDAP, and Kerberos / NTLM authentication.
+ * Domain Services integrates with Microsoft Entra ID, which itself can synchronize with an on-premises AD DS environment. This ability extends central identity use cases to traditional web applications that run in Azure as part of a lift-and-shift strategy.
  * To learn more about synchronization with Microsoft Entra ID and on-premises, see [How objects and credentials are synchronized in a managed domain][synchronization].

This overview article compares and contrasts how these identity solutions can work together, or would be used independently, depending on the needs of your organization.

> [!div class="nextstepaction"]
-> [To get started, create a Microsoft Entra DS managed domain using the Microsoft Entra admin center][tutorial-create]
+> [To get started, create a Domain Services managed domain using the Microsoft Entra admin center][tutorial-create]
<a name='azure-ad-ds-and-self-managed-ad-ds'></a>
-## Microsoft Entra DS and self-managed AD DS
+## Domain Services and self-managed AD DS
If you have applications and services that need access to traditional authentication mechanisms such as Kerberos or NTLM, there are two ways to provide Active Directory Domain Services in the cloud:
-* A *managed domain* that you create using Microsoft Entra Domain Services (Microsoft Entra DS). Microsoft creates and manages the required resources.
+* A *managed domain* that you create using Microsoft Entra Domain Services. Microsoft creates and manages the required resources.
* A *self-managed* domain that you create and configure using traditional resources such as virtual machines (VMs), Windows Server guest OS, and Active Directory Domain Services (AD DS). You then continue to administer these resources.
-With Microsoft Entra DS, the core service components are deployed and maintained for you by Microsoft as a *managed* domain experience. You don't deploy, manage, patch, and secure the AD DS infrastructure for components like the VMs, Windows Server OS, or domain controllers (DCs).
+With Domain Services, the core service components are deployed and maintained for you by Microsoft as a *managed* domain experience. You don't deploy, manage, patch, and secure the AD DS infrastructure for components like the VMs, Windows Server OS, or domain controllers (DCs).
-Microsoft Entra DS provides a smaller subset of features to traditional self-managed AD DS environment, which reduces some of the design and management complexity. For example, there are no AD forests, domain, sites, and replication links to design and maintain. You can still [create forest trusts between Microsoft Entra DS and on-premises environments][create-forest-trust].
+Domain Services provides a smaller subset of features than a traditional self-managed AD DS environment, which reduces some of the design and management complexity. For example, there are no AD forests, domains, sites, or replication links to design and maintain. You can still [create forest trusts between Domain Services and on-premises environments][create-forest-trust].
-For applications and services that run in the cloud and need access to traditional authentication mechanisms such as Kerberos or NTLM, Microsoft Entra DS provides a managed domain experience with the minimal amount of administrative overhead. For more information, see [Management concepts for user accounts, passwords, and administration in Microsoft Entra DS][administration-concepts].
+For applications and services that run in the cloud and need access to traditional authentication mechanisms such as Kerberos or NTLM, Domain Services provides a managed domain experience with minimal administrative overhead. For more information, see [Management concepts for user accounts, passwords, and administration in Domain Services][administration-concepts].
When you deploy and run a self-managed AD DS environment, you have to maintain all of the associated infrastructure and directory components. There's additional maintenance overhead with a self-managed AD DS environment, but you're then able to do additional tasks such as extend the schema or create forest trusts.
Common deployment models for a self-managed AD DS environment that provides iden
* **Extend on-premises domain to Azure** - An Azure virtual network connects to an on-premises network using a VPN / ExpressRoute connection. Azure VMs connect to this Azure virtual network, which lets them domain-join to the on-premises AD DS environment.
  * An alternative is to create Azure VMs and promote them as replica domain controllers from the on-premises AD DS domain. These domain controllers replicate over a VPN / ExpressRoute connection to the on-premises AD DS environment. The on-premises AD DS domain is effectively extended into Azure.
-The following table outlines some of the features you may need for your organization, and the differences between a managed Microsoft Entra DS domain or a self-managed AD DS domain:
+The following table outlines some of the features you may need for your organization, and the differences between a managed domain or a self-managed AD DS domain:
-| **Feature** | **Microsoft Entra DS** | **Self-managed AD DS** |
+| **Feature** | **Managed domain** | **Self-managed AD DS** |
| -- |:-:|:-:|
| **Managed service** | **&#x2713;** | **&#x2715;** |
| **Secure deployments** | **&#x2713;** | Administrator secures the deployment |
The following table outlines some of the features you may need for your organiza
<a name='azure-ad-ds-and-azure-ad'></a>
-## Microsoft Entra DS and Microsoft Entra ID
+## Domain Services and Microsoft Entra ID
Microsoft Entra ID lets you manage the identity of devices used by the organization and control access to corporate resources from those devices. Users can also register their personal device (a bring-your-own (BYO) model) with Microsoft Entra ID, which provides the device with an identity. Microsoft Entra ID then authenticates the device when a user signs in to Microsoft Entra ID and uses the device to access secured resources. The device can be managed using Mobile Device Management (MDM) software like Microsoft Intune. This management ability lets you restrict access to sensitive resources to managed and policy-compliant devices.
Devices can be joined to Microsoft Entra ID with or without a hybrid deployment
On a Microsoft Entra joined or registered device, user authentication happens using modern OAuth / OpenID Connect based protocols. These protocols are designed to work over the internet, so are great for mobile scenarios where users access corporate resources from anywhere.
-With Microsoft Entra DS-joined devices, applications can use the Kerberos and NTLM protocols for authentication, so can support legacy applications migrated to run on Azure VMs as part of a lift-and-shift strategy. The following table outlines differences in how the devices are represented and can authenticate themselves against the directory:
+With Domain Services-joined devices, applications can use the Kerberos and NTLM protocols for authentication, so can support legacy applications migrated to run on Azure VMs as part of a lift-and-shift strategy. The following table outlines differences in how the devices are represented and can authenticate themselves against the directory:
-| **Aspect** | **Microsoft Entra joined** | **Microsoft Entra DS-joined** |
+| **Aspect** | **Microsoft Entra joined** | **Domain Services-joined** |
|:--|:--|:--|
-| Device controlled by | Microsoft Entra ID | Microsoft Entra DS managed domain |
-| Representation in the directory | Device objects in the Microsoft Entra directory | Computer objects in the Microsoft Entra DS managed domain |
+| Device controlled by | Microsoft Entra ID | Domain Services managed domain |
+| Representation in the directory | Device objects in the Microsoft Entra directory | Computer objects in the Domain Services managed domain |
| Authentication | OAuth / OpenID Connect based protocols | Kerberos and NTLM protocols |
| Management | Mobile Device Management (MDM) software like Intune | Group Policy |
| Networking | Works over the internet | Must be connected to, or peered with, the virtual network where the managed domain is deployed |
| Great for... | End-user mobile or desktop devices | Server VMs deployed in Azure |
-If on-premises AD DS and Microsoft Entra ID are configured for federated authentication using AD FS, then there's no (current/valid) password hash available in Azure DS. Microsoft Entra user accounts created before fed auth was implemented might have an old password hash but this likely doesn't match a hash of their on-premises password. Hence Microsoft Entra DS won't be able to validate the users credentials
+If on-premises AD DS and Microsoft Entra ID are configured for federated authentication using AD FS, then there's no current, valid password hash available in Domain Services. Microsoft Entra user accounts created before federated authentication was implemented might have an old password hash, but it likely doesn't match a hash of their on-premises password. As a result, Domain Services can't validate the user's credentials.
## Next steps
-To get started with using Microsoft Entra DS, [create a Microsoft Entra DS managed domain using the Microsoft Entra admin center][tutorial-create].
+To get started with using Domain Services, [create a Domain Services managed domain using the Microsoft Entra admin center][tutorial-create].
You can also learn more about
-[management concepts for user accounts, passwords, and administration in Microsoft Entra DS][administration-concepts] and [how objects and credentials are synchronized in a managed domain][synchronization].
+[management concepts for user accounts, passwords, and administration in Domain Services][administration-concepts] and [how objects and credentials are synchronized in a managed domain][synchronization].
<!-- INTERNAL LINKS --> [manage-dns]: manage-dns.md
active-directory-domain-services Concepts Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-custom-attributes.md
Title: Create and manage custom attributes for Microsoft Entra Domain Services | Microsoft Docs
-description: Learn how to create and manage custom attributes in a Microsoft Entra DS managed domain.
+description: Learn how to create and manage custom attributes in a Domain Services managed domain.
Previously updated : 03/07/2023 Last updated : 09/21/2023
For various reasons, companies often canΓÇÖt modify code for legacy apps. For example, apps may use a custom attribute, such as a custom employee ID, and rely on that attribute for LDAP operations.
-Microsoft Entra ID supports adding custom data to resources using [extensions](/graph/extensibility-overview). Microsoft Entra Domain Services (Microsoft Entra DS) can synchronize the following types of extensions from Microsoft Entra ID, so you can also use apps that depend on custom attributes with Microsoft Entra DS:
+Microsoft Entra ID supports adding custom data to resources using [extensions](/graph/extensibility-overview). Microsoft Entra Domain Services can synchronize the following types of extensions from Microsoft Entra ID, so you can also use apps that depend on custom attributes with Domain Services:
- [onPremisesExtensionAttributes](/graph/extensibility-overview?tabs=http#extension-attributes) are a set of 15 attributes that can store extended user string attributes.
- [Directory extensions](/graph/extensibility-overview?tabs=http#directory-azure-ad-extensions) allow the schema extension of specific directory objects, such as users and groups, with strongly typed attributes through registration with an application in the tenant.
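As an illustration of the first extension type, one of the 15 `onPremisesExtensionAttributes` can be set on a cloud-only user through Microsoft Graph. This sketch assumes the Microsoft Graph PowerShell SDK is installed; the user and the value are hypothetical examples, not part of the article:

```powershell
# Hypothetical example: store a custom employee ID in extensionAttribute1
# so it can synchronize into the managed domain.
Connect-MgGraph -Scopes "User.ReadWrite.All"
Update-MgUser -UserId "user@contoso.onmicrosoft.com" `
    -OnPremisesExtensionAttributes @{ extensionAttribute1 = "EMP-0001" }
```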
Click **Select**, and then **Save** to confirm the change.
:::image type="content" border="true" source="./media/concepts-custom-attributes/select.png" alt-text="Screenshot of how to save directory extension attributes.":::
-Microsoft Entra DS back fills all synchronized users and groups with the onboarded custom attribute values. The custom attribute values gradually populate for objects that contain the directory extension in Microsoft Entra ID. During the backfill synchronization process, incremental changes in Microsoft Entra ID are paused, and the sync time depends on the size of the tenant.
+Domain Services backfills all synchronized users and groups with the onboarded custom attribute values. The custom attribute values gradually populate for objects that contain the directory extension in Microsoft Entra ID. During the backfill synchronization process, incremental changes in Microsoft Entra ID are paused, and the sync time depends on the size of the tenant.
-To check the backfilling status, click **Microsoft Entra DS Health** and verify the **Synchronization with Microsoft Entra ID** monitor has an updated timestamp within an hour since onboarding. Once updated, the backfill is complete.
+To check the backfilling status, click **Domain Services Health** and verify the **Synchronization with Microsoft Entra ID** monitor has an updated timestamp within an hour since onboarding. Once updated, the backfill is complete.
## Next steps
active-directory-domain-services Concepts Forest Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-forest-trust.md
The access control mechanisms provided by AD DS and the Windows distributed secu
The trust path is implemented by the Net Logon service using an authenticated remote procedure call (RPC) connection to the trusted domain authority. A secured channel also extends to other AD DS domains through interdomain trust relationships. This secured channel is used to obtain and verify security information, including security identifiers (SIDs) for users and groups.

>[!NOTE]
->Microsoft Entra DS only supports one-way transitive trusts where the managed domain will trust other domains, but no other directions or trust types are supported.
+>Domain Services only supports one-way transitive trusts where the managed domain will trust other domains, but no other directions or trust types are supported.
-For an overview of how trusts apply to Microsoft Entra DS, see [Forest concepts and features][create-forest-trust].
+For an overview of how trusts apply to Domain Services, see [Forest concepts and features][create-forest-trust].
-To get started using trusts in Microsoft Entra DS, [create a managed domain that uses forest trusts][tutorial-create-advanced].
+To get started using trusts in Domain Services, [create a managed domain that uses forest trusts][tutorial-create-advanced].
## Trust relationship flows
When two forests are connected by a forest trust, authentication requests made u
When a forest trust is first established, each forest collects all of the trusted namespaces in its partner forest and stores the information in a [trusted domain object](#trusted-domain-object). Trusted namespaces include domain tree names, user principal name (UPN) suffixes, service principal name (SPN) suffixes, and security ID (SID) namespaces used in the other forest. TDO objects are replicated to the global catalog.

>[!NOTE]
->Alternate UPN suffixes on trusts are not supported. If an on-premises domain uses the same UPN suffix as Microsoft Entra DS, sign in must use **sAMAccountName**.
+>Alternate UPN suffixes on trusts are not supported. If an on-premises domain uses the same UPN suffix as Domain Services, sign in must use **sAMAccountName**.
Before authentication protocols can follow the forest trust path, the service principal name (SPN) of the resource computer must be resolved to a location in the other forest. An SPN can be one of the following names:
Administrators can use *Active Directory Domains and Trusts*, *Netdom* and *Nlte
## Next steps
-To get started with creating a managed domain with a forest trust, see [Create and configure a Microsoft Entra DS managed domain][tutorial-create-advanced]. You can then [Create an outbound forest trust to an on-premises domain][create-forest-trust].
+To get started with creating a managed domain with a forest trust, see [Create and configure a Domain Services managed domain][tutorial-create-advanced]. You can then [Create an outbound forest trust to an on-premises domain][create-forest-trust].
<!-- LINKS - INTERNAL --> [tutorial-create-advanced]: tutorial-create-instance-advanced.md
active-directory-domain-services Concepts Replica Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-replica-sets.md
Previously updated : 01/29/2023 Last updated : 09/23/2023

# Replica sets concepts and features for Microsoft Entra Domain Services
-When you create a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain, you define a unique namespace. This namespace is the domain name, such as *aaddscontoso.com*, and two domain controllers (DCs) are then deployed into your selected Azure region. This deployment of DCs is known as a replica set.
+When you create a Microsoft Entra Domain Services managed domain, you define a unique namespace. This namespace is the domain name, such as *aaddscontoso.com*, and two domain controllers (DCs) are then deployed into your selected Azure region. This deployment of DCs is known as a replica set.
-You can expand a managed domain to have more than one replica set per Microsoft Entra tenant. Replica sets can be added to any peered virtual network in any Azure region that supports Microsoft Entra DS. Additional replica sets in different Azure regions provide geographical disaster recovery for legacy applications if an Azure region goes offline.
+You can expand a managed domain to have more than one replica set per Microsoft Entra tenant. Replica sets can be added to any peered virtual network in any Azure region that supports Domain Services. Additional replica sets in different Azure regions provide geographical disaster recovery for legacy applications if an Azure region goes offline.
> [!NOTE]
> Replica sets don't let you deploy multiple unique managed domains in a single Azure tenant. Each replica set contains the same data.

## How replica sets work
-When you create a managed domain, such as *aaddscontoso.com*, an initial replica set is created. Additional replica sets share the same namespace and configuration. Changes to Microsoft Entra DS, including configuration, user identity and credentials, groups, group policy objects, computer objects, and other changes are applied to all replica sets in the managed domain using AD DS replication.
+When you create a managed domain, such as *aaddscontoso.com*, an initial replica set is created. Additional replica sets share the same namespace and configuration. Changes to Domain Services, including configuration, user identity and credentials, groups, group policy objects, computer objects, and other changes are applied to all replica sets in the managed domain using AD DS replication.
You create each replica set in a virtual network. Each virtual network must be peered to every other virtual network that hosts a managed domain's replica set. This configuration creates a mesh network topology that supports directory replication. A virtual network can support multiple replica sets, provided that each replica set is in a different virtual subnet.
Changes within the managed domain work just like they previously did. You [creat
## Next steps
-To get started with replica sets, [create and configure a Microsoft Entra DS managed domain][tutorial-create-advanced]. When deployed, [create and use additional replica sets][create-replica-set].
+To get started with replica sets, [create and configure a Domain Services managed domain][tutorial-create-advanced]. When deployed, [create and use additional replica sets][create-replica-set].
<!-- LINKS - INTERNAL --> [tutorial-create-advanced]: tutorial-create-instance-advanced.md
active-directory-domain-services Create Forest Trust Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-forest-trust-powershell.md
# Create a Microsoft Entra Domain Services forest trust to an on-premises domain using Azure PowerShell
-In environments where you can't synchronize password hashes, or you have users that exclusively sign in using smart cards so they don't know their password, you can create a one-way outbound trust from Microsoft Entra Domain Services (Microsoft Entra DS) to one or more on-premises AD DS environments. This trust relationship lets users, applications, and computers authenticate against an on-premises domain from the Microsoft Entra DS managed domain. In this case, on-premises password hashes are never synchronized.
+In environments where you can't synchronize password hashes, or you have users that exclusively sign in using smart cards so they don't know their password, you can create a one-way outbound trust from Microsoft Entra Domain Services to one or more on-premises AD DS environments. This trust relationship lets users, applications, and computers authenticate against an on-premises domain from the Domain Services managed domain. In this case, on-premises password hashes are never synchronized.
-![Diagram of forest trust from Microsoft Entra DS to on-premises AD DS](./media/create-forest-powershell/forest-trust-relationship.png)
+![Diagram of forest trust from Domain Services to on-premises AD DS](./media/create-forest-powershell/forest-trust-relationship.png)
In this article, you learn how to:

> [!div class="checklist"]
-> * Create a Microsoft Entra DS forest using Azure PowerShell
+> * Create a Domain Services forest using Azure PowerShell
> * Create a one-way outbound forest trust in the managed domain using Azure PowerShell
> * Configure DNS in an on-premises AD DS environment to support managed domain connectivity
> * Create a one-way inbound forest trust in an on-premises AD DS environment
To complete this article, you need the following resources and privileges:
* Install and configure Azure AD PowerShell.
  * If needed, follow the instructions to [install the Azure AD PowerShell module and connect to Microsoft Entra ID](/powershell/azure/active-directory/install-adv2).
  * Make sure that you sign in to your Microsoft Entra tenant using the [Connect-AzureAD][Connect-AzureAD] cmdlet.
-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Microsoft Entra DS.
-* You need [Domain Services Contributor](../role-based-access-control/built-in-roles.md#contributor) Azure role to create the required Microsoft Entra DS resources.
+* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services.
+* You need [Domain Services Contributor](../role-based-access-control/built-in-roles.md#contributor) Azure role to create the required Domain Services resources.
## Sign in to the Microsoft Entra admin center
Before you start, make sure you understand the [network considerations, forest n
## Create the Microsoft Entra service principal
-Microsoft Entra DS requires a service principal synchronize data from Microsoft Entra ID. This principal must be created in your Microsoft Entra tenant before you created the managed domain forest.
+Domain Services requires a service principal to synchronize data from Microsoft Entra ID. This principal must be created in your Microsoft Entra tenant before you create the managed domain forest.
-Create a Microsoft Entra service principal for Microsoft Entra DS to communicate and authenticate itself. A specific application ID is used named *Domain Controller Services* with an ID of *6ba9a5d4-8456-4118-b521-9c5ca10cdf84*. Don't change this application ID.
+Create a Microsoft Entra service principal for Domain Services to communicate and authenticate itself. A specific application named *Domain Controller Services* is used, with an ID of *6ba9a5d4-8456-4118-b521-9c5ca10cdf84*. Don't change this application ID.
Create a Microsoft Entra service principal using the [New-AzureADServicePrincipal][New-AzureADServicePrincipal] cmdlet:
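A minimal sketch of that cmdlet call, using the fixed *Domain Controller Services* application ID given above:

```powershell
# Create the service principal for the Domain Controller Services application.
# The AppId is the fixed application ID from the text - don't change it.
New-AzureADServicePrincipal -AppId "6ba9a5d4-8456-4118-b521-9c5ca10cdf84"
```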
To create a managed domain, you use the `New-AzureAaddsForest` script. This scri
| Name | Script parameter | Description |
|:--|:--|:--|
- | Subscription | *-azureSubscriptionId* | Subscription ID used for Microsoft Entra DS billing. You can get the list of subscriptions using the [Get-AzureRMSubscription][Get-AzureRMSubscription] cmdlet. |
+ | Subscription | *-azureSubscriptionId* | Subscription ID used for Domain Services billing. You can get the list of subscriptions using the [Get-AzureRMSubscription][Get-AzureRMSubscription] cmdlet. |
| Resource Group | *-aaddsResourceGroupName* | Name of the resource group for the managed domain and associated resources. |
- | Location | *-aaddsLocation* | The Azure region to host your managed domain. For available regions, see [supported regions for Microsoft Entra DS.](https://azure.microsoft.com/global-infrastructure/services/?products=active-directory-ds&regions=all) |
- | Microsoft Entra DS administrator | *-aaddsAdminUser* | The user principal name of the first managed domain administrator. This account must be an existing cloud user account in your Microsoft Entra ID. The user, and the user running the script, is added to the *AAD DC Administrators* group. |
- | Microsoft Entra DS domain name | *-aaddsDomainName* | The FQDN of the managed domain, based on the previous guidance on how to choose a forest name. |
+ | Location | *-aaddsLocation* | The Azure region to host your managed domain. For available regions, see [supported regions for Domain Services.](https://azure.microsoft.com/global-infrastructure/services/?products=active-directory-ds&regions=all) |
+ | Domain Services administrator | *-aaddsAdminUser* | The user principal name of the first managed domain administrator. This account must be an existing cloud user account in your Microsoft Entra ID. The user, and the user running the script, is added to the *AAD DC Administrators* group. |
+ | Domain Services domain name | *-aaddsDomainName* | The FQDN of the managed domain, based on the previous guidance on how to choose a forest name. |
- The `New-AzureAaddsForest` script can create the Azure virtual network and Microsoft Entra DS subnet if these resources don't already exist. The script can optionally create the workload subnets, when specified:
+ The `New-AzureAaddsForest` script can create the Azure virtual network and Domain Services subnet if these resources don't already exist. The script can optionally create the workload subnets, when specified:
| Name | Script parameter | Description |
|:-|:-|:-|
| Virtual network name | *-aaddsVnetName* | Name of the virtual network for the managed domain. |
| Address space | *-aaddsVnetCIDRAddressSpace* | Virtual network's address range in CIDR notation (if creating the virtual network). |
- | Microsoft Entra DS subnet name | *-aaddsSubnetName* | Name of the subnet of the *aaddsVnetName* virtual network hosting the managed domain. Don't deploy your own VMs and workloads into this subnet. |
- | Microsoft Entra DS address range | *-aaddsSubnetCIDRAddressRange* | Subnet address range in CIDR notation for the Microsoft Entra DS instance, such as *192.168.1.0/24*. Address range must be contained by the address range of the virtual network, and different from other subnets. |
+ | Domain Services subnet name | *-aaddsSubnetName* | Name of the subnet of the *aaddsVnetName* virtual network hosting the managed domain. Don't deploy your own VMs and workloads into this subnet. |
+ | Domain Services address range | *-aaddsSubnetCIDRAddressRange* | Subnet address range in CIDR notation for the Domain Services instance, such as *192.168.1.0/24*. Address range must be contained by the address range of the virtual network, and different from other subnets. |
| Workload subnet name (optional) | *-workloadSubnetName* | Optional name of a subnet in the *aaddsVnetName* virtual network to create for your own application workloads. VMs and applications can also be connected to a peered Azure virtual network instead. |
| Workload address range (optional) | *-workloadSubnetCIDRAddressRange* | Optional subnet address range in CIDR notation for application workloads, such as *192.168.2.0/24*. Address range must be contained by the address range of the virtual network, and different from other subnets. |
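Putting the parameters above together, an invocation of the `New-AzureAaddsForest` script might look like the following sketch. The parameter names are the ones documented in the tables; every value is a hypothetical placeholder you must replace:

```powershell
# Hypothetical example values - substitute your own subscription ID,
# resource group, region, admin account, domain name, and network ranges.
New-AzureAaddsForest `
    -azureSubscriptionId "00000000-0000-0000-0000-000000000000" `
    -aaddsResourceGroupName "myResourceGroup" `
    -aaddsLocation "WestUS" `
    -aaddsAdminUser "admin@contoso.onmicrosoft.com" `
    -aaddsDomainName "aaddscontoso.com" `
    -aaddsVnetName "myVnet" `
    -aaddsVnetCIDRAddressSpace "192.168.0.0/16" `
    -aaddsSubnetName "AaddsSubnet" `
    -aaddsSubnetCIDRAddressRange "192.168.1.0/24" `
    -workloadSubnetName "WorkloadSubnet" `
    -workloadSubnetCIDRAddressRange "192.168.2.0/24"
```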
Before you start, make sure you understand the [network considerations and recom
1. In the Microsoft Entra admin center, search for and select **Microsoft Entra Domain Services**. Choose your managed domain, such as *aaddscontoso.com* and wait for the status to report as **Running**.
- When running, [update DNS settings for the Azure virtual network](tutorial-create-instance.md#update-dns-settings-for-the-azure-virtual-network) and then [enable user accounts for Microsoft Entra DS](tutorial-create-instance.md#enable-user-accounts-for-azure-ad-ds) to finalize the configurations for your managed domain.
+ When running, [update DNS settings for the Azure virtual network](tutorial-create-instance.md#update-dns-settings-for-the-azure-virtual-network) and then [enable user accounts for Domain Services](tutorial-create-instance.md#enable-user-accounts-for-azure-ad-ds) to finalize the configurations for your managed domain.
1. Make a note of the DNS addresses shown on the overview page. You need these addresses when you configure the on-premises Active Directory side of the trust relationship in a following section.
1. Restart the management VM for it to receive the new DNS settings, then [join the VM to the managed domain](join-windows-vm.md#join-the-vm-to-the-managed-domain).
Now provide the script the following information:
| Name | Script parameter | Description |
|:--|:--|:--|
-| Microsoft Entra DS domain name | *-ManagedDomainFqdn* | FQDN of the managed domain, such as *aaddscontoso.com* |
+| Domain Services domain name | *-ManagedDomainFqdn* | FQDN of the managed domain, such as *aaddscontoso.com* |
| On-premises AD DS domain name | *-TrustFqdn* | The FQDN of the trusted forest, such as *onprem.contoso.com* |
| Trust friendly name | *-TrustFriendlyName* | Friendly name of the trust relationship. |
| On-premises AD DS DNS IP addresses | *-TrustDnsIPs* | A comma-delimited list of DNS server IPv4 addresses for the trusted domain listed. |
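This excerpt doesn't name the trust script itself, so the script name below is a hypothetical placeholder; only the parameter names come from the table above, and all values are example data:

```powershell
# Placeholder script name - use the trust script provided by the tutorial.
# All parameter values are hypothetical examples.
.\Create-ForestTrust.ps1 `
    -ManagedDomainFqdn "aaddscontoso.com" `
    -TrustFqdn "onprem.contoso.com" `
    -TrustFriendlyName "OnPremTrust" `
    -TrustDnsIPs "10.0.1.4,10.0.1.5"
```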
To configure inbound trust on the on-premises AD DS domain, complete the followi
The following common scenarios let you validate that forest trust correctly authenticates users and access to resources:
-* [On-premises user authentication from the Microsoft Entra DS forest](#on-premises-user-authentication-from-the-azure-ad-ds-forest)
-* [Access resources in the Microsoft Entra DS forest as an on-premises user](#access-resources-in-azure-ad-ds-as-an-on-premises-user)
+* [On-premises user authentication from the Domain Services forest](#on-premises-user-authentication-from-the-azure-ad-ds-forest)
+* [Access resources in the Domain Services forest as an on-premises user](#access-resources-in-azure-ad-ds-as-an-on-premises-user)
* [Enable file and printer sharing](#enable-file-and-printer-sharing)
* [Create a security group and add members](#create-a-security-group-and-add-members)
* [Create a file share for cross-forest access](#create-a-file-share-for-cross-forest-access)
The following common scenarios let you validate that forest trust correctly auth
<a name='on-premises-user-authentication-from-the-azure-ad-ds-forest'></a>
-### On-premises user authentication from the Microsoft Entra DS forest
+### On-premises user authentication from the Domain Services forest
You should have a Windows Server virtual machine joined to the managed domain resource domain. Use this virtual machine to test that your on-premises user can authenticate on a virtual machine.
You should have Windows Server virtual machine joined to the managed domain reso
<a name='access-resources-in-azure-ad-ds-as-an-on-premises-user'></a>
-### Access resources in Microsoft Entra DS as an on-premises user
+### Access resources in Domain Services as an on-premises user
Using the Windows Server VM joined to the managed domain, you can test the scenario where users can access resources hosted in the forest when they authenticate from computers in the on-premises domain with users from the on-premises domain. The following examples show you how to create and test various common scenarios.
In this article, you learned how to:
> * Create a one-way inbound forest trust in an on-premises AD DS environment
> * Test and validate the trust relationship for authentication and resource access
-For more conceptual information about forest types in Microsoft Entra DS, see [How do forest trusts work in Microsoft Entra DS?][concepts-trust]
+For more conceptual information about forest types in Domain Services, see [How do forest trusts work in Domain Services?][concepts-trust]
<!-- INTERNAL LINKS --> [concepts-trust]: concepts-forest-trust.md
active-directory-domain-services Create Gmsa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-gmsa.md
Previously updated : 01/29/2023 Last updated : 09/23/2023
Applications and services often need an identity to authenticate themselves with other resources. For example, a web service may need to authenticate with a database service. If an application or service has multiple instances, such as a web server farm, manually creating and configuring the identities for those resources becomes time-consuming.
-Instead, a group managed service account (gMSA) can be created in the Microsoft Entra Domain Services (Microsoft Entra DS) managed domain. The Windows OS automatically manages the credentials for a gMSA, which simplifies the management of large groups of resources.
+Instead, a group managed service account (gMSA) can be created in the Microsoft Entra Domain Services managed domain. The Windows OS automatically manages the credentials for a gMSA, which simplifies the management of large groups of resources.
This article shows you how to create a gMSA in a managed domain using Azure PowerShell.
To complete this article, you need the following resources and privileges:
* If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* A Microsoft Entra Domain Services managed domain enabled and configured in your Microsoft Entra tenant.
 * If needed, complete the tutorial to [create and configure a Microsoft Entra Domain Services managed domain][create-azure-ad-ds-instance].
-* A Windows Server management VM that is joined to the Microsoft Entra DS managed domain.
+* A Windows Server management VM that is joined to the Domain Services managed domain.
 * If needed, complete the tutorial to [create a management VM][tutorial-create-management-vm].

## Managed service accounts overview
For more information, see [group managed service accounts (gMSA) overview][gmsa-
<a name='using-service-accounts-in-azure-ad-ds'></a>
-## Using service accounts in Microsoft Entra DS
+## Using service accounts in Domain Services
As managed domains are locked down and managed by Microsoft, there are some considerations when using service accounts:
* You can't create a service account in the built-in *AADDC Users* or *AADDC Computers* OUs.
 * Instead, [create a custom OU][create-custom-ou] in the managed domain and then create service accounts in that custom OU.
* The Key Distribution Services (KDS) root key is pre-created.
- * The KDS root key is used to generate and retrieve passwords for gMSAs. In Microsoft Entra DS, the KDS root is created for you.
+ * The KDS root key is used to generate and retrieve passwords for gMSAs. In Domain Services, the KDS root is created for you.
 * You don't have privileges to create another, or view the default, KDS root key.

## Create a gMSA
-First, create a custom OU using the [New-ADOrganizationalUnit][New-AdOrganizationalUnit] cmdlet. For more information on creating and managing custom OUs, see [Custom OUs in Microsoft Entra DS][create-custom-ou].
+First, create a custom OU using the [New-ADOrganizationalUnit][New-AdOrganizationalUnit] cmdlet. For more information on creating and managing custom OUs, see [Custom OUs in Domain Services][create-custom-ou].
> [!TIP]
> To complete these steps to create a gMSA, [use your management VM][tutorial-create-management-vm]. This management VM should already have the required AD PowerShell cmdlets and connection to the managed domain.
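The two steps described above (create a custom OU, then create the gMSA inside it) can be sketched with the AD PowerShell cmdlets from the management VM. This is a hedged example: the OU name, domain DN, DNS host name, and the `WebFarmSvc-Hosts$` computer group are placeholder values, not names defined by the article:

```powershell
# Run from the management VM as a member of AAD DC Administrators.
# 1. Create a custom OU to hold service accounts (names are examples).
New-ADOrganizationalUnit -Name "myNewOU" -Path "DC=aaddscontoso,DC=com"

# 2. Create the gMSA in that custom OU.
# PrincipalsAllowedToRetrieveManagedPassword controls which computer
# accounts may retrieve the automatically managed password.
New-ADServiceAccount -Name "WebFarmSvc" `
    -DNSHostName "websvc.aaddscontoso.com" `
    -Path "OU=myNewOU,DC=aaddscontoso,DC=com" `
    -KerberosEncryptionType "AES128,AES256" `
    -PrincipalsAllowedToRetrieveManagedPassword "WebFarmSvc-Hosts$"
```

On each host that runs the service, `Install-ADServiceAccount WebFarmSvc` then makes the account usable without a stored password.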
active-directory-domain-services Create Ou https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-ou.md
Organizational units (OUs) in an Active Directory Domain Services (AD DS) managed domain let you logically group objects such as user accounts, service accounts, or computer accounts. You can then assign administrators to specific OUs, and apply group policy to enforce targeted configuration settings.
-Microsoft Entra DS managed domains include the following two built-in OUs:
+Domain Services managed domains include the following two built-in OUs:
* *AADDC Computers* - contains computer objects for all computers that are joined to the managed domain.
* *AADDC Users* - includes users and groups synchronized in from the Microsoft Entra tenant.
-As you create and run workloads that use Microsoft Entra DS, you may need to create service accounts for applications to authenticate themselves. To organize these service accounts, you often create a custom OU in the managed domain and then create service accounts within that OU.
+As you create and run workloads that use Domain Services, you may need to create service accounts for applications to authenticate themselves. To organize these service accounts, you often create a custom OU in the managed domain and then create service accounts within that OU.
In a hybrid environment, OUs created in an on-premises AD DS environment aren't synchronized to the managed domain. Managed domains use a flat OU structure. All user accounts and groups are stored in the *AADDC Users* container, despite being synchronized from different on-premises domains or forests, even if you've configured a hierarchical OU structure there.
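You can see this flat structure for yourself by enumerating the OUs from a domain-joined management VM. A minimal sketch, assuming the example domain DN `DC=aaddscontoso,DC=com`:

```powershell
# List the OUs in the managed domain to observe the flat structure:
# only the built-in OUs (and any custom OUs you created) appear.
Get-ADOrganizationalUnit -Filter * -SearchBase "DC=aaddscontoso,DC=com" |
    Select-Object Name, DistinguishedName
```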
To complete this article, you need the following resources and privileges:
* If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* A Microsoft Entra Domain Services managed domain enabled and configured in your Microsoft Entra tenant.
 * If needed, complete the tutorial to [create and configure a Microsoft Entra Domain Services managed domain][create-azure-ad-ds-instance].
-* A Windows Server management VM that is joined to the Microsoft Entra DS managed domain.
+* A Windows Server management VM that is joined to the Domain Services managed domain.
 * If needed, complete the tutorial to [create a management VM][tutorial-create-management-vm].
* A user account that's a member of the *Microsoft Entra DC administrators* group in your Microsoft Entra tenant.
active-directory-domain-services Csp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/csp.md
For more information, see the [Azure CSP overview](/partner-center/azure-plan-lp
<a name='benefits-of-using-azure-ad-ds-in-an-azure-csp-subscription'></a>
-## Benefits of using Microsoft Entra DS in an Azure CSP subscription
+## Benefits of using Domain Services in an Azure CSP subscription
-Microsoft Entra Domain Services (Microsoft Entra DS) provides managed domain services such as domain join, group policy, LDAP, Kerberos/NTLM authentication that is fully compatible with Windows Server Active Directory Domain Services. Over the decades, many applications have been built to work against AD using these capabilities. Many independent software vendors (ISVs) have built and deployed applications at their customers' premises. These applications are hard to support since you often require access to the different environments where the applications are deployed. With Azure CSP subscriptions, you have a simpler alternative with the scale and flexibility of Azure.
+Microsoft Entra Domain Services provides managed domain services such as domain join, group policy, LDAP, and Kerberos/NTLM authentication that are fully compatible with Windows Server Active Directory Domain Services. Over the decades, many applications have been built to work against AD using these capabilities. Many independent software vendors (ISVs) have built and deployed applications at their customers' premises. These applications are hard to support because you often need access to the different environments where the applications are deployed. With Azure CSP subscriptions, you have a simpler alternative with the scale and flexibility of Azure.
-Microsoft Entra DS supports Azure CSP subscriptions. You can deploy your application in an Azure CSP subscription tied to your customer's Microsoft Entra tenant. As a result, your employees (support staff) can manage, administer, and service the VMs on which your application is deployed using your organization's corporate credentials.
+Domain Services supports Azure CSP subscriptions. You can deploy your application in an Azure CSP subscription tied to your customer's Microsoft Entra tenant. As a result, your employees (support staff) can manage, administer, and service the VMs on which your application is deployed using your organization's corporate credentials.
-You can also deploy a Microsoft Entra DS managed domain in your customer's Microsoft Entra tenant. Your application is then connected to your customer's managed domain. Capabilities within your application that rely on Kerberos / NTLM, LDAP, or the [System.DirectoryServices API](/dotnet/api/system.directoryservices) work seamlessly against your customer's domain. End customers benefit from consuming your application as a service, without needing to worry about maintaining the infrastructure the application is deployed on.
+You can also deploy a Domain Services managed domain in your customer's Microsoft Entra tenant. Your application is then connected to your customer's managed domain. Capabilities within your application that rely on Kerberos / NTLM, LDAP, or the [System.DirectoryServices API](/dotnet/api/system.directoryservices) work seamlessly against your customer's domain. End customers benefit from consuming your application as a service, without needing to worry about maintaining the infrastructure the application is deployed on.
-All billing for Azure resources you consume in that subscription, including Microsoft Entra DS, is charged back to you. You maintain full control over the relationship with the customer when it comes to sales, billing, technical support etc. With the flexibility of the Azure CSP platform, a small team of support agents can service many such customers who have instances of your application deployed.
+All billing for Azure resources you consume in that subscription, including Domain Services, is charged back to you. You maintain full control over the relationship with the customer when it comes to sales, billing, technical support etc. With the flexibility of the Azure CSP platform, a small team of support agents can service many such customers who have instances of your application deployed.
<a name='csp-deployment-models-for-azure-ad-ds'></a>
-## CSP deployment models for Microsoft Entra DS
+## CSP deployment models for Domain Services
-There are two ways in which you can use Microsoft Entra DS with an Azure CSP subscription. Pick the right one based on the security and simplicity considerations your customers have.
+There are two ways in which you can use Domain Services with an Azure CSP subscription. Pick the right one based on the security and simplicity considerations your customers have.
### Direct deployment model
-In this deployment model, Microsoft Entra DS is enabled within a virtual network that belongs to the Azure CSP subscription. The CSP partner's admin agents have the following privileges:
+In this deployment model, Domain Services is enabled within a virtual network that belongs to the Azure CSP subscription. The CSP partner's admin agents have the following privileges:
* *Global administrator* privileges in the customer's Microsoft Entra tenant.
* *Subscription owner* privileges on the Azure CSP subscription.
This deployment model may be suited for smaller organizations that don't have a
### Peered deployment model
-In this deployment model, Microsoft Entra DS is enabled within a virtual network belonging to the customer - a direct Azure subscription paid for by the customer. The CSP partner can deploy applications within a virtual network belonging to the customer's CSP subscription. The virtual networks can then be connected using Azure virtual network peering.
+In this deployment model, Domain Services is enabled within a virtual network belonging to the customer - a direct Azure subscription paid for by the customer. The CSP partner can deploy applications within a virtual network belonging to the customer's CSP subscription. The virtual networks can then be connected using Azure virtual network peering.
With this deployment, the workloads or applications deployed by the CSP partner in the Azure CSP subscription can connect to the customer's managed domain provisioned in the customer's direct Azure subscription.
This deployment model may be suited to scenarios where an ISV provides a hosted
<a name='administer-azure-ad-ds-in-csp-subscriptions'></a>
-## Administer Microsoft Entra DS in CSP subscriptions
+## Administer Domain Services in CSP subscriptions
The following important considerations apply when administering a managed domain in an Azure CSP subscription:
-* **CSP admin agents can provision a managed domain using their credentials:** Microsoft Entra DS supports Azure CSP subscriptions. Users belonging to a CSP partner's admin agents group can provision a new managed domain.
+* **CSP admin agents can provision a managed domain using their credentials:** Domain Services supports Azure CSP subscriptions. Users belonging to a CSP partner's admin agents group can provision a new managed domain.
-* **CSPs can script creation of new managed domains for their customers using PowerShell:** See [how to enable Microsoft Entra DS using PowerShell](powershell-create-instance.md) for details.
+* **CSPs can script creation of new managed domains for their customers using PowerShell:** See [how to enable Domain Services using PowerShell](powershell-create-instance.md) for details.
-* **CSP admin agents can't perform ongoing management tasks on the managed domain using their credentials:** CSP admin users can't perform routine management tasks within the managed domain using their credentials. These users are external to the customer's Microsoft Entra tenant and their credentials aren't available within the customer's Microsoft Entra tenant. Microsoft Entra DS doesn't have access to the Kerberos and NTLM password hashes for these users, so users can't be authenticated on managed domains.
+* **CSP admin agents can't perform ongoing management tasks on the managed domain using their credentials:** CSP admin users can't perform routine management tasks within the managed domain using their credentials. These users are external to the customer's Microsoft Entra tenant and their credentials aren't available within the customer's Microsoft Entra tenant. Domain Services doesn't have access to the Kerberos and NTLM password hashes for these users, so users can't be authenticated on managed domains.
> [!WARNING]
> You must create a user account within the customer's directory to perform ongoing administration tasks on the managed domain.
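The scripted-provisioning consideration above can be illustrated with a short Azure PowerShell sketch. This is a hypothetical example, not the supported script from the linked article: the tenant, resource group, region, domain name, and subnet ID are all placeholders, and the `Microsoft.AAD/DomainServices` property payload shown here is an assumption about the minimal shape of the resource:

```powershell
# Sign in to the customer's tenant with CSP admin agent credentials.
Connect-AzAccount -Tenant "customer.onmicrosoft.com"

# Provision a managed domain as a generic ARM resource (values are placeholders).
New-AzResource -ResourceGroupName "myResourceGroup" `
    -Location "westus" `
    -ResourceType "Microsoft.AAD/DomainServices" `
    -ResourceName "aaddscontoso.com" `
    -PropertyObject @{
        domainName  = "aaddscontoso.com"
        replicaSets = @(@{
            location = "westus"
            subnetId = "<subnet resource ID>"
        })
    } -Force
```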
active-directory-domain-services Delete Aadds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/delete-aadds.md
# Delete a Microsoft Entra Domain Services managed domain
-If you no longer need a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain, you can delete it. There's no option to turn off or temporarily disable a Microsoft Entra DS managed domain. Deleting the managed domain doesn't delete or otherwise adversely impact the Microsoft Entra tenant.
+If you no longer need a Microsoft Entra Domain Services managed domain, you can delete it. There's no way to turn off or temporarily disable a Domain Services managed domain. Deleting the managed domain doesn't delete or have any other impact on the Microsoft Entra tenant.
This article shows you how to use the Microsoft Entra admin center to delete a managed domain.
> **Deletion is permanent and can't be reversed.**
>
> When you delete a managed domain, the following steps occur:
-> * Domain controllers for the managed domain are de-provisioned and removed from the virtual network.
+> * Domain controllers for the managed domain are deprovisioned and removed from the virtual network.
> * Data on the managed domain is deleted permanently. This data includes custom OUs, GPOs, custom DNS records, service principals, GMSAs, etc. that you created.
> * Machines joined to the managed domain lose their trust relationship with the domain and need to be unjoined from the domain.
> * You can't sign in to these machines using corporate AD credentials. Instead, you must use the local administrator credentials for the machine.
It can take 15-20 minutes or more to delete the managed domain.
## Next steps
-Consider [sharing feedback][feedback] for the features that you would like to see in Microsoft Entra DS.
+Consider [sharing feedback][feedback] for the features that you would like to see in Domain Services.
-If you want to get started with Microsoft Entra DS again, see [Create and configure a Microsoft Entra Domain Services managed domain][create-instance].
+If you want to get started with Domain Services again, see [Create and configure a Microsoft Entra Domain Services managed domain][create-instance].
<!-- INTERNAL LINKS --> [feedback]: https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789?c=5d63b5b7-ae25-ec11-b6e6-000d3a4f0789
active-directory-domain-services Deploy Azure App Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-azure-app-proxy.md
# Deploy Microsoft Entra application proxy for secure access to internal applications in a Microsoft Entra Domain Services managed domain
-With Microsoft Entra Domain Services (Microsoft Entra DS), you can lift-and-shift legacy applications running on-premises into Azure. Microsoft Entra application proxy then helps you support remote workers by securely publishing those internal applications part of a Microsoft Entra DS managed domain so they can be accessed over the internet.
+With Microsoft Entra Domain Services, you can lift and shift legacy applications running on-premises into Azure. Microsoft Entra application proxy then helps you support remote workers by securely publishing internal applications that are part of a Domain Services managed domain, so they can be accessed over the internet.
If you're new to the Microsoft Entra application proxy and want to learn more, see [How to provide secure remote access to internal applications](../active-directory/app-proxy/application-proxy.md).
If you deploy multiple Microsoft Entra application proxy connectors, you must co
## Next steps
-With the Microsoft Entra application proxy integrated with Microsoft Entra DS, publish applications for users to access. For more information, see [publish applications using Microsoft Entra application proxy](../active-directory/app-proxy/application-proxy-add-on-premises-application.md).
+With the Microsoft Entra application proxy integrated with Domain Services, publish applications for users to access. For more information, see [publish applications using Microsoft Entra application proxy](../active-directory/app-proxy/application-proxy-add-on-premises-application.md).
<!-- INTERNAL LINKS --> [create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
active-directory-domain-services Deploy Kcd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-kcd.md
Previously updated : 01/29/2023 Last updated : 09/23/2023
As you run applications, there may be a need for those applications to access resources in the context of a different user. Active Directory Domain Services (AD DS) supports a mechanism called *Kerberos delegation* that enables this use-case. Kerberos *constrained* delegation (KCD) then builds on this mechanism to define specific resources that can be accessed in the context of the user.
-Microsoft Entra Domain Services (Microsoft Entra DS) managed domains are more securely locked down than traditional on-premises AD DS environments, so use a more secure *resource-based* KCD.
+Microsoft Entra Domain Services managed domains are more securely locked down than traditional on-premises AD DS environments, so they use a more secure *resource-based* KCD.
-This article shows you how to configure resource-based Kerberos constrained delegation in a Microsoft Entra DS managed domain.
+This article shows you how to configure resource-based Kerberos constrained delegation in a Domain Services managed domain.
## Prerequisites
To complete this article, you need the following resources:
* If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* A Microsoft Entra Domain Services managed domain enabled and configured in your Microsoft Entra tenant.
 * If needed, [create and configure a Microsoft Entra Domain Services managed domain][create-azure-ad-ds-instance].
-* A Windows Server management VM that is joined to the Microsoft Entra DS managed domain.
+* A Windows Server management VM that is joined to the Domain Services managed domain.
 * If needed, complete the tutorial to [create a Windows Server VM and join it to a managed domain][create-join-windows-vm] then [install the AD DS management tools][tutorial-create-management-vm].
* A user account that's a member of the *Microsoft Entra DC administrators* group in your Microsoft Entra tenant.
In this scenario, let's assume you have a web app that runs as a service account
1. Create the service account (for example, *appsvc*) used to run the web app within the custom OU.

   > [!NOTE]
- > Again, the computer account for the web API VM, and the service account for the web app, must be in a custom OU where you have permissions to configure resource-based KCD. You can't configure resource-based KCD for accounts in the built-in *Microsoft Entra DC Computers* or *Microsoft Entra DC Users* containers. This also means that you can't use user accounts synchronized from Microsoft Entra ID to set up resource-based KCD. You must create and use service accounts specifically created in Microsoft Entra DS.
+ > Again, the computer account for the web API VM, and the service account for the web app, must be in a custom OU where you have permissions to configure resource-based KCD. You can't configure resource-based KCD for accounts in the built-in *Microsoft Entra DC Computers* or *Microsoft Entra DC Users* containers. This also means that you can't use user accounts synchronized from Microsoft Entra ID to set up resource-based KCD. You must create and use service accounts specifically created in Domain Services.
1. Finally, configure resource-based KCD using the [Set-ADUser][Set-ADUser] PowerShell cmdlet.
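The final step above can be sketched as follows. This is a minimal example using the scenario's placeholder names (`contoso-webapp` for the web app VM's computer account, `appsvc` for the service account behind the web API); resource-based KCD is set on the account being delegated *to*, which is why `Set-ADUser` targets the service account:

```powershell
# Run from the management VM with the AD PowerShell cmdlets installed.
# Get the computer account that will impersonate users (example name).
$ImpersonatingAccount = Get-ADComputer -Identity "contoso-webapp"

# Allow that computer to delegate to the service account's resources.
Set-ADUser -Identity "appsvc" `
    -PrincipalsAllowedToDelegateToAccount $ImpersonatingAccount
```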
active-directory-domain-services Deploy Sp Profile Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-sp-profile-sync.md
Title: Enable SharePoint User Profile service with Microsoft Entra DS | Microsoft Docs
+ Title: Enable SharePoint User Profile service with Domain Services | Microsoft Docs
description: Learn how to configure a Microsoft Entra Domain Services managed domain to support profile synchronization for SharePoint Server
# Configure Microsoft Entra Domain Services to support user profile synchronization for SharePoint Server
-SharePoint Server includes a service to synchronize user profiles. This feature allows user profiles to be stored in a central location and accessible across multiple SharePoint sites and farms. To configure the SharePoint Server user profile service, the appropriate permissions must be granted in a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain. For more information, see [user profile synchronization in SharePoint Server](/SharePoint/administration/user-profile-service-administration).
+SharePoint Server includes a service to synchronize user profiles. This feature allows user profiles to be stored in a central location and accessible across multiple SharePoint sites and farms. To configure the SharePoint Server user profile service, the appropriate permissions must be granted in a Microsoft Entra Domain Services managed domain. For more information, see [user profile synchronization in SharePoint Server](/SharePoint/administration/user-profile-service-administration).
-This article shows you how to configure Microsoft Entra DS to allow the SharePoint Server user profile sync service.
+This article shows you how to configure Domain Services to allow the SharePoint Server user profile sync service.
## Before you begin
To complete this article, you need the following resources and privileges:
* If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* A Microsoft Entra Domain Services managed domain enabled and configured in your Microsoft Entra tenant.
 * If needed, complete the tutorial to [create and configure a Microsoft Entra Domain Services managed domain][create-azure-ad-ds-instance].
-* A Windows Server management VM that is joined to the Microsoft Entra DS managed domain.
+* A Windows Server management VM that is joined to the Domain Services managed domain.
 * If needed, complete the tutorial to [create a management VM][tutorial-create-management-vm].
* A user account that's a member of the *Microsoft Entra DC administrators* group in your Microsoft Entra tenant.
* The SharePoint service account name for the user profile synchronization service. For more information about the *Profile Synchronization account*, see [Plan for administrative and service accounts in SharePoint Server][sharepoint-service-account]. To get the *Profile Synchronization account* name from the SharePoint Central Administration website, click **Application Management** > **Manage service applications** > **User Profile service application**. For more information, see [Configure profile synchronization by using SharePoint Active Directory Import in SharePoint Server](/SharePoint/administration/configure-profile-synchronization-by-using-sharepoint-active-directory-import).
When added to this security group, the service account for SharePoint Server use
The service account for SharePoint Server needs adequate privileges to replicate changes to the directory and let SharePoint Server user profile sync work correctly. To provide these privileges, add the service account used for SharePoint user profile synchronization to the *Microsoft Entra DC Service Accounts* group.
-From your Microsoft Entra DS management VM, complete the following steps:
+From your Domain Services management VM, complete the following steps:
> [!NOTE]
> To edit group membership in a managed domain, you must be signed in to a user account that's a member of the *AAD DC Administrators* group.
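Adding the service account to the group can also be done in one line of PowerShell from the management VM. In this sketch, `spProfileSync` is a placeholder for your Profile Synchronization account, and the group is addressed by its directory name `AAD DC Service Accounts` (an assumption about how the *Microsoft Entra DC Service Accounts* group appears in the directory):

```powershell
# Run as a member of AAD DC Administrators.
# Grant the SharePoint sync account the replication privileges of the
# service accounts group (account name is an example).
Add-ADGroupMember -Identity "AAD DC Service Accounts" `
    -Members "spProfileSync"
```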
active-directory-domain-services Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/feature-availability.md
Title: Microsoft Entra Domain Services (Microsoft Entra DS) feature availability in Azure Government
-description: Learn which Microsoft Entra DS features are available in Azure Government.
+ Title: Microsoft Entra Domain Services feature availability in Azure Government
+description: Learn which Domain Services features are available in Azure Government.
-This following table lists Microsoft Entra Domain Services (Microsoft Entra DS) feature availability in Azure Government.
+The following table lists Microsoft Entra Domain Services feature availability in Azure Government.
| Feature | Availability |
active-directory-domain-services Fleet Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/fleet-metrics.md
Title: Check fleet metrics of Microsoft Entra Domain Services | Microsoft Docs
-description: Learn how to check fleet metrics of a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain.
+description: Learn how to check fleet metrics of a Microsoft Entra Domain Services managed domain.
Previously updated : 01/29/2023 Last updated : 09/23/2023

# Check fleet metrics of Microsoft Entra Domain Services
-Administrators can use Azure Monitor Metrics to configure a scope for Microsoft Entra Domain Services (Microsoft Entra DS) and gain insights into how the service is performing.
-You can access Microsoft Entra DS metrics from two places:
+Administrators can use Azure Monitor Metrics to configure a scope for Microsoft Entra Domain Services and gain insights into how the service is performing.
+You can access Domain Services metrics from two places:
-- In Azure Monitor Metrics, click **New chart** > **Select a scope** and select the Microsoft Entra DS instance:
+- In Azure Monitor Metrics, click **New chart** > **Select a scope** and select the Domain Services instance:
- :::image type="content" border="true" source="media/fleet-metrics/select.png" alt-text="Screenshot of how to select Microsoft Entra DS for fleet metrics.":::
+ :::image type="content" border="true" source="media/fleet-metrics/select.png" alt-text="Screenshot of how to select Domain Services for fleet metrics.":::
-- In Microsoft Entra DS, under **Monitoring**, click **Metrics**:
+- In Domain Services, under **Monitoring**, click **Metrics**:
- :::image type="content" border="true" source="media/fleet-metrics/metrics-scope.png" alt-text="Screenshot of how to select Microsoft Entra DS as scope in Azure Monitor Metrics.":::
+ :::image type="content" border="true" source="media/fleet-metrics/metrics-scope.png" alt-text="Screenshot of how to select Domain Services as scope in Azure Monitor Metrics.":::
The following screenshot shows how to select combined metrics for Total Processor Time and LDAP searches:

:::image type="content" border="true" source="media/fleet-metrics/combined-metrics.png" alt-text="Screenshot of combined metrics in Azure Monitor Metrics.":::
- You can also view metrics for a fleet of Microsoft Entra DS instances:
+ You can also view metrics for a fleet of Domain Services instances:
- :::image type="content" border="true" source="media/fleet-metrics/metrics-instance.png" alt-text="Screenshot of how to select a Microsoft Entra DS instance as the scope for fleet metrics.":::
+ :::image type="content" border="true" source="media/fleet-metrics/metrics-instance.png" alt-text="Screenshot of how to select a Domain Services instance as the scope for fleet metrics.":::
The following screenshot shows combined metrics for Total Processor Time, DNS Queries, and LDAP searches by role instance:
- :::image type="content" border="true" source="media/fleet-metrics/combined-metrics-instance.png" alt-text="Screenshot of combined metrics for a Microsoft Entra DS instance.":::
+ :::image type="content" border="true" source="media/fleet-metrics/combined-metrics-instance.png" alt-text="Screenshot of combined metrics for a Domain Services instance.":::
## Metrics definitions and descriptions
You can select a metric for more details about the data collection.
:::image type="content" border="true" source="media/fleet-metrics/descriptions.png" alt-text="Screenshot of fleet metric descriptions.":::
-The following table describes the metrics that are available for Microsoft Entra DS.
+The following table describes the metrics that are available for Domain Services.
| Metric | Description |
|--|-|
## Azure Monitor alert
-You can configure metric alerts for Microsoft Entra DS to be notified of possible problems. Metric alerts are one type of alert for Azure Monitor. For more information about other types of alerts, see [What are Azure Monitor Alerts?](../azure-monitor/alerts/alerts-overview.md).
+You can configure metric alerts for Domain Services to be notified of possible problems. Metric alerts are one type of alert for Azure Monitor. For more information about other types of alerts, see [What are Azure Monitor Alerts?](../azure-monitor/alerts/alerts-overview.md).
To view and manage Azure Monitor alerts, a user needs to be assigned [Azure Monitor roles](../azure-monitor/roles-permissions-security.md).
-In Azure Monitor or Microsoft Entra DS Metrics, click **New alert** and configure a Microsoft Entra DS instance as the scope. Then choose the metrics you want to measure from the list of available signals:
+In Azure Monitor or Domain Services Metrics, click **New alert** and configure a Domain Services instance as the scope. Then choose the metrics you want to measure from the list of available signals:
:::image type="content" border="true" source="media/fleet-metrics/available-alerts.png" alt-text="Screenshot of available alerts.":::
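For repeatable setups, the same kind of metric alert can also be sketched with the Azure CLI. The command below is illustrative only: the resource group, managed domain resource ID, metric name, threshold, and action group are placeholders for your environment.

```azurecli
# Requires an authenticated Azure CLI session (az login).
# Scope, metric name, and action group below are placeholders.
az monitor metrics alert create \
  --name "ds-high-cpu" \
  --resource-group myResourceGroup \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.AAD/domainServices/aaddscontoso.com" \
  --condition "avg 'Total Processor Time' > 70" \
  --description "Alert when average processor time on the managed domain exceeds 70 percent" \
  --action myActionGroup
```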
active-directory-domain-services How To Data Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/how-to-data-retrieval.md
Title: Instructions for data retrieval from Microsoft Entra Domain Services | Microsoft Docs
-description: Learn how to retrieve data from Microsoft Entra Domain Services (Microsoft Entra DS).
+description: Learn how to retrieve data from Microsoft Entra Domain Services.
-# Microsoft Entra DS instructions for data retrieval
+# Microsoft Entra Domain Services instructions for data retrieval
-This document describes how to retrieve data from Microsoft Entra Domain Services (Microsoft Entra DS).
+This document describes how to retrieve data from Microsoft Entra Domain Services.
[!INCLUDE [active-directory-app-provisioning.md](../../includes/gdpr-intro-sentence.md)]
active-directory-domain-services Join Centos Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-centos-linux-vm.md
Previously updated : 06/17/2021
Last updated : 09/23/2023

# Join a CentOS Linux virtual machine to a Microsoft Entra Domain Services managed domain
-To let users sign in to virtual machines (VMs) in Azure using a single set of credentials, you can join VMs to a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain. When you join a VM to a Microsoft Entra DS managed domain, user accounts and credentials from the domain can be used to sign in and manage servers. Group memberships from the managed domain are also applied to let you control access to files or services on the VM.
+To let users sign in to virtual machines (VMs) in Azure using a single set of credentials, you can join VMs to a Microsoft Entra Domain Services managed domain. When you join a VM to a Domain Services managed domain, user accounts and credentials from the domain can be used to sign in and manage servers. Group memberships from the managed domain are also applied to let you control access to files or services on the VM.
This article shows you how to join a CentOS Linux VM to a managed domain.
active-directory-domain-services Join Coreos Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-coreos-linux-vm.md
Previously updated : 07/13/2020
Last updated : 09/23/2023

# Join a CoreOS virtual machine to a Microsoft Entra Domain Services managed domain
-To let users sign in to virtual machines (VMs) in Azure using a single set of credentials, you can join VMs to a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain. When you join a VM to a Microsoft Entra DS managed domain, user accounts and credentials from the domain can be used to sign in and manage servers. Group memberships from the managed domain are also applied to let you control access to files or services on the VM.
+To let users sign in to virtual machines (VMs) in Azure using a single set of credentials, you can join VMs to a Microsoft Entra Domain Services managed domain. When you join a VM to a Domain Services managed domain, user accounts and credentials from the domain can be used to sign in and manage servers. Group memberships from the managed domain are also applied to let you control access to files or services on the VM.
This article shows you how to join a CoreOS VM to a managed domain.
active-directory-domain-services Join Rhel Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-rhel-linux-vm.md
Previously updated : 07/13/2020
Last updated : 09/23/2023

# Join a Red Hat Enterprise Linux virtual machine to a Microsoft Entra Domain Services managed domain
-To let users sign in to virtual machines (VMs) in Azure using a single set of credentials, you can join VMs to a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain. When you join a VM to a Microsoft Entra DS managed domain, user accounts and credentials from the domain can be used to sign in and manage servers. Group memberships from the managed domain are also applied to let you control access to files or services on the VM.
+To let users sign in to virtual machines (VMs) in Azure using a single set of credentials, you can join VMs to a Microsoft Entra Domain Services managed domain. When you join a VM to a Domain Services managed domain, user accounts and credentials from the domain can be used to sign in and manage servers. Group memberships from the managed domain are also applied to let you control access to files or services on the VM.
This article shows you how to join a Red Hat Enterprise Linux (RHEL) VM to a managed domain.
active-directory-domain-services Join Suse Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-suse-linux-vm.md
Previously updated : 01/29/2023
Last updated : 09/23/2023

# Join a SUSE Linux Enterprise virtual machine to a Microsoft Entra Domain Services managed domain
-To let users sign in to virtual machines (VMs) in Azure using a single set of credentials, you can join VMs to a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain. When you join a VM to a Microsoft Entra DS managed domain, user accounts and credentials from the domain can be used to sign in and manage servers. Group memberships from the managed domain are also applied to let you control access to files or services on the VM.
+To let users sign in to virtual machines (VMs) in Azure using a single set of credentials, you can join VMs to a Microsoft Entra Domain Services managed domain. When you join a VM to a Domain Services managed domain, user accounts and credentials from the domain can be used to sign in and manage servers. Group memberships from the managed domain are also applied to let you control access to files or services on the VM.
This article shows you how to join a SUSE Linux Enterprise (SLE) VM to a managed domain.
active-directory-domain-services Join Ubuntu Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-ubuntu-linux-vm.md
Previously updated : 01/29/2023
Last updated : 09/23/2023

# Join an Ubuntu Linux virtual machine to a Microsoft Entra Domain Services managed domain
-To let users sign in to virtual machines (VMs) in Azure using a single set of credentials, you can join VMs to a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain. When you join a VM to a Microsoft Entra DS managed domain, user accounts and credentials from the domain can be used to sign in and manage servers. Group memberships from the managed domain are also applied to let you control access to files or services on the VM.
+To let users sign in to virtual machines (VMs) in Azure using a single set of credentials, you can join VMs to a Microsoft Entra Domain Services managed domain. When you join a VM to a Domain Services managed domain, user accounts and credentials from the domain can be used to sign in and manage servers. Group memberships from the managed domain are also applied to let you control access to files or services on the VM.
This article shows you how to join an Ubuntu Linux VM to a managed domain.
rdns=false
## Update the SSSD configuration
-One of the packages installed in a previous step was for System Security Services Daemon (SSSD). When a user tries to sign in to a VM using domain credentials, SSSD relays the request to an authentication provider. In this scenario, SSSD uses Microsoft Entra DS to authenticate the request.
+One of the packages installed in a previous step was for System Security Services Daemon (SSSD). When a user tries to sign in to a VM using domain credentials, SSSD relays the request to an authentication provider. In this scenario, SSSD uses Domain Services to authenticate the request.
1. Open the *sssd.conf* file with an editor:
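Inside *sssd.conf*, a configuration similar to the following sketch is typically present for a managed domain; the *aaddscontoso.com* domain name and the individual option values here are illustrative and will vary with your environment:

```ini
[sssd]
domains = aaddscontoso.com
config_file_version = 2
services = nss, pam

[domain/aaddscontoso.com]
; illustrative values for a managed-domain-joined VM
default_shell = /bin/bash
ad_domain = aaddscontoso.com
krb5_realm = AADDSCONTOSO.COM
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
ldap_id_mapping = True
use_fully_qualified_names = True
fallback_homedir = /home/%u@%d
access_provider = ad
```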
active-directory-domain-services Join Windows Vm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-windows-vm-template.md
- Title: Use a template to join a Windows VM to Microsoft Entra DS | Microsoft Docs
+ Title: Use a template to join a Windows VM to Microsoft Entra Domain Services | Microsoft Docs
description: Learn how to use Azure Resource Manager templates to join a new or existing Windows Server VM to a Microsoft Entra Domain Services managed domain.
Previously updated : 08/01/2023
Last updated : 09/23/2023

# Join a Windows Server virtual machine to a Microsoft Entra Domain Services managed domain using a Resource Manager template
-To automate the deployment and configuration of Azure virtual machines (VMs), you can use a Resource Manager template. These templates let you create consistent deployments each time. Extensions can also be included in templates to automatically configure a VM as part of the deployment. One useful extension joins VMs to a domain, which can be used with Microsoft Entra Domain Services (Microsoft Entra DS) managed domains.
+To automate the deployment and configuration of Azure virtual machines (VMs), you can use a Resource Manager template. These templates let you create consistent deployments each time. Extensions can also be included in templates to automatically configure a VM as part of the deployment. One useful extension joins VMs to a domain, which can be used with Microsoft Entra Domain Services managed domains.
-This article shows you how to create and join a Windows Server VM to a Microsoft Entra DS managed domain using Resource Manager templates. You also learn how to join an existing Windows Server VM to a Microsoft Entra DS domain.
+This article shows you how to create and join a Windows Server VM to a Domain Services managed domain using Resource Manager templates. You also learn how to join an existing Windows Server VM to a Domain Services domain.
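As a sketch of what such a template resource can look like, the following fragment uses the *JsonADDomainExtension* VM extension to join an existing VM to a domain. The parameter names and API version are illustrative, not taken from a specific quickstart template; `"options": "3"` requests a domain join that also creates the computer account.

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'), '/joindomain')]",
  "apiVersion": "2022-11-01",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "JsonADDomainExtension",
    "typeHandlerVersion": "1.3",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "name": "[parameters('domainToJoin')]",
      "user": "[concat(parameters('domainUsername'), '@', parameters('domainToJoin'))]",
      "restart": "true",
      "options": "3"
    },
    "protectedSettings": {
      "password": "[parameters('domainPassword')]"
    }
  }
}
```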
## Prerequisites
active-directory-domain-services Join Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-windows-vm.md
# Tutorial: Join a Windows Server virtual machine to a Microsoft Entra Domain Services managed domain
-Microsoft Entra Domain Services (Microsoft Entra DS) provides managed domain services such as domain join, group policy, LDAP, Kerberos/NTLM authentication that is fully compatible with Windows Server Active Directory. With a Microsoft Entra DS managed domain, you can provide domain join features and management to virtual machines (VMs) in Azure. This tutorial shows you how to create a Windows Server VM then join it to a managed domain.
+Microsoft Entra Domain Services provides managed domain services, such as domain join, group policy, LDAP, and Kerberos/NTLM authentication, that are fully compatible with Windows Server Active Directory. With a Domain Services managed domain, you can provide domain join features and management to virtual machines (VMs) in Azure. This tutorial shows you how to create a Windows Server VM and then join it to a managed domain.
In this tutorial, you learn how to:
To complete this tutorial, you need the following resources:
* If needed, [create and configure a Microsoft Entra Domain Services managed domain][create-azure-ad-ds-instance].
* A user account that's a part of the managed domain.
  * Make sure that Microsoft Entra Connect password hash synchronization or self-service password reset has been performed so the account is able to sign in to the managed domain.
-* An Azure Bastion host deployed in your Microsoft Entra DS virtual network.
+* An Azure Bastion host deployed in your Domain Services virtual network.
* If needed, [create an Azure Bastion host][azure-bastion].

If you already have a VM that you want to domain-join, skip to the section to [join the VM to the managed domain](#join-the-vm-to-the-managed-domain).
In the next tutorial, you use this Windows Server VM to install the management t
To remove the VM from the managed domain, follow through the steps again to [join the VM to a domain](#join-the-vm-to-the-managed-domain). Instead of joining the managed domain, choose to join a workgroup, such as the default *WORKGROUP*. After the VM has rebooted, the computer object is removed from the managed domain.
-If you [delete the VM](#delete-the-vm) without unjoining from the domain, an orphaned computer object is left in Microsoft Entra DS.
+If you [delete the VM](#delete-the-vm) without unjoining from the domain, an orphaned computer object is left in Domain Services.
### Delete the VM
If you don't receive a prompt that asks for credentials to join the domain, ther
After trying each of these troubleshooting steps, try to join the Windows Server VM to the managed domain again.
-* Verify the VM is connected to the same virtual network that Microsoft Entra DS is enabled in, or has a peered network connection.
+* Verify the VM is connected to the same virtual network that Domain Services is enabled in, or has a peered network connection.
* Try to ping the DNS domain name of the managed domain, such as `ping aaddscontoso.com`.
  * If the ping request fails, try to ping the IP addresses for the managed domain, such as `ping 10.0.0.4`. The IP address for your environment is displayed on the *Properties* page when you select the managed domain from your list of Azure resources.
  * If you can ping the IP address but not the domain, DNS may be incorrectly configured. Confirm that the IP addresses of the managed domain are configured as DNS servers for the virtual network.
After trying each of these troubleshooting steps, try to join the Windows Server
* Confirm that the account is part of the managed domain or Microsoft Entra tenant. Accounts from external directories associated with your Microsoft Entra tenant can't correctly authenticate during the domain-join process.
* Try using the UPN format to specify credentials, such as `contosoadmin@aaddscontoso.onmicrosoft.com`. If there are many users with the same UPN prefix in your tenant or if your UPN prefix is overly long, the *SAMAccountName* for your account may be autogenerated. In these cases, the *SAMAccountName* format for your account may be different from what you expect or use in your on-premises domain.
* Check that you have [enabled password synchronization][password-sync] to your managed domain. Without this configuration step, the required password hashes won't be present in the managed domain to correctly authenticate your sign-in attempt.
-* Wait for password synchronization to be completed. When a user account's password is changed, an automatic background synchronization from Microsoft Entra ID updates the password in Microsoft Entra DS. It takes some time for the password to be available for domain-join use.
+* Wait for password synchronization to be completed. When a user account's password is changed, an automatic background synchronization from Microsoft Entra ID updates the password in Domain Services. It takes some time for the password to be available for domain-join use.
## Next steps
active-directory-domain-services Manage Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/manage-dns.md
# Administer DNS and create conditional forwarders in a Microsoft Entra Domain Services managed domain
-Microsoft Entra DS includes a Domain Name System (DNS) server that provides name resolution for the managed domain. This DNS server includes built-in DNS records and updates for the key components that allow the service to run.
+Microsoft Entra Domain Services includes a Domain Name System (DNS) server that provides name resolution for the managed domain. This DNS server includes built-in DNS records and updates for the key components that allow the service to run.
-As you run your own applications and services, you may need to create DNS records for machines that aren't joined to the domain, configure virtual IP addresses for load balancers, or set up external DNS forwarders. Users who belong to the *AAD DC Administrators* group are granted DNS administration privileges on the Microsoft Entra DS managed domain and can create and edit custom DNS records.
+As you run your own applications and services, you may need to create DNS records for machines that aren't joined to the domain, configure virtual IP addresses for load balancers, or set up external DNS forwarders. Users who belong to the *AAD DC Administrators* group are granted DNS administration privileges on the Domain Services managed domain and can create and edit custom DNS records.
In a hybrid environment, DNS zones and records configured in other DNS namespaces, such as an on-premises AD DS environment, aren't synchronized to the managed domain. To resolve named resources in other DNS namespaces, create and use conditional forwarders that point to existing DNS servers in your environment.
-This article shows you how to install the DNS Server tools then use the DNS console to manage records and create conditional forwarders in Microsoft Entra DS.
+This article shows you how to install the DNS Server tools then use the DNS console to manage records and create conditional forwarders in Domain Services.
>[!NOTE]
->Creating or changing root hints or server-level DNS forwarders is not supported and will cause issues for the Microsoft Entra DS managed domain.
+>Creating or changing root hints or server-level DNS forwarders is not supported and will cause issues for the Domain Services managed domain.
## Before you begin
To complete this article, you need the following resources and privileges:
* If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* A Microsoft Entra Domain Services managed domain enabled and configured in your Microsoft Entra tenant.
  * If needed, complete the tutorial to [create and configure a Microsoft Entra Domain Services managed domain][create-azure-ad-ds-instance].
-* Connectivity from your Microsoft Entra DS virtual network to where your other DNS namespaces are hosted.
+* Connectivity from your Domain Services virtual network to where your other DNS namespaces are hosted.
  * This connectivity can be provided with an [Azure ExpressRoute][expressroute] or [Azure VPN Gateway][vpn-gateway] connection.
* A Windows Server management VM that is joined to the managed domain.
  * If needed, complete the tutorial to [create a Windows Server VM and join it to a managed domain][create-join-windows-vm].
With the DNS Server tools installed, you can administer DNS records on the manag
![DNS Console - administer domain](./media/manage-dns/dns-manager.png)

> [!WARNING]
-> When you manage records using the DNS Server tools, make sure that you don't delete or modify the built-in DNS records that are used by Microsoft Entra DS. Built-in DNS records include domain DNS records, name server records, and other records used for DC location. If you modify these records, domain services are disrupted on the virtual network.
+> When you manage records using the DNS Server tools, make sure that you don't delete or modify the built-in DNS records that are used by Domain Services. Built-in DNS records include domain DNS records, name server records, and other records used for DC location. If you modify these records, domain services are disrupted on the virtual network.
## Create conditional forwarders
-A Microsoft Entra DS DNS zone should only contain the zone and records for the managed domain itself. Don't create additional zones in the managed domain to resolve named resources in other DNS namespaces. Instead, use conditional forwarders in the managed domain to tell the DNS server where to go in order to resolve addresses for those resources.
+A Domain Services DNS zone should only contain the zone and records for the managed domain itself. Don't create additional zones in the managed domain to resolve named resources in other DNS namespaces. Instead, use conditional forwarders in the managed domain to tell the DNS server where to go in order to resolve addresses for those resources.
A conditional forwarder is a configuration option in a DNS server that lets you define a DNS domain, such as *contoso.com*, to forward queries to. Instead of the local DNS server trying to resolve queries for records in that domain, DNS queries are forwarded to the configured DNS server for that domain. This configuration makes sure that the correct DNS records are returned, as you don't create a local DNS zone with duplicate records in the managed domain to reflect those resources.
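From a management VM with the DNS Server tools installed, a conditional forwarder can also be created with the DnsServer PowerShell module. This is a sketch: the managed domain name *aaddscontoso.com*, the forwarded zone *contoso.com*, and the master server IP addresses are placeholders for your environment.

```powershell
# Create a conditional forwarder on the managed domain's DNS server.
# Queries for contoso.com are forwarded to the listed on-premises DNS servers.
Add-DnsServerConditionalForwarderZone `
    -ComputerName aaddscontoso.com `
    -Name "contoso.com" `
    -MasterServers 10.1.0.4,10.1.0.5 `
    -ReplicationScope "Forest"
```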
active-directory-domain-services Manage Group Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/manage-group-policy.md
# Administer Group Policy in a Microsoft Entra Domain Services managed domain
-Settings for user and computer objects in Microsoft Entra Domain Services (Microsoft Entra DS) are often managed using Group Policy Objects (GPOs). Microsoft Entra DS includes built-in GPOs for the *AADDC Users* and *AADDC Computers* containers. You can customize these built-in GPOs to configure Group Policy as needed for your environment. Members of the *Microsoft Entra DC administrators* group have Group Policy administration privileges in the Microsoft Entra DS domain, and can also create custom GPOs and organizational units (OUs). For more information on what Group Policy is and how it works, see [Group Policy overview][group-policy-overview].
+Settings for user and computer objects in Microsoft Entra Domain Services are often managed using Group Policy Objects (GPOs). Domain Services includes built-in GPOs for the *AADDC Users* and *AADDC Computers* containers. You can customize these built-in GPOs to configure Group Policy as needed for your environment. Members of the *Microsoft Entra DC administrators* group have Group Policy administration privileges in the Domain Services domain, and can also create custom GPOs and organizational units (OUs). For more information on what Group Policy is and how it works, see [Group Policy overview][group-policy-overview].
-In a hybrid environment, group policies configured in an on-premises AD DS environment aren't synchronized to Microsoft Entra DS. To define configuration settings for users or computers in Microsoft Entra DS, edit one of the default GPOs or create a custom GPO.
+In a hybrid environment, group policies configured in an on-premises AD DS environment aren't synchronized to Domain Services. To define configuration settings for users or computers in Domain Services, edit one of the default GPOs or create a custom GPO.
This article shows you how to install the Group Policy Management tools, then edit the built-in GPOs and create custom GPOs.
To complete this article, you need the following resources and privileges:
* If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* A Microsoft Entra Domain Services managed domain enabled and configured in your Microsoft Entra tenant.
  * If needed, complete the tutorial to [create and configure a Microsoft Entra Domain Services managed domain][create-azure-ad-ds-instance].
-* A Windows Server management VM that is joined to the Microsoft Entra DS managed domain.
+* A Windows Server management VM that is joined to the Domain Services managed domain.
  * If needed, complete the tutorial to [create a Windows Server VM and join it to a managed domain][create-join-windows-vm].
* A user account that's a member of the *Microsoft Entra DC administrators* group in your Microsoft Entra tenant.
There are two built-in Group Policy Objects (GPOs) in a managed domain - one for
## Create a custom Group Policy Object
-To group similar policy settings, you often create additional GPOs instead of applying all of the required settings in the single, default GPO. With Microsoft Entra DS, you can create or import your own custom group policy objects and link them to a custom OU. If you need to first create a custom OU, see [create a custom OU in a managed domain](create-ou.md).
+To group similar policy settings, you often create additional GPOs instead of applying all of the required settings in the single, default GPO. With Domain Services, you can create or import your own custom group policy objects and link them to a custom OU. If you need to first create a custom OU, see [create a custom OU in a managed domain](create-ou.md).
1. In the **Group Policy Management** console, select your custom organizational unit (OU), such as *MyCustomOU*. Right-click the OU and choose **Create a GPO in this domain, and Link it here...**:
active-directory-domain-services Mismatched Tenant Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/mismatched-tenant-error.md
Previously updated : 01/29/2023
Last updated : 09/23/2023

# Resolve mismatched directory errors for existing Microsoft Entra Domain Services managed domains
-If a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain shows a mismatched tenant error, you can't administer the managed domain until resolved. This error occurs if the underlying Azure virtual network is moved to a different Microsoft Entra directory.
+If a Microsoft Entra Domain Services managed domain shows a mismatched tenant error, you can't administer the managed domain until the error is resolved. This error occurs if the underlying Azure virtual network is moved to a different Microsoft Entra directory.
This article explains why the error occurs and how to resolve it.

## What causes this error?
-A mismatched directory error happens when a Microsoft Entra DS managed domain and virtual network belong to two different Microsoft Entra tenants. For example, you may have a managed domain called *aaddscontoso.com* that runs in Contoso's Microsoft Entra tenant. However, the Azure virtual network for managed domain is part of the Fabrikam Microsoft Entra tenant.
+A mismatched directory error happens when a Domain Services managed domain and virtual network belong to two different Microsoft Entra tenants. For example, you may have a managed domain called *aaddscontoso.com* that runs in Contoso's Microsoft Entra tenant. However, the Azure virtual network for the managed domain is part of the Fabrikam Microsoft Entra tenant.
-Azure role-based access control (Azure RBAC) is used to limit access to resources. When you enable Microsoft Entra DS in a Microsoft Entra tenant, credential hashes are synchronized to the managed domain. This operation requires you to be a tenant admin for the Microsoft Entra directory, and access to the credentials must be controlled.
+Azure role-based access control (Azure RBAC) is used to limit access to resources. When you enable Domain Services in a Microsoft Entra tenant, credential hashes are synchronized to the managed domain. This operation requires you to be a tenant admin for the Microsoft Entra directory, and access to the credentials must be controlled.
To deploy resources to an Azure virtual network and control traffic, you must have administrative privileges on the virtual network in which you deploy the managed domain.
-For Azure RBAC to work consistently and secure access to all the resources Microsoft Entra DS uses, the managed domain and the virtual network must belong to the same Microsoft Entra tenant.
+For Azure RBAC to work consistently and secure access to all the resources Domain Services uses, the managed domain and the virtual network must belong to the same Microsoft Entra tenant.
The following rules apply for deployments:
In the following example deployment scenario, the Contoso managed domain is enab
Both the managed domain and the virtual network belong to the same Microsoft Entra tenant. This example configuration is valid and fully supported.
-![Valid Microsoft Entra DS tenant configuration with the managed domain and virtual network part of the same Microsoft Entra tenant](./media/getting-started/valid-tenant-config.png)
+![Valid Domain Services tenant configuration with the managed domain and virtual network part of the same Microsoft Entra tenant](./media/getting-started/valid-tenant-config.png)
### Mismatched tenant configuration
The following two options resolve the mismatched directory error:
## Next steps
-For more information on troubleshooting issues with Microsoft Entra DS, see the [troubleshooting guide](troubleshoot.md).
+For more information on troubleshooting issues with Domain Services, see the [troubleshooting guide](troubleshoot.md).
active-directory-domain-services Network Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/network-considerations.md
# Virtual network design considerations and configuration options for Microsoft Entra Domain Services
-Microsoft Entra Domain Services (Microsoft Entra DS) provides authentication and management services to other applications and workloads. Network connectivity is a key component. Without correctly configured virtual network resources, applications and workloads can't communicate with and use the features provided by Microsoft Entra DS. Plan your virtual network requirements to make sure that Microsoft Entra DS can serve your applications and workloads as needed.
+Microsoft Entra Domain Services provides authentication and management services to other applications and workloads. Network connectivity is a key component. Without correctly configured virtual network resources, applications and workloads can't communicate with and use the features provided by Domain Services. Plan your virtual network requirements to make sure that Domain Services can serve your applications and workloads as needed.
-This article outlines design considerations and requirements for an Azure virtual network to support Microsoft Entra DS.
+This article outlines design considerations and requirements for an Azure virtual network to support Domain Services.
## Azure virtual network design
-To provide network connectivity and allow applications and services to authenticate against a Microsoft Entra DS managed domain, you use an Azure virtual network and subnet. Ideally, the managed domain should be deployed into its own virtual network.
+To provide network connectivity and allow applications and services to authenticate against a Domain Services managed domain, you use an Azure virtual network and subnet. Ideally, the managed domain should be deployed into its own virtual network.
-You can include a separate application subnet in the same virtual network to host your management VM or light application workloads. A separate virtual network for larger or complex application workloads, peered to the Microsoft Entra DS virtual network, is usually the most appropriate design.
+You can include a separate application subnet in the same virtual network to host your management VM or light application workloads. A separate virtual network for larger or complex application workloads, peered to the Domain Services virtual network, is usually the most appropriate design.
Other design choices are valid, provided you meet the requirements outlined in the following sections for the virtual network and subnet.
-As you design the virtual network for Microsoft Entra DS, the following considerations apply:
+As you design the virtual network for Domain Services, the following considerations apply:
-* Microsoft Entra DS must be deployed into the same Azure region as your virtual network.
- * At this time, you can only deploy one managed domain per Microsoft Entra tenant. The managed domain is deployed to single region. Make sure that you create or select a virtual network in a [region that supports Microsoft Entra DS](https://azure.microsoft.com/global-infrastructure/services/?products=active-directory-ds&regions=all).
+* Domain Services must be deployed into the same Azure region as your virtual network.
+ * At this time, you can only deploy one managed domain per Microsoft Entra tenant. The managed domain is deployed to a single region. Make sure that you create or select a virtual network in a [region that supports Domain Services](https://azure.microsoft.com/global-infrastructure/services/?products=active-directory-ds&regions=all).
* Consider the proximity of other Azure regions and the virtual networks that host your application workloads.
  * To minimize latency, keep your core applications close to, or in the same region as, the virtual network subnet for your managed domain. You can use virtual network peering or virtual private network (VPN) connections between Azure virtual networks. These connection options are discussed in a following section.
* The virtual network can't rely on DNS services other than those services provided by the managed domain.
- * Microsoft Entra DS provides its own DNS service. The virtual network must be configured to use these DNS service addresses. Name resolution for additional namespaces can be accomplished using conditional forwarders.
+ * Domain Services provides its own DNS service. The virtual network must be configured to use these DNS service addresses. Name resolution for additional namespaces can be accomplished using conditional forwarders.
* You can't use custom DNS server settings to direct queries from other DNS servers, including on VMs. Resources in the virtual network must use the DNS service provided by the managed domain.

> [!IMPORTANT]
-> You can't move Microsoft Entra DS to a different virtual network after you've enabled the service.
+> You can't move Domain Services to a different virtual network after you've enabled the service.
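To satisfy the DNS requirement above, you can point the virtual network at the managed domain's DNS servers with Azure PowerShell. The sketch below uses hypothetical virtual network, resource group, and IP address values — substitute the DNS server addresses reported on your managed domain's properties page:

```powershell
# Hypothetical names and IP addresses for illustration; use the DNS server
# addresses shown for your own managed domain.
$vnet = Get-AzVirtualNetwork -Name "aadds-vnet" -ResourceGroupName "aadds-rg"

# Point the virtual network at the managed domain's DNS servers.
$vnet.DhcpOptions.DnsServers = @("10.0.0.4", "10.0.0.5")

# Apply the updated DNS configuration.
$vnet | Set-AzVirtualNetwork
```

VMs in the virtual network pick up the new DNS settings on their next DHCP lease renewal, so a restart of existing VMs may be needed.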
-A managed domain connects to a subnet in an Azure virtual network. Design this subnet for Microsoft Entra DS with the following considerations:
+A managed domain connects to a subnet in an Azure virtual network. Design this subnet for Domain Services with the following considerations:
* A managed domain must be deployed in its own subnet. Using an existing subnet, gateway subnet, or remote gateways settings in the virtual network peering is unsupported.
* A network security group is created during the deployment of a managed domain. This network security group contains the required rules for correct service communication.
The following example diagram outlines a valid design where the managed domain h
<a name='connections-to-the-azure-ad-ds-virtual-network'></a>
-## Connections to the Microsoft Entra DS virtual network
+## Connections to the Domain Services virtual network
As noted in the previous section, you can only create a managed domain in a single virtual network in Azure, and only one managed domain can be created per Microsoft Entra tenant. Based on this architecture, you may need to connect one or more virtual networks that host your application workloads to your managed domain's virtual network.
You can enable name resolution using conditional DNS forwarders on the DNS serve
<a name='network-resources-used-by-azure-ad-ds'></a>
-## Network resources used by Microsoft Entra DS
+## Network resources used by Domain Services
A managed domain creates some networking resources during deployment. These resources are needed for successful operation and management of the managed domain, and shouldn't be manually configured.
-Don't lock the networking resources used by Microsoft Entra DS. If networking resources get locked, they can't be deleted. When domain controllers need to be rebuilt in that case, new networking resources with different IP addresses need to be created.
+Don't lock the networking resources used by Domain Services. If networking resources are locked, they can't be deleted. If domain controllers then need to be rebuilt, new networking resources with different IP addresses have to be created.
| Azure resource | Description |
|:-|:-|
-| Network interface card | Microsoft Entra DS hosts the managed domain on two domain controllers (DCs) that run on Windows Server as Azure VMs. Each VM has a virtual network interface that connects to your virtual network subnet. |
-| Dynamic standard public IP address | Microsoft Entra DS communicates with the synchronization and management service using a Standard SKU public IP address. For more information about public IP addresses, see [IP address types and allocation methods in Azure](../virtual-network/ip-services/public-ip-addresses.md). |
-| Azure standard load balancer | Microsoft Entra DS uses a Standard SKU load balancer for network address translation (NAT) and load balancing (when used with secure LDAP). For more information about Azure load balancers, see [What is Azure Load Balancer?](../load-balancer/load-balancer-overview.md) |
-| Network address translation (NAT) rules | Microsoft Entra DS creates and uses two Inbound NAT rules on the load balancer for secure PowerShell remoting. If a Standard SKU load balancer is used, it will have an Outbound NAT Rule too. For the Basic SKU load balancer, no Outbound NAT rule is required. |
+| Network interface card | Domain Services hosts the managed domain on two domain controllers (DCs) that run on Windows Server as Azure VMs. Each VM has a virtual network interface that connects to your virtual network subnet. |
+| Dynamic standard public IP address | Domain Services communicates with the synchronization and management service using a Standard SKU public IP address. For more information about public IP addresses, see [IP address types and allocation methods in Azure](../virtual-network/ip-services/public-ip-addresses.md). |
+| Azure standard load balancer | Domain Services uses a Standard SKU load balancer for network address translation (NAT) and load balancing (when used with secure LDAP). For more information about Azure load balancers, see [What is Azure Load Balancer?](../load-balancer/load-balancer-overview.md) |
+| Network address translation (NAT) rules | Domain Services creates and uses two Inbound NAT rules on the load balancer for secure PowerShell remoting. If a Standard SKU load balancer is used, it will have an Outbound NAT Rule too. For the Basic SKU load balancer, no Outbound NAT rule is required. |
| Load balancer rules | When a managed domain is configured for secure LDAP on TCP port 636, three rules are created and used on a load balancer to distribute the traffic. |

> [!WARNING]
-> Don't delete or modify any of the network resource created by Microsoft Entra DS, such as manually configuring the load balancer or rules. If you delete or modify any of the network resources, a Microsoft Entra DS service outage may occur.
+> Don't delete or modify any of the network resources created by Domain Services, such as manually configuring the load balancer or rules. If you delete or modify any of the network resources, a Domain Services service outage may occur.
## Network security groups and required ports
The following network security group Inbound rules are required for the managed
Note that the **CorpNetSaw** service tag isn't available in the Microsoft Entra admin center, so the network security group rule for **CorpNetSaw** has to be added by using [PowerShell](powershell-create-instance.md#create-a-network-security-group).
-Microsoft Entra DS also relies on the Default Security rules AllowVnetInBound and AllowAzureLoadBalancerInBound.
+Domain Services also relies on the Default Security rules AllowVnetInBound and AllowAzureLoadBalancerInBound.
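The **CorpNetSaw** rule mentioned above can be added with Azure PowerShell along these lines. This is a sketch: the network security group name, resource group, rule name, and priority are illustrative assumptions — confirm the exact rule definition in the linked PowerShell article:

```powershell
# Hypothetical NSG name, resource group, rule name, and priority.
$nsg = Get-AzNetworkSecurityGroup -Name "aadds-nsg" -ResourceGroupName "aadds-rg"

# Allow remote desktop access only from the CorpNetSaw service tag.
$nsg | Add-AzNetworkSecurityRuleConfig `
    -Name "AllowRD" `
    -Access Allow `
    -Protocol Tcp `
    -Direction Inbound `
    -Priority 201 `
    -SourceAddressPrefix "CorpNetSaw" `
    -SourcePortRange "*" `
    -DestinationAddressPrefix "*" `
    -DestinationPortRange "3389"

# Save the updated rule set back to the network security group.
$nsg | Set-AzNetworkSecurityGroup
```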
:::image type="content" border="true" source="./media/network-considerations/nsg.png" alt-text="Screenshot of network security group rules.":::

The AllowVnetInBound rule allows all traffic within the VNet, which lets the DCs properly communicate and replicate, and allows domain join and other domain services for domain members. For more information about required ports for Windows, see [Service overview and network port requirements for Windows](/troubleshoot/windows-server/networking/service-overview-and-network-port-requirements).
-The AllowAzureLoadBalancerInBound rule is also required so that the service can properly communicate over the loadbalancer to manage the DCs. This network security group secures Microsoft Entra DS and is required for the managed domain to work correctly. Don't delete this network security group. The load balancer won't work correctly without it.
+The AllowAzureLoadBalancerInBound rule is also required so that the service can properly communicate over the load balancer to manage the DCs. This network security group secures Domain Services and is required for the managed domain to work correctly. Don't delete this network security group. The load balancer won't work correctly without it.
If needed, you can [create the required network security group and rules using Azure PowerShell](powershell-create-instance.md#create-a-network-security-group).
Get-AzNetworkSecurityGroup -Name "nsg-name" -ResourceGroupName "resource-group-n
## User-defined routes
-User-defined routes aren't created by default, and aren't needed for Microsoft Entra DS to work correctly. If you're required to use route tables, avoid making any changes to the *0.0.0.0* route. Changes to this route disrupt Microsoft Entra DS and puts the managed domain in an unsupported state.
+User-defined routes aren't created by default, and aren't needed for Domain Services to work correctly. If you're required to use route tables, avoid making any changes to the *0.0.0.0* route. Changes to this route disrupt Domain Services and put the managed domain in an unsupported state.
You must also route inbound traffic from the IP addresses included in the respective Azure service tags to the managed domain's subnet. For more information on service tags and their associated IP address ranges, see [Azure IP Ranges and Service Tags - Public Cloud](https://www.microsoft.com/en-us/download/details.aspx?id=56519).
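If a route table is required, one hedged pattern is to add a specific route for the service's management traffic using the *AzureActiveDirectoryDomainServices* service tag while leaving the default *0.0.0.0* route untouched. The route table name, rule name, resource group, and region below are assumptions for illustration:

```powershell
# Hypothetical names; the AzureActiveDirectoryDomainServices service tag is used
# as the address prefix so management traffic reaches the managed domain's
# subnet directly. Don't override the default 0.0.0.0 route.
$route = New-AzRouteConfig -Name "AllowAadds" `
    -AddressPrefix "AzureActiveDirectoryDomainServices" `
    -NextHopType Internet

New-AzRouteTable -Name "aadds-routes" -ResourceGroupName "aadds-rg" `
    -Location "westus2" -Route $route
```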
You must also route inbound traffic from the IP addresses included in the respec
## Next steps
-For more information about some of the network resources and connection options used by Microsoft Entra DS, see the following articles:
+For more information about some of the network resources and connection options used by Domain Services, see the following articles:
* [Azure virtual network peering](../virtual-network/virtual-network-peering-overview.md)
* [Azure VPN gateways](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md)
active-directory-domain-services Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/notifications.md
# Configure email notifications for issues in Microsoft Entra Domain Services
-The health of a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain is monitored by the Azure platform. The health status page in the Microsoft Entra admin center shows any alerts for the managed domain. To make sure issues are responded to in a timely manner, email notifications can be configured to report on health alerts as soon as they're detected in the Microsoft Entra DS managed domain.
+The health of a Microsoft Entra Domain Services managed domain is monitored by the Azure platform. The health status page in the Microsoft Entra admin center shows any alerts for the managed domain. To make sure issues are responded to in a timely manner, email notifications can be configured to report on health alerts as soon as they're detected in the Domain Services managed domain.
This article shows you how to configure email notification recipients for a managed domain.

## Email notification overview
-To alert you of issues with a managed domain, you can configure email notifications. These email notifications specify the managed domain that the alert is present on, as well as giving the time of detection and a link to the health page in the Microsoft Entra admin center. You can then follow the provided troubleshooting advice to resolve the issues.
+To alert you of issues with a managed domain, you can configure email notifications. These email notifications specify the managed domain that the alert is present on, give the time of detection, and include a link to the health page in the Microsoft Entra admin center. You can then follow the provided troubleshooting advice to resolve the issues.
The following example email notification indicates a critical warning or alert was generated on the managed domain:
The following example email notification indicates a critical warning or alert w
### Why would I receive email notifications?
-Microsoft Entra DS sends email notifications for important updates about the managed domain. These notifications are only for urgent issues that impact the service and should be addressed immediately. Each email notification is triggered by an alert on the managed domain. The alerts also appear in the Microsoft Entra admin center and can be viewed on the [Microsoft Entra DS health page][check-health].
+Domain Services sends email notifications for important updates about the managed domain. These notifications are only for urgent issues that impact the service and should be addressed immediately. Each email notification is triggered by an alert on the managed domain. The alerts also appear in the Microsoft Entra admin center and can be viewed on the [Domain Services health page][check-health].
-Microsoft Entra DS doesn't send emails for advertisement, updates, or sales purposes.
+Domain Services doesn't send emails for advertisement, updates, or sales purposes.
-### When will I receive email notifications?
+### When do I receive email notifications?
-A notification is sent immediately when a [new alert][troubleshoot-alerts] is found on a managed domain. If the alert isn't resolved, additional email notifications are sent as a reminder every four days.
+A notification is sent immediately when a [new alert][troubleshoot-alerts] is found on a managed domain. If the alert isn't resolved, another email notification is sent as a reminder every four days.
### Who should receive the email notifications?
-The list of email recipients for Microsoft Entra DS should be composed of people who are able to administer and make changes to the managed domain. This email list should be thought of as your "first responders" to any alerts and issues.
+The list of email recipients for Domain Services should be composed of people who are able to administer and make changes to the managed domain. This email list should be thought of as your "first responders" to any alerts and issues.
-You can add up to five additional emails recipients for email notifications. If you want more than five recipients for email notifications, create a distribution list and add that to the notification list instead.
+You can add up to five more recipients for email notifications. If you want more than five recipients for email notifications, create a distribution list and add that to the notification list instead.
-You can also choose to have all *Global Administrators* of the Microsoft Entra directory and every member of the *AAD DC Administrators* group receive email notifications. Microsoft Entra DS only sends notification to up to 100 email addresses, including the list of global administrators and AAD DC Administrators.
+You can also choose to have all *Global Administrators* of the Microsoft Entra directory and every member of the *AAD DC Administrators* group receive email notifications. Domain Services only sends notifications to up to 100 email addresses, including the list of Global Administrators and AAD DC Administrators.
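Under the hood, these choices are stored on the managed domain's Azure resource (`Microsoft.AAD/domainServices`). A sketch of what the notification settings fragment can look like — the property names follow the public resource schema, and the recipient addresses are placeholders:

```json
{
  "properties": {
    "notificationSettings": {
      "notifyGlobalAdmins": "Enabled",
      "notifyDcAdmins": "Enabled",
      "additionalRecipients": [
        "admin@contoso.com",
        "ops-dl@contoso.com"
      ]
    }
  }
}
```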
## Configure email notifications
-To review the existing email notification recipients or add additional recipients, complete the following steps:
+To review the existing email notification recipients, or add recipients, complete the following steps:
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](../active-directory/roles/permissions-reference.md#authentication-policy-administrator).
1. Search for and select **Microsoft Entra Domain Services**.
1. Select your managed domain, such as *aaddscontoso.com*.
-1. On the left-hand side of the Microsoft Entra DS resource window, select **Notification settings**. The existing recipients for email notifications are shown.
+1. On the left-hand side of the Domain Services resource window, select **Notification settings**. The existing recipients for email notifications are shown.
1. To add an email recipient, enter the email address in the additional recipients table.
1. When done, select **Save** in the top navigation.
To review the existing email notification recipients or add additional recipient
### I received an email notification for an alert but when I logged on to the Microsoft Entra admin center there was no alert. What happened?
-If an alert is resolved, the alert is cleared from the Microsoft Entra admin center. The most likely reason is that someone else who receives email notifications resolved the alert on the managed domain, or it was autoresolved by Azure platform.
+If an alert is resolved, the alert is cleared from the Microsoft Entra admin center. The most likely reason is that someone else who receives email notifications resolved the alert on the managed domain, or it was automatically resolved by the Azure platform.
### Why can I not edit the notification settings?
-If you're unable to access the notification settings page in the Microsoft Entra admin center, you don't have the permissions to edit the managed domain. Contact a global administrator to either get permissions to edit Microsoft Entra DS resource or be removed from the recipient list.
+If you're unable to access the notification settings page in the Microsoft Entra admin center, you don't have the permissions to edit the managed domain. Contact a global administrator to either get permissions to edit the Domain Services resource or be removed from the recipient list.
### I don't seem to be receiving email notifications even though I provided my email address. Why?
active-directory-domain-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/overview.md
-#Customer intent: As an IT administrator or decision maker, I want to understand what Microsoft Entra DS is and how it can benefit my organization.
+#Customer intent: As an IT administrator or decision maker, I want to understand what Domain Services is and how it can benefit my organization.
# What is Microsoft Entra Domain Services?
-Microsoft Entra Domain Services (Microsoft Entra DS) provides managed domain services such as domain join, group policy, lightweight directory access protocol (LDAP), and Kerberos/NTLM authentication. You use these domain services without the need to deploy, manage, and patch domain controllers (DCs) in the cloud.
+Microsoft Entra Domain Services provides managed domain services such as domain join, group policy, lightweight directory access protocol (LDAP), and Kerberos/NTLM authentication. You use these domain services without the need to deploy, manage, and patch domain controllers (DCs) in the cloud.
-A Microsoft Entra DS managed domain lets you run legacy applications in the cloud that can't use modern authentication methods, or where you don't want directory lookups to always go back to an on-premises AD DS environment. You can lift and shift those legacy applications from your on-premises environment into a managed domain, without needing to manage the AD DS environment in the cloud.
+A Domain Services managed domain lets you run legacy applications in the cloud that can't use modern authentication methods, or where you don't want directory lookups to always go back to an on-premises AD DS environment. You can lift and shift those legacy applications from your on-premises environment into a managed domain, without needing to manage the AD DS environment in the cloud.
-Microsoft Entra DS integrates with your existing Microsoft Entra tenant. This integration lets users sign in to services and applications connected to the managed domain using their existing credentials. You can also use existing groups and user accounts to secure access to resources. These features provide a smoother lift-and-shift of on-premises resources to Azure.
+Domain Services integrates with your existing Microsoft Entra tenant. This integration lets users sign in to services and applications connected to the managed domain using their existing credentials. You can also use existing groups and user accounts to secure access to resources. These features provide a smoother lift-and-shift of on-premises resources to Azure.
> [!div class="nextstepaction"]
-> [To get started, create a Microsoft Entra DS managed domain using the Microsoft Entra admin center][tutorial-create]
+> [To get started, create a Domain Services managed domain using the Microsoft Entra admin center][tutorial-create]
-Take a look at our short video to learn more about Microsoft Entra DS.
+Take a look at our short video to learn more about Domain Services.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4LblD]

<a name='how-does-azure-ad-ds-work'></a>
-## How does Microsoft Entra DS work?
+## How does Domain Services work?
-When you create a Microsoft Entra DS managed domain, you define a unique namespace. This namespace is the domain name, such as *aaddscontoso.com*. Two Windows Server domain controllers (DCs) are then deployed into your selected Azure region. This deployment of DCs is known as a replica set.
+When you create a Domain Services managed domain, you define a unique namespace. This namespace is the domain name, such as *aaddscontoso.com*. Two Windows Server domain controllers (DCs) are then deployed into your selected Azure region. This deployment of DCs is known as a replica set.
You don't need to manage, configure, or update these DCs. The Azure platform handles the DCs as part of the managed domain, including backups and encryption at rest using Azure Disk Encryption.
In a hybrid environment with an on-premises AD DS environment, [Microsoft Entra
![Synchronization in Microsoft Entra Domain Services with Microsoft Entra ID and on-premises AD DS using AD Connect](./media/active-directory-domain-services-design-guide/sync-topology.png)
-Microsoft Entra DS replicates identity information from Microsoft Entra ID, so it works with Microsoft Entra tenants that are cloud-only, or synchronized with an on-premises AD DS environment. The same set of Microsoft Entra DS features exists for both environments.
+Domain Services replicates identity information from Microsoft Entra ID, so it works with Microsoft Entra tenants that are cloud-only, or synchronized with an on-premises AD DS environment. The same set of Domain Services features exists for both environments.
* If you have an existing on-premises AD DS environment, you can synchronize user account information to provide a consistent identity for users. To learn more, see [How objects and credentials are synchronized in a managed domain][synchronization].
-* For cloud-only environments, you don't need a traditional on-premises AD DS environment to use the centralized identity services of Microsoft Entra DS.
+* For cloud-only environments, you don't need a traditional on-premises AD DS environment to use the centralized identity services of Domain Services.
-You can expand a managed domain to have more than one replica set per Microsoft Entra tenant. Replica sets can be added to any peered virtual network in any Azure region that supports Microsoft Entra DS. Additional replica sets in different Azure regions provide geographical disaster recovery for legacy applications if an Azure region goes offline. For more information, see [Replica sets concepts and features for managed domains][concepts-replica-sets].
+You can expand a managed domain to have more than one replica set per Microsoft Entra tenant. Replica sets can be added to any peered virtual network in any Azure region that supports Domain Services. By adding replica sets in different Azure regions, you can provide geographical disaster recovery for legacy applications if an Azure region goes offline. For more information, see [Replica sets concepts and features for managed domains][concepts-replica-sets].
-Take a look at this video about how Microsoft Entra DS integrates with your applications and workloads to provide identity services in the cloud:
+Take a look at this video about how Domain Services integrates with your applications and workloads to provide identity services in the cloud:
<br />

>[!VIDEO https://www.youtube.com/embed/T1Nd9APNceQ]
-To see Microsoft Entra DS deployment scenarios in action, you can explore the following examples:
+To see Domain Services deployment scenarios in action, you can explore the following examples:
-* [Microsoft Entra DS for hybrid organizations](scenarios.md#azure-ad-ds-for-hybrid-organizations)
-* [Microsoft Entra DS for cloud-only organizations](scenarios.md#azure-ad-ds-for-cloud-only-organizations)
+* [Domain Services for hybrid organizations](scenarios.md#azure-ad-ds-for-hybrid-organizations)
+* [Domain Services for cloud-only organizations](scenarios.md#azure-ad-ds-for-cloud-only-organizations)
<a name='azure-ad-ds-features-and-benefits'></a>
-## Microsoft Entra DS features and benefits
+## Domain Services features and benefits
-To provide identity services to applications and VMs in the cloud, Microsoft Entra DS is fully compatible with a traditional AD DS environment for operations such as domain-join, secure LDAP (LDAPS), Group Policy, DNS management, and LDAP bind and read support. LDAP write support is available for objects created in the managed domain, but not resources synchronized from Microsoft Entra ID.
+To provide identity services to applications and VMs in the cloud, Domain Services is fully compatible with a traditional AD DS environment for operations such as domain-join, secure LDAP (LDAPS), Group Policy, DNS management, and LDAP bind and read support. LDAP write support is available for objects created in the managed domain, but not resources synchronized from Microsoft Entra ID.
-To learn more about your identity options, [compare Microsoft Entra DS with Microsoft Entra ID, AD DS on Azure VMs, and AD DS on-premises][compare].
+To learn more about your identity options, [compare Domain Services with Microsoft Entra ID, AD DS on Azure VMs, and AD DS on-premises][compare].
-The following features of Microsoft Entra DS simplify deployment and management operations:
+The following features of Domain Services simplify deployment and management operations:
-* **Simplified deployment experience:** Microsoft Entra DS is enabled for your Microsoft Entra tenant using a single wizard in the Microsoft Entra admin center.
-* **Integrated with Microsoft Entra ID:** User accounts, group memberships, and credentials are automatically available from your Microsoft Entra tenant. New users, groups, or changes to attributes from your Microsoft Entra tenant or your on-premises AD DS environment are automatically synchronized to Microsoft Entra DS.
- * Accounts in external directories linked to your Microsoft Entra ID aren't available in Microsoft Entra DS. Credentials aren't available for those external directories, so can't be synchronized into a managed domain.
-* **Use your corporate credentials/passwords:** Passwords for users in Microsoft Entra DS are the same as in your Microsoft Entra tenant. Users can use their corporate credentials to domain-join machines, sign in interactively or over remote desktop, and authenticate against the managed domain.
+* **Simplified deployment experience:** Domain Services is enabled for your Microsoft Entra tenant using a single wizard in the Microsoft Entra admin center.
+* **Integrated with Microsoft Entra ID:** User accounts, group memberships, and credentials are automatically available from your Microsoft Entra tenant. New users, groups, or changes to attributes from your Microsoft Entra tenant or your on-premises AD DS environment are automatically synchronized to Domain Services.
+ * Accounts in external directories linked to your Microsoft Entra ID aren't available in Domain Services. Credentials aren't available for those external directories, so they can't be synchronized into a managed domain.
+* **Use your corporate credentials/passwords:** Passwords for users in Domain Services are the same as in your Microsoft Entra tenant. Users can use their corporate credentials to domain-join machines, sign in interactively or over remote desktop, and authenticate against the managed domain.
* **NTLM and Kerberos authentication:** With support for NTLM and Kerberos authentication, you can deploy applications that rely on Windows-integrated authentication.
-* **High availability:** Microsoft Entra DS includes multiple domain controllers, which provide high availability for your managed domain. This high availability guarantees service uptime and resilience to failures.
+* **High availability:** Domain Services includes multiple domain controllers, which provide high availability for your managed domain. This high availability guarantees service uptime and resilience to failures.
  * In regions that support [Azure Availability Zones][availability-zones], these domain controllers are also distributed across zones for additional resiliency.
  * [Replica sets][concepts-replica-sets] can also be used to provide geographical disaster recovery for legacy applications if an Azure region goes offline.

Some key aspects of a managed domain include the following:

* The managed domain is a stand-alone domain. It isn't an extension of an on-premises domain.
- * If needed, you can create one-way outbound forest trusts from Microsoft Entra DS to an on-premises AD DS environment. For more information, see [Forest concepts and features for Microsoft Entra DS][forest-trusts].
+ * If needed, you can create one-way outbound forest trusts from Domain Services to an on-premises AD DS environment. For more information, see [Forest concepts and features for Domain Services][forest-trusts].
* Your IT team doesn't need to manage, patch, or monitor domain controllers for this managed domain. For hybrid environments that run AD DS on-premises, you don't need to manage AD replication to the managed domain. User accounts, group memberships, and credentials from your on-premises directory are synchronized to Microsoft Entra ID via [Microsoft Entra Connect][azure-ad-connect]. These user accounts, group memberships, and credentials are automatically available within the managed domain.

## Next steps
-To learn more about Microsoft Entra DS compares with other identity solutions and how synchronization works, see the following articles:
+To learn more about how Domain Services compares with other identity solutions and how synchronization works, see the following articles:
-* [Compare Microsoft Entra DS with Microsoft Entra ID, Active Directory Domain Services on Azure VMs, and Active Directory Domain Services on-premises][compare]
+* [Compare Domain Services with Microsoft Entra ID, Active Directory Domain Services on Azure VMs, and Active Directory Domain Services on-premises][compare]
* [Learn how Microsoft Entra Domain Services synchronizes with your Microsoft Entra directory][synchronization]
-* To learn how to administrator a managed domain, see [management concepts for user accounts, passwords, and administration in Microsoft Entra DS][administration-concepts].
+* To learn how to administer a managed domain, see [management concepts for user accounts, passwords, and administration in Domain Services][administration-concepts].
To get started, [create a managed domain using the Microsoft Entra admin center][tutorial-create].
active-directory-domain-services Password Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/password-policy.md
Title: Create and use password policies in Microsoft Entra Domain Services | Microsoft Docs
-description: Learn how and why to use fine-grained password policies to secure and control account passwords in a Microsoft Entra DS managed domain.
+description: Learn how and why to use fine-grained password policies to secure and control account passwords in a Domain Services managed domain.
Previously updated : 05/09/2023 Last updated : 09/21/2023

# Password and account lockout policies on Microsoft Entra Domain Services managed domains
-To manage user security in Microsoft Entra Domain Services (Microsoft Entra DS), you can define fine-grained password policies that control account lockout settings or minimum password length and complexity. A default fine grained password policy is created and applied to all users in a Microsoft Entra DS managed domain. To provide granular control and meet specific business or compliance needs, additional policies can be created and applied to specific users or groups.
+To manage user security in Microsoft Entra Domain Services, you can define fine-grained password policies that control account lockout settings or minimum password length and complexity. A default fine-grained password policy is created and applied to all users in a Domain Services managed domain. To provide granular control and meet specific business or compliance needs, additional policies can be created and applied to specific users or groups.
-This article shows you how to create and configure a fine-grained password policy in Microsoft Entra DS using the Active Directory Administrative Center.
+This article shows you how to create and configure a fine-grained password policy in Domain Services using the Active Directory Administrative Center.
> [!NOTE]
> Password policies are only available for managed domains created using the Resource Manager deployment model.
For more information about password policies and using the Active Directory Admi
Policies are distributed through group association in a managed domain, and any changes you make are applied at the next user sign-in. Changing the policy doesn't unlock a user account that's already locked out.
-Password policies behave a little differently depending on how the user account they're applied to was created. There are two ways a user account can be created in Microsoft Entra DS:
+Password policies behave a little differently depending on how the user account they're applied to was created. There are two ways a user account can be created in Domain Services:
* The user account can be synchronized in from Microsoft Entra ID. This includes cloud-only user accounts created directly in Azure, and hybrid user accounts synchronized from an on-premises AD DS environment using Microsoft Entra Connect.
- * The majority of user accounts in Microsoft Entra DS are created through the synchronization process from Microsoft Entra ID.
+ * The majority of user accounts in Domain Services are created through the synchronization process from Microsoft Entra ID.
* The user account can be manually created in a managed domain, and doesn't exist in Microsoft Entra ID.
-All users, regardless of how they're created, have the following account lockout policies applied by the default password policy in Microsoft Entra DS:
+All users, regardless of how they're created, have the following account lockout policies applied by the default password policy in Domain Services:
* **Account lockout duration:** 30
* **Number of failed logon attempts allowed:** 5
All users, regardless of how they're created, have the following account lockout
With these default settings, user accounts are locked out for 30 minutes if five invalid passwords are used within 2 minutes. Accounts are automatically unlocked after 30 minutes.
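If you need to check for or clear lockouts in the managed domain, a hedged sketch using the standard Active Directory PowerShell cmdlets follows. It assumes a domain-joined management VM with the RSAT AD tools installed and a session run as a member of *AAD DC Administrators*; the account name is illustrative.

```powershell
# Sketch: inspect and clear account lockouts in the managed domain.
Import-Module ActiveDirectory

# List accounts currently locked out.
Search-ADAccount -LockedOut | Select-Object SamAccountName, LastLogonDate

# Unlock a specific account once the failed sign-ins are understood.
Unlock-ADAccount -Identity "contosouser"
```

Remember that unlocking here affects only the managed domain; the source account in Microsoft Entra ID or on-premises AD DS is unaffected.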
-Account lockouts only occur within the managed domain. User accounts are only locked out in Microsoft Entra DS, and only due to failed sign-in attempts against the managed domain. User accounts that were synchronized in from Microsoft Entra ID or on-premises aren't locked out in their source directories, only in Microsoft Entra DS.
+Account lockouts only occur within the managed domain. User accounts are only locked out in Domain Services, and only due to failed sign-in attempts against the managed domain. User accounts that were synchronized in from Microsoft Entra ID or on-premises aren't locked out in their source directories, only in Domain Services.
-If you have a Microsoft Entra password policy that specifies a maximum password age greater than 90 days, that password age is applied to the default policy in Microsoft Entra DS. You can configure a custom password policy to define a different maximum password age in Microsoft Entra DS. Take care if you have a shorter maximum password age configured in a Microsoft Entra DS password policy than in Microsoft Entra ID or an on-premises AD DS environment. In that scenario, a user's password may expire in Microsoft Entra DS before they're prompted to change in Microsoft Entra ID or an on-premises AD DS environment.
+If you have a Microsoft Entra password policy that specifies a maximum password age greater than 90 days, that password age is applied to the default policy in Domain Services. You can configure a custom password policy to define a different maximum password age in Domain Services. Take care if you have a shorter maximum password age configured in a Domain Services password policy than in Microsoft Entra ID or an on-premises AD DS environment. In that scenario, a user's password may expire in Domain Services before they're prompted to change in Microsoft Entra ID or an on-premises AD DS environment.
-For user accounts created manually in a managed domain, the following additional password settings are also applied from the default policy. These settings don't apply to user accounts synchronized in from Microsoft Entra ID, as a user can't update their password directly in Microsoft Entra DS.
+For user accounts created manually in a managed domain, the following additional password settings are also applied from the default policy. These settings don't apply to user accounts synchronized in from Microsoft Entra ID, as a user can't update their password directly in Domain Services.
* **Minimum password length (characters):** 7
* **Passwords must meet complexity requirements**
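The Active Directory Administrative Center steps this article walks through have a PowerShell equivalent. A hedged sketch, where the policy name, values, and target group are illustrative and lower *Precedence* values take priority when multiple policies apply to a user:

```powershell
# Sketch: create a custom fine-grained password policy and apply it to a group.
Import-Module ActiveDirectory

New-ADFineGrainedPasswordPolicy -Name "CustomPolicy" -Precedence 100 `
    -MinPasswordLength 10 -ComplexityEnabled $true `
    -LockoutThreshold 5 -LockoutDuration "00:30:00" `
    -LockoutObservationWindow "00:02:00" -MaxPasswordAge "90.00:00:00"

# Apply the policy to a (hypothetical) group in the managed domain.
Add-ADFineGrainedPasswordPolicySubject -Identity "CustomPolicy" `
    -Subjects "CustomPolicyUsers"
```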
active-directory-domain-services Powershell Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-create-instance.md
Previously updated : 01/29/2023 Last updated : 09/21/2023

# Enable Microsoft Entra Domain Services using PowerShell
-Microsoft Entra Domain Services (Microsoft Entra DS) provides managed domain services such as domain join, group policy, LDAP, Kerberos/NTLM authentication that is fully compatible with Windows Server Active Directory. You consume these domain services without deploying, managing, and patching domain controllers yourself. Microsoft Entra DS integrates with your existing Microsoft Entra tenant. This integration lets users sign in using their corporate credentials, and you can use existing groups and user accounts to secure access to resources.
+Microsoft Entra Domain Services provides managed domain services such as domain join, group policy, LDAP, and Kerberos/NTLM authentication that are fully compatible with Windows Server Active Directory. You consume these domain services without deploying, managing, and patching domain controllers yourself. Domain Services integrates with your existing Microsoft Entra tenant. This integration lets users sign in using their corporate credentials, and you can use existing groups and user accounts to secure access to resources.
-This article shows you how to enable Microsoft Entra DS using PowerShell.
+This article shows you how to enable Domain Services using PowerShell.
[!INCLUDE [updated-for-az.md](../../includes/updated-for-az.md)]
To complete this article, you need the following resources:
* Install and configure Azure AD PowerShell.
  * If needed, follow the instructions to [install the Azure AD PowerShell module and connect to Microsoft Entra ID](/powershell/azure/active-directory/install-adv2).
  * Make sure that you sign in to your Microsoft Entra tenant using the [Connect-AzureAD][Connect-AzureAD] cmdlet.
-* You need *global administrator* privileges in your Microsoft Entra tenant to enable Microsoft Entra DS.
-* You need *Contributor* privileges in your Azure subscription to create the required Microsoft Entra DS resources.
+* You need *global administrator* privileges in your Microsoft Entra tenant to enable Domain Services.
+* You need *Contributor* privileges in your Azure subscription to create the required Domain Services resources.
> [!IMPORTANT]
> While the **Az.ADDomainServices** PowerShell module is in preview, you must install it separately
To complete this article, you need the following resources:
## Create required Microsoft Entra resources
-Microsoft Entra DS requires a service principal to authenticate and communicate and a Microsoft Entra group to define which users have administrative permissions in the managed domain.
+Domain Services requires a service principal to authenticate and communicate and a Microsoft Entra group to define which users have administrative permissions in the managed domain.
First, create a Microsoft Entra service principal by using a specific application ID named *Domain Controller Services*. The ID value is *2565bd9d-da50-47d4-8b85-4c97f669dc36* for global Azure and *6ba9a5d4-8456-4118-b521-9c5ca10cdf84* for other Azure clouds. Don't change this application ID.
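As a sketch of that step using the Azure AD PowerShell module (the group name and description mirror the delegated-administration group this article describes; treat the exact values as assumptions if your tenant differs):

```azurepowershell-interactive
# Create the Domain Controller Services service principal (global Azure AppId).
New-AzureADServicePrincipal -AppId "2565bd9d-da50-47d4-8b85-4c97f669dc36"

# Create the group that delegates administrative permissions in the managed domain.
New-AzureADGroup -DisplayName "AAD DC Administrators" `
    -Description "Delegated group to administer Microsoft Entra Domain Services" `
    -SecurityEnabled $true -MailEnabled $false -MailNickName "AADDCAdministrators"
```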
New-AzResourceGroup `
-Location $AzureLocation ```
-Create the virtual network and subnets for Microsoft Entra Domain Services. Two subnets are created - one for *DomainServices*, and one for *Workloads*. Microsoft Entra DS is deployed into the dedicated *DomainServices* subnet. Don't deploy other applications or workloads into this subnet. Use the separate *Workloads* or other subnets for the rest of your VMs.
+Create the virtual network and subnets for Microsoft Entra Domain Services. Two subnets are created - one for *DomainServices*, and one for *Workloads*. Domain Services is deployed into the dedicated *DomainServices* subnet. Don't deploy other applications or workloads into this subnet. Use the separate *Workloads* or other subnets for the rest of your VMs.
Create the subnets using the [New-AzVirtualNetworkSubnetConfig][New-AzVirtualNetworkSubnetConfig] cmdlet, then create the virtual network using the [New-AzVirtualNetwork][New-AzVirtualNetwork] cmdlet.
$Vnet= New-AzVirtualNetwork `
### Create a network security group
-Microsoft Entra DS needs a network security group to secure the ports needed for the managed domain and block all other incoming traffic. A [network security group (NSG)][nsg-overview] contains a list of rules that allow or deny network traffic to traffic in an Azure virtual network. In Microsoft Entra DS, the network security group acts as an extra layer of protection to lock down access to the managed domain. To view the ports required, see [Network security groups and required ports][network-ports].
+Domain Services needs a network security group to secure the ports needed for the managed domain and block all other incoming traffic. A [network security group (NSG)][nsg-overview] contains a list of rules that allow or deny network traffic to resources in an Azure virtual network. In Domain Services, the network security group acts as an extra layer of protection to lock down access to the managed domain. To view the ports required, see [Network security groups and required ports][network-ports].
The following PowerShell cmdlets use [New-AzNetworkSecurityRuleConfig][New-AzNetworkSecurityRuleConfig] to create the rules, then [New-AzNetworkSecurityGroup][New-AzNetworkSecurityGroup] to create the network security group. The network security group and rules are then associated with the virtual network subnet using the [Set-AzVirtualNetworkSubnetConfig][Set-AzVirtualNetworkSubnetConfig] cmdlet.
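As a hedged sketch of the pattern those cmdlets follow (one rule only; the rule name, priority, and the complete set of required rules are assumptions, and the `$ResourceGroupName` and `$AzureLocation` variables are assumed to be defined earlier in the script):

```azurepowershell-interactive
# Allow the platform to manage the domain over PowerShell remoting (TCP 5986).
$nsgRule = New-AzNetworkSecurityRuleConfig -Name "AllowPSRemoting" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 301 `
    -SourceAddressPrefix AzureActiveDirectoryDomainServices -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 5986

# Create the network security group with that rule attached.
$nsg = New-AzNetworkSecurityGroup -Name "aadds-nsg" `
    -ResourceGroupName $ResourceGroupName -Location $AzureLocation `
    -SecurityRules $nsgRule
```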
$vnet | Set-AzVirtualNetwork
Now let's create a managed domain. Set your Azure subscription ID, and then provide a name for the managed domain, such as *aaddscontoso.com*. You can get your subscription ID using the [Get-AzSubscription][Get-AzSubscription] cmdlet.
-If you choose a region that supports Availability Zones, the Microsoft Entra DS resources are distributed across zones for redundancy.
+If you choose a region that supports Availability Zones, the Domain Services resources are distributed across zones for redundancy.
Availability Zones are unique physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions.
-There's nothing for you to configure for Microsoft Entra DS to be distributed across zones. The Azure platform automatically handles the zone distribution of resources. For more information and to see region availability, see [What are Availability Zones in Azure?][availability-zones].
+There's nothing for you to configure for Domain Services to be distributed across zones. The Azure platform automatically handles the zone distribution of resources. For more information and to see region availability, see [What are Availability Zones in Azure?][availability-zones].
```azurepowershell-interactive $AzureSubscriptionId = "YOUR_AZURE_SUBSCRIPTION_ID"
When the Microsoft Entra admin center shows that the managed domain has finished
* Update DNS settings for the virtual network so virtual machines can find the managed domain for domain join or authentication.
  * To configure DNS, select your managed domain in the portal. On the **Overview** window, you are prompted to automatically configure these DNS settings.
-* [Enable password synchronization to Microsoft Entra DS](tutorial-create-instance.md#enable-user-accounts-for-azure-ad-ds) so end users can sign in to the managed domain using their corporate credentials.
+* [Enable password synchronization to Domain Services](tutorial-create-instance.md#enable-user-accounts-for-azure-ad-ds) so end users can sign in to the managed domain using their corporate credentials.
## Complete PowerShell script

The following complete PowerShell script combines all of the tasks shown in this article. Copy the script and save it to a file with a `.ps1` extension. For Azure Global, use AppId value *2565bd9d-da50-47d4-8b85-4c97f669dc36*. For other Azure clouds, use AppId value *6ba9a5d4-8456-4118-b521-9c5ca10cdf84*. Run the script in a local PowerShell console or the [Azure Cloud Shell][cloud-shell].

> [!NOTE]
-> To enable Microsoft Entra DS, you must be a global administrator for the Microsoft Entra tenant. You also need at least *Contributor* privileges in the Azure subscription.
+> To enable Domain Services, you must be a global administrator for the Microsoft Entra tenant. You also need at least *Contributor* privileges in the Azure subscription.
```azurepowershell-interactive # Change the following values to match your deployment.
When the Microsoft Entra admin center shows that the managed domain has finished
* Update DNS settings for the virtual network so virtual machines can find the managed domain for domain join or authentication.
  * To configure DNS, select your managed domain in the portal. On the **Overview** window, you are prompted to automatically configure these DNS settings.
-* [Enable password synchronization to Microsoft Entra DS](tutorial-create-instance.md#enable-user-accounts-for-azure-ad-ds) so end users can sign in to the managed domain using their corporate credentials.
+* [Enable password synchronization to Domain Services](tutorial-create-instance.md#enable-user-accounts-for-azure-ad-ds) so end users can sign in to the managed domain using their corporate credentials.
## Next steps
active-directory-domain-services Powershell Scoped Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-scoped-synchronization.md
# Configure scoped synchronization from Microsoft Entra ID to Microsoft Entra Domain Services using Azure AD PowerShell
-To provide authentication services, Microsoft Entra Domain Services (Microsoft Entra DS) synchronizes users and groups from Microsoft Entra ID. In a hybrid environment, users and groups from an on-premises Active Directory Domain Services (AD DS) environment can be first synchronized to Microsoft Entra ID using Microsoft Entra Connect, and then synchronized to Microsoft Entra DS.
+To provide authentication services, Microsoft Entra Domain Services synchronizes users and groups from Microsoft Entra ID. In a hybrid environment, users and groups from an on-premises Active Directory Domain Services (AD DS) environment can be first synchronized to Microsoft Entra ID using Microsoft Entra Connect, and then synchronized to Domain Services.
-By default, all users and groups from a Microsoft Entra directory are synchronized to a Microsoft Entra DS managed domain. If you have specific needs, you can instead choose to synchronize only a defined set of users.
+By default, all users and groups from a Microsoft Entra directory are synchronized to a Domain Services managed domain. If you have specific needs, you can instead choose to synchronize only a defined set of users.
This article shows you how to create a managed domain that uses scoped synchronization and then change or disable the set of scoped users using Azure AD PowerShell. You can also [complete these steps using the Microsoft Entra admin center][scoped-sync].
To complete this article, you need the following resources and privileges:
* If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* A Microsoft Entra Domain Services managed domain enabled and configured in your Microsoft Entra tenant.
  * If needed, complete the tutorial to [create and configure a Microsoft Entra Domain Services managed domain][tutorial-create-instance].
-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to change the Microsoft Entra DS synchronization scope.
+* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to change the Domain Services synchronization scope.
## Scoped synchronization overview
To learn more about the synchronization process, see [Understand synchronization
To configure scoped synchronization using PowerShell, first save the following script to a file named `Select-GroupsToSync.ps1`.
-This script configures Microsoft Entra DS to synchronize selected groups from Microsoft Entra ID. All user accounts that are part of the specified groups are synchronized to the managed domain.
+This script configures Domain Services to synchronize selected groups from Microsoft Entra ID. All user accounts that are part of the specified groups are synchronized to the managed domain.
This script is used in the additional steps in this article.
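As a minimal illustration of what the script's first portion does, resolving the chosen group display names to their object IDs (the group names and the parameter shape here are assumptions, not the full script):

```powershell
# Sketch: resolve display names of groups to synchronize into object IDs.
param(
    [string[]]$GroupsToAdd = @("AAD DC Administrators", "CustomGroupToSync")
)

Connect-AzureAD

$groupObjectIds = foreach ($groupName in $GroupsToAdd) {
    (Get-AzureADGroup -SearchString $groupName).ObjectId
}
Write-Output $groupObjectIds
```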
Write-Output "******************************************************************
To enable group-based scoped synchronization for a managed domain, complete the following steps:
-1. First set *"filteredSync" = "Enabled"* on the Microsoft Entra DS resource, then update the managed domain.
+1. First set *"filteredSync" = "Enabled"* on the Domain Services resource, then update the managed domain.
When prompted, specify the credentials for a *global admin* to sign in to your Microsoft Entra tenant using the [Connect-AzureAD][Connect-AzureAD] cmdlet:
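One plausible shape for the update in step 1, as a hedged sketch (the Az resource cmdlets and property path shown are assumptions; check them against your deployment):

```azurepowershell-interactive
# Fetch the Domain Services resource, enable filtered sync, and push the update.
$managedDomain = Get-AzResource -ResourceType "Microsoft.AAD/domainServices"
$managedDomain.Properties.filteredSync = "Enabled"
$managedDomain | Set-AzResource -Force
```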
Changing the scope of synchronization causes the managed domain to resynchronize
## Disable scoped synchronization
-To disable group-based scoped synchronization for a managed domain, set *"filteredSync" = "Disabled"* on the Microsoft Entra DS resource, then update the managed domain. When complete, all users and groups are set to synchronize from Microsoft Entra ID.
+To disable group-based scoped synchronization for a managed domain, set *"filteredSync" = "Disabled"* on the Domain Services resource, then update the managed domain. When complete, all users and groups are set to synchronize from Microsoft Entra ID.
When prompted, specify the credentials for a *global admin* to sign in to your Microsoft Entra tenant using the [Connect-AzureAD][Connect-AzureAD] cmdlet:
active-directory-domain-services Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/scenarios.md
Previously updated : 01/29/2023 Last updated : 09/23/2023
active-directory-domain-services Scoped Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/scoped-synchronization.md
Previously updated : 03/22/2023 Last updated : 09/21/2023

# Configure scoped synchronization from Microsoft Entra ID to Microsoft Entra Domain Services using the Microsoft Entra admin center
-To provide authentication services, Microsoft Entra Domain Services (Microsoft Entra DS) synchronizes users and groups from Microsoft Entra ID. In a hybrid environment, users and groups from an on-premises Active Directory Domain Services (AD DS) environment can be first synchronized to Microsoft Entra ID using Microsoft Entra Connect, and then synchronized to a Microsoft Entra DS managed domain.
+To provide authentication services, Microsoft Entra Domain Services synchronizes users and groups from Microsoft Entra ID. In a hybrid environment, users and groups from an on-premises Active Directory Domain Services (AD DS) environment can be first synchronized to Microsoft Entra ID using Microsoft Entra Connect, and then synchronized to a Domain Services managed domain.
-By default, all users and groups from a Microsoft Entra directory are synchronized to a managed domain. If only some users need to use Microsoft Entra DS, you can instead choose to synchronize only groups of users. You can filter synchronization for groups on-premises, cloud only, or both.
+By default, all users and groups from a Microsoft Entra directory are synchronized to a managed domain. If only some users need to use Domain Services, you can instead choose to synchronize only groups of users. You can filter synchronization for groups on-premises, cloud only, or both.
This article shows you how to configure scoped synchronization and then change or disable the set of scoped users using the Microsoft Entra admin center. You can also [complete these steps using PowerShell][scoped-sync-powershell].
To complete this article, you need the following resources and privileges:
* If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* A Microsoft Entra Domain Services managed domain enabled and configured in your Microsoft Entra tenant.
  * If needed, complete the tutorial to [create and configure a Microsoft Entra Domain Services managed domain][tutorial-create-instance].
-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to change the Microsoft Entra DS synchronization scope.
+* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to change the Domain Services synchronization scope.
## Scoped synchronization overview
To enable scoped synchronization in the Microsoft Entra admin center, complete t
1. In the [Microsoft Entra admin center](https://entra.microsoft.com), search for and select **Microsoft Entra Domain Services**. Choose your managed domain, such as *aaddscontoso.com*.
1. Select **Synchronization** from the menu on the left-hand side.
1. For *Synchronization scope*, select **All** or **Cloud Only**.
-1. To filter synchronization for selected groups, click **Show selected groups**, choose whether to synchronize cloud-only groups, on-premises groups, or both. For example, the following screenshot shows how to synchronize only three groups that were created in Microsoft Entra ID. Only users who belong to those groups will have their accounts synchronized to Microsoft Entra DS.
+1. To filter synchronization for selected groups, click **Show selected groups**, and choose whether to synchronize cloud-only groups, on-premises groups, or both. For example, the following screenshot shows how to synchronize only three groups that were created in Microsoft Entra ID. Only users who belong to those groups will have their accounts synchronized to Domain Services.
:::image type="content" source="media/scoped-synchronization/cloud-only-groups.png" alt-text="Screenshot that shows filter by cloud-only groups." :::
active-directory-domain-services Secure Remote Vm Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-remote-vm-access.md
Previously updated : 01/29/2023 Last updated : 09/21/2023

# Secure remote access to virtual machines in Microsoft Entra Domain Services
-To secure remote access to virtual machines (VMs) that run in a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain, you can use Remote Desktop Services (RDS) and Network Policy Server (NPS). Microsoft Entra DS authenticates users as they request access through the RDS environment. For enhanced security, you can integrate Microsoft Entra multifactor authentication to provide an additional authentication prompt during sign-in events. Microsoft Entra multifactor authentication uses an extension for NPS to provide this feature.
+To secure remote access to virtual machines (VMs) that run in a Microsoft Entra Domain Services managed domain, you can use Remote Desktop Services (RDS) and Network Policy Server (NPS). Domain Services authenticates users as they request access through the RDS environment. For enhanced security, you can integrate Microsoft Entra multifactor authentication to provide another authentication prompt during sign-in events. Microsoft Entra multifactor authentication uses an extension for NPS to provide this feature.
> [!IMPORTANT]
-> The recommended way to securely connect to your VMs in a Microsoft Entra DS managed domain is using Azure Bastion, a fully platform-managed PaaS service that you provision inside your virtual network. A bastion host provides secure and seamless Remote Desktop Protocol (RDP) connectivity to your VMs directly in the Azure portal over SSL. When you connect via a bastion host, your VMs don't need a public IP address, and you don't need to use network security groups to expose access to RDP on TCP port 3389.
+> The recommended way to securely connect to your VMs in a Domain Services managed domain is using Azure Bastion, a fully platform-managed PaaS service that you provision inside your virtual network. A bastion host provides secure and seamless Remote Desktop Protocol (RDP) connectivity to your VMs directly in the Azure portal over SSL. When you connect via a bastion host, your VMs don't need a public IP address, and you don't need to use network security groups to expose access to RDP on TCP port 3389.
>
-> We strongly recommend that you use Azure Bastion in all regions where it's supported. In regions without Azure Bastion availability, follow the steps detailed in this article until Azure Bastion is available. Take care with assigning public IP addresses to VMs joined to Microsoft Entra DS where all incoming RDP traffic is allowed.
+> We strongly recommend that you use Azure Bastion in all regions where it's supported. In regions without Azure Bastion availability, follow the steps detailed in this article until Azure Bastion is available. Take care with assigning public IP addresses to VMs joined to Domain Services where all incoming RDP traffic is allowed.
> > For more information, see [What is Azure Bastion?][bastion-overview].
-This article shows you how to configure RDS in Microsoft Entra DS and optionally use the Microsoft Entra multifactor authentication NPS extension.
+This article shows you how to configure RDS in Domain Services and optionally use the Microsoft Entra multifactor authentication NPS extension.
![Remote Desktop Services (RDS) overview](./media/enable-network-policy-server/remote-desktop-services-overview.png)
To complete this article, you need the following resources:
## Deploy and configure the Remote Desktop environment
-To get started, create a minimum of two Azure VMs that run Windows Server 2016 or Windows Server 2019. For redundancy and high availability of your Remote Desktop (RD) environment, you can add and load balance additional hosts later.
+To get started, create a minimum of two Azure VMs that run Windows Server 2016 or Windows Server 2019. For redundancy and high availability of your Remote Desktop (RD) environment, you can add and load balance hosts later.
A suggested RDS deployment includes the following two VMs:

* *RDGVM01* - Runs the RD Connection Broker server, RD Web Access server, and RD Gateway server.
* *RDSHVM01* - Runs the RD Session Host server.
-Make sure that VMs are deployed into a *workloads* subnet of your Microsoft Entra DS virtual network, then join the VMs to managed domain. For more information, see how to [create and join a Windows Server VM to a managed domain][tutorial-create-join-vm].
+Make sure that VMs are deployed into a *workloads* subnet of your Domain Services virtual network, then join the VMs to the managed domain. For more information, see how to [create and join a Windows Server VM to a managed domain][tutorial-create-join-vm].
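If you script the join instead of using the linked tutorial, a minimal sketch run from inside each VM follows. The domain name is this article's example, and the credential you supply must belong to the managed domain (for example, a member of *AAD DC Administrators*):

```powershell
# Join the VM to the managed domain, then restart to complete the join.
Add-Computer -DomainName "aaddscontoso.com" `
    -Credential (Get-Credential) -Restart
```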
The RD environment deployment contains a number of steps. The existing RD deployment guide can be used in a managed domain without any specific changes:

1. Sign in to VMs created for the RD environment with an account that's part of the *Microsoft Entra DC Administrators* group, such as *contosoadmin*.
1. To create and configure RDS, use the existing [Remote Desktop environment deployment guide][deploy-remote-desktop]. Distribute the RD server components across your Azure VMs as desired.
- * Specific to Microsoft Entra DS - when you configure RD licensing, set it to **Per Device** mode, not **Per User** as noted in the deployment guide.
+ * Specific to Domain Services - when you configure RD licensing, set it to **Per Device** mode, not **Per User** as noted in the deployment guide.
1. If you want to provide access using a web browser, [set up the Remote Desktop web client for your users][rd-web-client].

With RD deployed into the managed domain, you can manage and use the service as you would with an on-premises AD DS domain.
## Deploy and configure NPS and the Microsoft Entra multifactor authentication NPS extension
-If you want to increase the security of the user sign-in experience, you can optionally integrate the RD environment with Microsoft Entra multifactor authentication. With this configuration, users receive an additional prompt during sign-in to confirm their identity.
+If you want to increase the security of the user sign-in experience, you can optionally integrate the RD environment with Microsoft Entra multifactor authentication. With this configuration, users receive another prompt during sign-in to confirm their identity.
-To provide this capability, an additional Network Policy Server (NPS) is installed in your environment along with the Microsoft Entra multifactor authentication NPS extension. This extension integrates with Microsoft Entra ID to request and return the status of multifactor authentication prompts.
+To provide this capability, a Network Policy Server (NPS) is installed in your environment along with the Microsoft Entra multifactor authentication NPS extension. This extension integrates with Microsoft Entra ID to request and return the status of multifactor authentication prompts.
-Users must be [registered to use Microsoft Entra multifactor authentication][user-mfa-registration], which may require additional Microsoft Entra ID licenses.
+Users must be [registered to use Microsoft Entra multifactor authentication][user-mfa-registration], which may require other Microsoft Entra ID licenses.
-To integrate Microsoft Entra multifactor authentication in to your Microsoft Entra DS Remote Desktop environment, create an NPS Server and install the extension:
+To integrate Microsoft Entra multifactor authentication into your Remote Desktop environment, create an NPS server and install the extension:
-1. Create an additional Windows Server 2016 or 2019 VM, such as *NPSVM01*, that's connected to a *workloads* subnet in your Microsoft Entra DS virtual network. Join the VM to the managed domain.
+1. Create another Windows Server 2016 or 2019 VM, such as *NPSVM01*, that's connected to a *workloads* subnet in your Domain Services virtual network. Join the VM to the managed domain.
1. Sign in to the NPS VM as an account that's part of the *Microsoft Entra DC Administrators* group, such as *contosoadmin*.
1. From **Server Manager**, select **Add Roles and Features**, then install the *Network Policy and Access Services* role.
1. Use the existing how-to article to [install and configure the Microsoft Entra multifactor authentication NPS extension][nps-extension].
With the NPS server and Microsoft Entra multifactor authentication NPS extension
To integrate the Microsoft Entra multifactor authentication NPS extension, use the existing how-to article to [integrate your Remote Desktop Gateway infrastructure using the Network Policy Server (NPS) extension and Microsoft Entra ID][azure-mfa-nps-integration].
-The following additional configuration options are needed to integrate with a managed domain:
+The following configuration options are needed to integrate with a managed domain:
1. Don't [register the NPS server in Active Directory][register-nps-ad]. This step fails in a managed domain.
1. In [step 4 to configure network policy][create-nps-policy], also check the box to **Ignore user account dial-in properties**.
```powershell
sc sidtype IAS unrestricted
```
-Users are now prompted for an additional authentication factor when they sign in, such as a text message or prompt in the Microsoft Authenticator app.
+Users are now prompted for another authentication factor when they sign in, such as a text message or prompt in the Microsoft Authenticator app.
## Next steps
active-directory-domain-services Secure Your Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-your-domain.md
Previously updated : 01/29/2023 Last updated : 09/23/2023

# Harden a Microsoft Entra Domain Services managed domain
-By default, Microsoft Entra Domain Services (Microsoft Entra DS) enables the use of ciphers such as NTLM v1 and TLS v1. These ciphers may be required for some legacy applications, but are considered weak and can be disabled if you don't need them. If you have on-premises hybrid connectivity using Microsoft Entra Connect, you can also disable the synchronization of NTLM password hashes.
+By default, Microsoft Entra Domain Services enables the use of ciphers such as NTLM v1 and TLS v1. These ciphers may be required for some legacy applications, but are considered weak and can be disabled if you don't need them. If you have on-premises hybrid connectivity using Microsoft Entra Connect, you can also disable the synchronization of NTLM password hashes.
This article shows you how to harden a managed domain by using settings such as:
To complete this article, you need the following resources:
In addition to **Security settings**, Microsoft Azure Policy has a **Compliance** setting to enforce TLS 1.2 usage. The policy has no impact until it is assigned. When the policy is assigned, it appears in **Compliance**:
-- If the assignment is **Audit**, the compliance will report if the Microsoft Entra DS instance is compliant.
-- If the assignment is **Deny**, the compliance will prevent a Microsoft Entra DS instance from being created if TLS 1.2 is not required and prevent any update to a Microsoft Entra DS instance until TLS 1.2 is required.
+- If the assignment is **Audit**, compliance reporting shows whether the Domain Services instance is compliant.
+- If the assignment is **Deny**, the policy prevents a Domain Services instance from being created if TLS 1.2 is not required, and prevents any update to a Domain Services instance until TLS 1.2 is required.
![Screenshot of Compliance settings](media/secure-your-domain/policy-tls.png)
If needed, [install and configure Azure PowerShell](/powershell/azure/install-az
Also if needed, [install and configure Azure AD PowerShell](/powershell/azure/active-directory/install-adv2). Make sure that you sign in to your Microsoft Entra tenant using the [Connect-AzureAD][Connect-AzureAD] cmdlet.
-To disable weak cipher suites and NTLM credential hash synchronization, sign in to your Azure account, then get the Microsoft Entra DS resource using the [Get-AzResource][Get-AzResource] cmdlet:
+To disable weak cipher suites and NTLM credential hash synchronization, sign in to your Azure account, then get the Domain Services resource using the [Get-AzResource][Get-AzResource] cmdlet:
> [!TIP]
> If you receive an error using the [Get-AzResource][Get-AzResource] command that the *Microsoft.AAD/DomainServices* resource doesn't exist, [elevate your access to manage all Azure subscriptions and management groups][global-admin].
Next, define *DomainSecuritySettings* to configure the following security option
3. Disable TLS v1.

> [!IMPORTANT]
-> Users and service accounts can't perform LDAP simple binds if you disable NTLM password hash synchronization in the Microsoft Entra DS managed domain. If you need to perform LDAP simple binds, don't set the *"SyncNtlmPasswords"="Disabled";* security configuration option in the following command.
+> Users and service accounts can't perform LDAP simple binds if you disable NTLM password hash synchronization in the Domain Services managed domain. If you need to perform LDAP simple binds, don't set the *"SyncNtlmPasswords"="Disabled";* security configuration option in the following command.
```powershell
$securitySettings = @{"DomainSecuritySettings"=@{"NtlmV1"="Disabled";"SyncNtlmPasswords"="Disabled";"TlsV1"="Disabled";"KerberosRc4Encryption"="Disabled";"KerberosArmoring"="Disabled"}}
```
-Finally, apply the defined security settings to the managed domain using the [Set-AzResource][Set-AzResource] cmdlet. Specify the Microsoft Entra DS resource from the first step, and the security settings from the previous step.
+Finally, apply the defined security settings to the managed domain using the [Set-AzResource][Set-AzResource] cmdlet. Specify the Domain Services resource from the first step, and the security settings from the previous step.
```powershell
Set-AzResource -Id $DomainServicesResource.ResourceId -Properties $securitySettings -ApiVersion "2021-03-01" -Verbose -Force
```
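Putting the pieces together, the hardening flow can be sketched end to end. This is a minimal sketch, assuming a single *Microsoft.AAD/DomainServices* resource in the subscription; it deliberately leaves NTLM password hash synchronization enabled so that LDAP simple binds keep working:

```powershell
# Sign in and locate the Domain Services resource (assumes one managed domain).
Connect-AzAccount
$DomainServicesResource = Get-AzResource -ResourceType "Microsoft.AAD/DomainServices"

# Disable the weak protocols. SyncNtlmPasswords is intentionally left enabled
# here because disabling it breaks LDAP simple binds.
$securitySettings = @{"DomainSecuritySettings"=@{"NtlmV1"="Disabled";"TlsV1"="Disabled";"KerberosRc4Encryption"="Disabled"}}

# Apply the settings to the managed domain.
Set-AzResource -Id $DomainServicesResource.ResourceId -Properties $securitySettings -ApiVersion "2021-03-01" -Verbose -Force
```

The settings take several minutes to roll out across the managed domain's domain controllers.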
active-directory-domain-services Security Audit Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/security-audit-events.md
# Enable security and DNS audits for Microsoft Entra Domain Services
-Microsoft Entra Domain Services (Microsoft Entra DS) security and DNS audits let Azure stream events to targeted resources. These resources include Azure Storage, Azure Log Analytics workspaces, or Azure Event Hub. After you enable security audit events, Microsoft Entra DS sends all the audited events for the selected category to the targeted resource.
+Microsoft Entra Domain Services security and DNS audits let Azure stream events to targeted resources. These resources include Azure Storage, Azure Log Analytics workspaces, or Azure Event Hubs. After you enable security audit events, Domain Services sends all the audited events for the selected category to the targeted resource.
You can archive events into Azure Storage and stream events into security information and event management (SIEM) software (or equivalent) using Azure Event Hubs, or do your own analysis using Azure Log Analytics workspaces from the Microsoft Entra admin center.

## Security audit destinations
-You can use Azure Storage, Azure Event Hubs, or Azure Log Analytics workspaces as a target resource for Microsoft Entra DS security audits. These destinations can be combined. For example, you could use Azure Storage for archiving security audit events, but an Azure Log Analytics workspace to analyze and report on the information in the short term.
+You can use Azure Storage, Azure Event Hubs, or Azure Log Analytics workspaces as a target resource for Domain Services security audits. These destinations can be combined. For example, you could use Azure Storage for archiving security audit events, but an Azure Log Analytics workspace to analyze and report on the information in the short term.
The following table outlines scenarios for each destination resource type.

> [!IMPORTANT]
-> You need to create the target resource before you enable Microsoft Entra DS security audits. You can create these resources using the Microsoft Entra admin center, Azure PowerShell, or the Azure CLI.
+> You need to create the target resource before you enable Domain Services security audits. You can create these resources using the Microsoft Entra admin center, Azure PowerShell, or the Azure CLI.
| Target Resource | Scenario |
|:|:|
-|Azure Storage| This target should be used when your primary need is to store security audit events for archival purposes. Other targets can be used for archival purposes, however those targets provide capabilities beyond the primary need of archiving. <br /><br />Before you enable Microsoft Entra DS security audit events, first [Create an Azure Storage account](../storage/common/storage-account-create.md).|
-|Azure Event Hubs| This target should be used when your primary need is to share security audit events with additional software such as data analysis software or security information & event management (SIEM) software.<br /><br />Before you enable Microsoft Entra DS security audit events, [Create an event hub using Microsoft Entra admin center](../event-hubs/event-hubs-create.md)|
-|Azure Log Analytics Workspace| This target should be used when your primary need is to analyze and review secure audits from the Microsoft Entra admin center directly.<br /><br />Before you enable Microsoft Entra DS security audit events, [Create a Log Analytics workspace in the Microsoft Entra admin center.](../azure-monitor/logs/quick-create-workspace.md)|
+|Azure Storage| This target should be used when your primary need is to store security audit events for archival purposes. Other targets can be used for archival purposes, however those targets provide capabilities beyond the primary need of archiving. <br /><br />Before you enable Domain Services security audit events, first [Create an Azure Storage account](../storage/common/storage-account-create.md).|
+|Azure Event Hubs| This target should be used when your primary need is to share security audit events with additional software such as data analysis software or security information & event management (SIEM) software.<br /><br />Before you enable Domain Services security audit events, [Create an event hub using Microsoft Entra admin center](../event-hubs/event-hubs-create.md)|
+|Azure Log Analytics Workspace| This target should be used when your primary need is to analyze and review secure audits from the Microsoft Entra admin center directly.<br /><br />Before you enable Domain Services security audit events, [Create a Log Analytics workspace in the Microsoft Entra admin center.](../azure-monitor/logs/quick-create-workspace.md)|
## Enable security audit events using the Microsoft Entra admin center
-To enable Microsoft Entra DS security audit events using the Microsoft Entra admin center, complete the following steps.
+To enable Domain Services security audit events using the Microsoft Entra admin center, complete the following steps.
> [!IMPORTANT]
-> Microsoft Entra DS security audits aren't retroactive. You can't retrieve or replay events from the past. Microsoft Entra DS can only send events that occur after security audits are enabled.
+> Domain Services security audits aren't retroactive. You can't retrieve or replay events from the past. Domain Services can only send events that occur after security audits are enabled.
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a Global Administrator.
1. Search for and select **Microsoft Entra Domain Services**. Choose your managed domain, such as *aaddscontoso.com*.
-1. In the Microsoft Entra DS window, select **Diagnostic settings** on the left-hand side.
+1. In the Domain Services window, select **Diagnostic settings** on the left-hand side.
1. No diagnostics are configured by default. To get started, select **Add diagnostic setting**.

    ![Add a diagnostic setting for Microsoft Entra Domain Services](./media/security-audit-events/add-diagnostic-settings.png)
You can select different log categories for each targeted resource within a single configuration. This ability lets you choose which log categories you want to keep for Log Analytics and which log categories you want to archive, for example.
-1. When done, select **Save** to commit your changes. The target resources start to receive Microsoft Entra DS audit events soon after the configuration is saved.
+1. When done, select **Save** to commit your changes. The target resources start to receive Domain Services audit events soon after the configuration is saved.
## Enable security and DNS audit events using Azure PowerShell
-To enable Microsoft Entra DS security and DNS audit events using Azure PowerShell, complete the following steps. If needed, first [install the Azure PowerShell module and connect to your Azure subscription](/powershell/azure/install-azure-powershell).
+To enable Domain Services security and DNS audit events using Azure PowerShell, complete the following steps. If needed, first [install the Azure PowerShell module and connect to your Azure subscription](/powershell/azure/install-azure-powershell).
> [!IMPORTANT]
-> Microsoft Entra DS audits aren't retroactive. You can't retrieve or replay events from the past. Microsoft Entra DS can only send events that occur after audits are enabled.
+> Domain Services audits aren't retroactive. You can't retrieve or replay events from the past. Domain Services can only send events that occur after audits are enabled.
1. Authenticate to your Azure subscription using the [Connect-AzAccount](/powershell/module/Az.Accounts/Connect-AzAccount) cmdlet. When prompted, enter your account credentials.
* **Azure Log Analytics workspaces** - [Create a Log Analytics workspace with Azure PowerShell](../azure-monitor/logs/powershell-workspace-configuration.md).
* **Azure storage** - [Create a storage account using Azure PowerShell](../storage/common/storage-account-create.md?tabs=azure-powershell)
- * **Azure event hubs** - [Create an event hub using Azure PowerShell](../event-hubs/event-hubs-quickstart-powershell.md). You may also need to use the [New-AzEventHubAuthorizationRule](/powershell/module/az.eventhub/new-azeventhubauthorizationrule) cmdlet to create an authorization rule that grants Microsoft Entra DS permissions to the event hub *namespace*. The authorization rule must include the **Manage**, **Listen**, and **Send** rights.
+ * **Azure event hubs** - [Create an event hub using Azure PowerShell](../event-hubs/event-hubs-quickstart-powershell.md). You may also need to use the [New-AzEventHubAuthorizationRule](/powershell/module/az.eventhub/new-azeventhubauthorizationrule) cmdlet to create an authorization rule that grants Domain Services permissions to the event hub *namespace*. The authorization rule must include the **Manage**, **Listen**, and **Send** rights.
> [!IMPORTANT]
> Ensure you set the authorization rule on the event hub namespace and not the event hub itself.
-1. Get the resource ID for your Microsoft Entra DS managed domain using the [Get-AzResource](/powershell/module/Az.Resources/Get-AzResource) cmdlet. Create a variable named *$aadds.ResourceId* to hold the value:
+1. Get the resource ID for your Domain Services managed domain using the [Get-AzResource](/powershell/module/Az.Resources/Get-AzResource) cmdlet. Create a variable named *$aadds* to hold the value:
```azurepowershell
$aadds = Get-AzResource -Name aaddsDomainName
```
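With the resource ID in hand, a diagnostic setting can then be created to stream audit categories to a destination. The following is a sketch only: it assumes *$workspace* holds an existing Log Analytics workspace object, and the category names shown are illustrative examples rather than a complete list, so verify both against your environment:

```azurepowershell
# Stream selected audit categories from the managed domain to Log Analytics.
# $workspace is assumed to be the output of Get-AzOperationalInsightsWorkspace;
# the category names are examples - check the available categories on your resource.
Set-AzDiagnosticSetting -ResourceId $aadds.ResourceId `
    -WorkspaceId $workspace.ResourceId `
    -Enabled $true `
    -Category @("AccountLogon","AccountManagement")
```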
Log Analytic workspaces let you view and analyze the security and DNS audit even
* [Get started with log queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md)
* [Create and share dashboards of Log Analytics data](../azure-monitor/visualize/tutorial-logs-dashboards.md)
-The following sample queries can be used to start analyzing audit events from Microsoft Entra DS.
+The following sample queries can be used to start analyzing audit events from Domain Services.
### Sample query 1
AADDomainServicesAccountLogon
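The body of the sample query is truncated above. As an illustration, a query of the following shape counts recent account logon events; *TimeGenerated* is a standard Log Analytics column, but the *EventID* grouping column is an assumption based on the event ID tables in this article, so check the table schema in your workspace:

```kusto
// Count account logon events from the last hour, grouped by event ID.
// EventID is assumed from the "Event IDs per category" tables; verify in your schema.
AADDomainServicesAccountLogon
| where TimeGenerated > ago(1h)
| summarize count() by EventID
```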
## Audit security and DNS event categories
-Microsoft Entra DS security and DNS audits align with traditional auditing for traditional AD DS domain controllers. In hybrid environments, you can reuse existing audit patterns so the same logic may be used when analyzing the events. Depending on the scenario you need to troubleshoot or analyze, the different audit event categories need to be targeted.
+Domain Services security and DNS audits align with traditional auditing for AD DS domain controllers. In hybrid environments, you can reuse existing audit patterns, so the same logic can be used when analyzing the events. Depending on the scenario you need to troubleshoot or analyze, target the appropriate audit event categories.
The following audit event categories are available:
## Event IDs per category
- Microsoft Entra DS security and DNS audits record the following event IDs when the specific action triggers an auditable event:
+ Domain Services security and DNS audits record the following event IDs when the specific action triggers an auditable event:
| Event Category Name | Event IDs |
|:|:|
active-directory-domain-services Suspension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/suspension.md
Title: Suspended domains in Microsoft Entra Domain Services | Microsoft Docs
-description: Learn about the different health states for a Microsoft Entra DS managed domain and how to restore a suspended domain.
+description: Learn about the different health states for a Microsoft Entra Domain Services managed domain and how to restore a suspended domain.
# Understand the health states and resolve suspended domains in Microsoft Entra Domain Services
-When Microsoft Entra Domain Services (Microsoft Entra DS) is unable to service a managed domain for a long period of time, it puts the managed domain into a suspended state. If a managed domain remains in a suspended state, it's automatically deleted. To keep your Microsoft Entra DS managed domain healthy and avoid suspension, resolve any alerts as quickly as you can.
+When Microsoft Entra Domain Services is unable to service a managed domain for a long period of time, it puts the managed domain into a suspended state. If a managed domain remains in a suspended state, it's automatically deleted. To keep your Domain Services managed domain healthy and avoid suspension, resolve any alerts as quickly as you can.
This article explains why managed domains are suspended, and how to recover a suspended domain.
When a managed domain is in the *Needs Attention* state, the Azure platform may
A managed domain enters the **Suspended** state for one of the following reasons:

* One or more critical alerts haven't been resolved in 15 days.
- * Critical alerts can be caused by a misconfiguration that blocks access to resources that are needed by Microsoft Entra DS. For example, the alert [AADDS104: Network Error][alert-nsg] has been unresolved for more than 15 days in the managed domain.
+ * Critical alerts can be caused by a misconfiguration that blocks access to resources that are needed by Domain Services. For example, the alert [AADDS104: Network Error][alert-nsg] has been unresolved for more than 15 days in the managed domain.
* There's a billing issue with the Azure subscription or the Azure subscription has expired.

Managed domains are suspended when the Azure platform can't manage, monitor, patch, or back up the domain. A managed domain stays in a *Suspended* state for 15 days. To maintain access to the managed domain, resolve critical alerts immediately.
The following behavior is experienced when a managed domain is in the *Suspended
### How do you know if your managed domain is suspended?
-You see an [alert][resolve-alerts] on the Microsoft Entra DS Health page in the Microsoft Entra admin center that notes the domain is suspended. The state of the domain also shows *Suspended*.
+You see an [alert][resolve-alerts] on the Domain Services Health page in the Microsoft Entra admin center that notes the domain is suspended. The state of the domain also shows *Suspended*.
### Restore a suspended domain
If a managed domain stays in the *Suspended* state for 15 days, it's deleted. Th
When a managed domain enters the *Deleted* state, the following behavior is seen:

* All resources and backups for the managed domain are deleted.
-* You can't restore the managed domain. You must create a replacement managed domain to reuse Microsoft Entra DS.
+* You can't restore the managed domain. You must create a replacement managed domain to reuse Domain Services.
* After it's deleted, you aren't billed for the managed domain.

## Next steps
active-directory-domain-services Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/synchronization.md
Title: How synchronization works in Microsoft Entra Domain Services | Microsoft Docs
-description: Learn how the synchronization process works for objects and credentials from a Microsoft Entra tenant or on-premises Active Directory Domain Services environment to a Microsoft Entra Domain Services managed domain.
+description: Learn how the synchronization process works between Microsoft Entra or an on-premises environment to a Microsoft Entra Domain Services managed domain.
Previously updated : 04/03/2023 Last updated : 09/21/2023

# How objects and credentials are synchronized in a Microsoft Entra Domain Services managed domain
-Objects and credentials in a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain can either be created locally within the domain, or synchronized from a Microsoft Entra tenant. When you first deploy Microsoft Entra DS, an automatic one-way synchronization is configured and started to replicate the objects from Microsoft Entra ID. This one-way synchronization continues to run in the background to keep the Microsoft Entra DS managed domain up-to-date with any changes from Microsoft Entra ID. No synchronization occurs from Microsoft Entra DS back to Microsoft Entra ID.
+Objects and credentials in a Microsoft Entra Domain Services managed domain can either be created locally within the domain, or synchronized from a Microsoft Entra tenant. When you first deploy Domain Services, an automatic one-way synchronization is configured and started to replicate the objects from Microsoft Entra ID. This one-way synchronization continues to run in the background to keep the Domain Services managed domain up-to-date with any changes from Microsoft Entra ID. No synchronization occurs from Domain Services back to Microsoft Entra ID.
In a hybrid environment, objects and credentials from an on-premises AD DS domain can be synchronized to Microsoft Entra ID using Microsoft Entra Connect. Once those objects are successfully synchronized to Microsoft Entra ID, the automatic background sync then makes those objects and credentials available to applications using the managed domain.
-The following diagram illustrates how synchronization works between Microsoft Entra DS, Microsoft Entra ID, and an optional on-premises AD DS environment:
+The following diagram illustrates how synchronization works between Domain Services, Microsoft Entra ID, and an optional on-premises AD DS environment:
![Synchronization overview for a Microsoft Entra Domain Services managed domain](./media/active-directory-domain-services-design-guide/sync-topology.png)

<a name='synchronization-from-azure-ad-to-azure-ad-ds'></a>
-## Synchronization from Microsoft Entra ID to Microsoft Entra DS
+## Synchronization from Microsoft Entra ID to Domain Services
-User accounts, group memberships, and credential hashes are synchronized one way from Microsoft Entra ID to Microsoft Entra DS. This synchronization process is automatic. You don't need to configure, monitor, or manage this synchronization process. The initial synchronization may take a few hours to a couple of days, depending on the number of objects in the Microsoft Entra directory. After the initial synchronization is complete, changes that are made in Microsoft Entra ID, such as password or attribute changes, are then automatically synchronized to Microsoft Entra DS.
+User accounts, group memberships, and credential hashes are synchronized one way from Microsoft Entra ID to Domain Services. This synchronization process is automatic. You don't need to configure, monitor, or manage this synchronization process. The initial synchronization may take a few hours to a couple of days, depending on the number of objects in the Microsoft Entra directory. After the initial synchronization is complete, changes that are made in Microsoft Entra ID, such as password or attribute changes, are then automatically synchronized to Domain Services.
-When a user is created in Microsoft Entra ID, they're not synchronized to Microsoft Entra DS until they change their password in Microsoft Entra ID. This password change process causes the password hashes for Kerberos and NTLM authentication to be generated and stored in Microsoft Entra ID. The password hashes are needed to successfully authenticate a user in Microsoft Entra DS.
+When a user is created in Microsoft Entra ID, they're not synchronized to Domain Services until they change their password in Microsoft Entra ID. This password change process causes the password hashes for Kerberos and NTLM authentication to be generated and stored in Microsoft Entra ID. The password hashes are needed to successfully authenticate a user in Domain Services.
-The synchronization process is one-way by design. There's no reverse synchronization of changes from Microsoft Entra DS back to Microsoft Entra ID. A managed domain is largely read-only except for custom OUs that you can create. You can't make changes to user attributes, user passwords, or group memberships within a managed domain.
+The synchronization process is one-way by design. There's no reverse synchronization of changes from Domain Services back to Microsoft Entra ID. A managed domain is largely read-only except for custom OUs that you can create. You can't make changes to user attributes, user passwords, or group memberships within a managed domain.
## Scoped synchronization and group filter
You can scope synchronization to only user accounts that originated in the cloud
<a name='attribute-synchronization-and-mapping-to-azure-ad-ds'></a>
-## Attribute synchronization and mapping to Microsoft Entra DS
+## Attribute synchronization and mapping to Domain Services
-The following table lists some common attributes and how they're synchronized to Microsoft Entra DS.
+The following table lists some common attributes and how they're synchronized to Domain Services.
-| Attribute in Microsoft Entra DS | Source | Notes |
+| Attribute in Domain Services | Source | Notes |
|: |: |: |
-| UPN | User's *UPN* attribute in Microsoft Entra tenant | The UPN attribute from the Microsoft Entra tenant is synchronized as-is to Microsoft Entra DS. The most reliable way to sign in to a managed domain is using the UPN. |
+| UPN | User's *UPN* attribute in Microsoft Entra tenant | The UPN attribute from the Microsoft Entra tenant is synchronized as-is to Domain Services. The most reliable way to sign in to a managed domain is using the UPN. |
| SAMAccountName | User's *mailNickname* attribute in Microsoft Entra tenant or autogenerated | The *SAMAccountName* attribute is sourced from the *mailNickname* attribute in the Microsoft Entra tenant. If multiple user accounts have the same *mailNickname* attribute, the *SAMAccountName* is autogenerated. If the user's *mailNickname* or *UPN* prefix is longer than 20 characters, the *SAMAccountName* is autogenerated to meet the 20 character limit on *SAMAccountName* attributes. |
| Passwords | User's password from the Microsoft Entra tenant | Legacy password hashes required for NTLM or Kerberos authentication are synchronized from the Microsoft Entra tenant. If the Microsoft Entra tenant is configured for hybrid synchronization using Microsoft Entra Connect, these password hashes are sourced from the on-premises AD DS environment. |
-| Primary user/group SID | Autogenerated | The primary SID for user/group accounts is autogenerated in Microsoft Entra DS. This attribute doesn't match the primary user/group SID of the object in an on-premises AD DS environment. This mismatch is because the managed domain has a different SID namespace than the on-premises AD DS domain. |
-| SID history for users and groups | On-premises primary user and group SID | The *SidHistory* attribute for users and groups in Microsoft Entra DS is set to match the corresponding primary user or group SID in an on-premises AD DS environment. This feature helps make lift-and-shift of on-premises applications to Microsoft Entra DS easier as you don't need to re-ACL resources. |
+| Primary user/group SID | Autogenerated | The primary SID for user/group accounts is autogenerated in Domain Services. This attribute doesn't match the primary user/group SID of the object in an on-premises AD DS environment. This mismatch is because the managed domain has a different SID namespace than the on-premises AD DS domain. |
+| SID history for users and groups | On-premises primary user and group SID | The *SidHistory* attribute for users and groups in Domain Services is set to match the corresponding primary user or group SID in an on-premises AD DS environment. This feature helps make lift-and-shift of on-premises applications to Domain Services easier as you don't need to re-ACL resources. |
> [!TIP]
> **Sign in to the managed domain using the UPN format**
>
> The *SAMAccountName* attribute, such as `AADDSCONTOSO\driley`, may be auto-generated for some user accounts in a managed domain. Users' auto-generated *SAMAccountName* may differ from their UPN prefix, so it isn't always a reliable way to sign in.
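The autogeneration rules described above can be illustrated with a short Python sketch. This is illustrative only: the actual scheme the service uses to generate a unique value isn't documented, so the fallback below (a hypothetical `derive_samaccountname` helper that uses a random suffix) is an assumption.

```python
import uuid

def derive_samaccountname(mail_nickname: str, taken: set) -> str:
    """Sketch of the documented rules: use mailNickname unless it collides
    or exceeds 20 characters, in which case a value is autogenerated.
    The real autogeneration scheme is undocumented; this fallback is hypothetical."""
    if len(mail_nickname) <= 20 and mail_nickname.lower() not in taken:
        return mail_nickname
    # Hypothetical autogenerated fallback, capped at the 20-character limit.
    return ("u" + uuid.uuid4().hex)[:20]

print(derive_samaccountname("driley", set()))       # driley
print(len(derive_samaccountname("a" * 25, set())))  # 20
```

The takeaway matches the tip: because an autogenerated value bears no relation to the UPN prefix, the UPN remains the reliable sign-in format.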
The following table lists some common attributes and how they're synchronized to
### Attribute mapping for user accounts
-The following table illustrates how specific attributes for user objects in Microsoft Entra ID are synchronized to corresponding attributes in Microsoft Entra DS.
+The following table illustrates how specific attributes for user objects in Microsoft Entra ID are synchronized to corresponding attributes in Domain Services.
-| User attribute in Microsoft Entra ID | User attribute in Microsoft Entra DS |
+| User attribute in Microsoft Entra ID | User attribute in Domain Services |
|: |: |
| accountEnabled |userAccountControl (sets or clears the ACCOUNT_DISABLED bit) |
| city |l |
The following table illustrates how specific attributes for user objects in Micr
### Attribute mapping for groups
-The following table illustrates how specific attributes for group objects in Microsoft Entra ID are synchronized to corresponding attributes in Microsoft Entra DS.
+The following table illustrates how specific attributes for group objects in Microsoft Entra ID are synchronized to corresponding attributes in Domain Services.
-| Group attribute in Microsoft Entra ID | Group attribute in Microsoft Entra DS |
+| Group attribute in Microsoft Entra ID | Group attribute in Domain Services |
|: |: |
| displayName |displayName |
| displayName |SAMAccountName (may sometimes be autogenerated) |
The following table illustrates how specific attributes for group objects in Mic
<a name='synchronization-from-on-premises-ad-ds-to-azure-ad-and-azure-ad-ds'></a>
-## Synchronization from on-premises AD DS to Microsoft Entra ID and Microsoft Entra DS
+## Synchronization from on-premises AD DS to Microsoft Entra ID and Domain Services
-Microsoft Entra Connect is used to synchronize user accounts, group memberships, and credential hashes from an on-premises AD DS environment to Microsoft Entra ID. Attributes of user accounts such as the UPN and on-premises security identifier (SID) are synchronized. To sign in using Microsoft Entra DS, legacy password hashes required for NTLM and Kerberos authentication are also synchronized to Microsoft Entra ID.
+Microsoft Entra Connect is used to synchronize user accounts, group memberships, and credential hashes from an on-premises AD DS environment to Microsoft Entra ID. Attributes of user accounts such as the UPN and on-premises security identifier (SID) are synchronized. To sign in using Domain Services, legacy password hashes required for NTLM and Kerberos authentication are also synchronized to Microsoft Entra ID.
> [!IMPORTANT]
> Microsoft Entra Connect should only be installed and configured for synchronization with on-premises AD DS environments. It's not supported to install Microsoft Entra Connect in a managed domain to synchronize objects back to Microsoft Entra ID.
Many organizations have a fairly complex on-premises AD DS environment that incl
Microsoft Entra ID has a much simpler and flat namespace. To enable users to reliably access applications secured by Microsoft Entra ID, resolve UPN conflicts across user accounts in different forests. Managed domains use a flat OU structure, similar to Microsoft Entra ID. All user accounts and groups are stored in the *AADDC Users* container, despite being synchronized from different on-premises domains or forests, even if you've configured a hierarchical OU structure on-premises. The managed domain flattens any hierarchical OU structures.
-As previously detailed, there's no synchronization from Microsoft Entra DS back to Microsoft Entra ID. You can [create a custom Organizational Unit (OU)](create-ou.md) in Microsoft Entra DS and then users, groups, or service accounts within those custom OUs. None of the objects created in custom OUs are synchronized back to Microsoft Entra ID. These objects are available only within the managed domain, and aren't visible using Azure AD PowerShell cmdlets, Microsoft Graph API, or using the Microsoft Entra management UI.
+As previously detailed, there's no synchronization from Domain Services back to Microsoft Entra ID. You can [create a custom Organizational Unit (OU)](create-ou.md) in Domain Services and then create users, groups, or service accounts within those custom OUs. None of the objects created in custom OUs are synchronized back to Microsoft Entra ID. These objects are available only within the managed domain, and aren't visible using Microsoft Graph PowerShell cmdlets, Microsoft Graph API, or the Microsoft Entra admin center.
<a name='what-isnt-synchronized-to-azure-ad-ds'></a>
-## What isn't synchronized to Microsoft Entra DS
+## What isn't synchronized to Domain Services
-The following objects or attributes aren't synchronized from an on-premises AD DS environment to Microsoft Entra ID or Microsoft Entra DS:
+The following objects or attributes aren't synchronized from an on-premises AD DS environment to Microsoft Entra ID or Domain Services:
-* **Excluded attributes:** You can choose to exclude certain attributes from synchronizing to Microsoft Entra ID from an on-premises AD DS environment using Microsoft Entra Connect. These excluded attributes aren't then available in Microsoft Entra DS.
-* **Group Policies:** Group Policies configured in an on-premises AD DS environment aren't synchronized to Microsoft Entra DS.
-* **Sysvol folder:** The contents of the *Sysvol* folder in an on-premises AD DS environment aren't synchronized to Microsoft Entra DS.
-* **Computer objects:** Computer objects for computers joined to an on-premises AD DS environment aren't synchronized to Microsoft Entra DS. These computers don't have a trust relationship with the managed domain and only belong to the on-premises AD DS environment. In Microsoft Entra DS, only computer objects for computers that have explicitly domain-joined to the managed domain are shown.
-* **SidHistory attributes for users and groups:** The primary user and primary group SIDs from an on-premises AD DS environment are synchronized to Microsoft Entra DS. However, existing *SidHistory* attributes for users and groups aren't synchronized from the on-premises AD DS environment to Microsoft Entra DS.
-* **Organization Units (OU) structures:** Organizational Units defined in an on-premises AD DS environment don't synchronize to Microsoft Entra DS. There are two built-in OUs in Microsoft Entra DS - one for users, and one for computers. The managed domain has a flat OU structure. You can choose to [create a custom OU in your managed domain](create-ou.md).
+* **Excluded attributes:** You can choose to exclude certain attributes from synchronizing to Microsoft Entra ID from an on-premises AD DS environment using Microsoft Entra Connect. These excluded attributes aren't then available in Domain Services.
+* **Group Policies:** Group Policies configured in an on-premises AD DS environment aren't synchronized to Domain Services.
+* **Sysvol folder:** The contents of the *Sysvol* folder in an on-premises AD DS environment aren't synchronized to Domain Services.
+* **Computer objects:** Computer objects for computers joined to an on-premises AD DS environment aren't synchronized to Domain Services. These computers don't have a trust relationship with the managed domain and only belong to the on-premises AD DS environment. In Domain Services, only computer objects for computers that have explicitly domain-joined to the managed domain are shown.
+* **SidHistory attributes for users and groups:** The primary user and primary group SIDs from an on-premises AD DS environment are synchronized to Domain Services. However, existing *SidHistory* attributes for users and groups aren't synchronized from the on-premises AD DS environment to Domain Services.
+* **Organization Units (OU) structures:** Organizational Units defined in an on-premises AD DS environment don't synchronize to Domain Services. There are two built-in OUs in Domain Services - one for users, and one for computers. The managed domain has a flat OU structure. You can choose to [create a custom OU in your managed domain](create-ou.md).
## Password hash synchronization and security considerations
-When you enable Microsoft Entra DS, legacy password hashes for NTLM and Kerberos authentication are required. Microsoft Entra ID doesn't store clear-text passwords, so these hashes can't be automatically generated for existing user accounts. NTLM and Kerberos compatible password hashes are always stored in an encrypted manner in Microsoft Entra ID.
+When you enable Domain Services, legacy password hashes for NTLM and Kerberos authentication are required. Microsoft Entra ID doesn't store clear-text passwords, so these hashes can't be automatically generated for existing user accounts. NTLM and Kerberos compatible password hashes are always stored in an encrypted manner in Microsoft Entra ID.
-The encryption keys are unique to each Microsoft Entra tenant. These hashes are encrypted such that only Microsoft Entra DS has access to the decryption keys. No other service or component in Microsoft Entra ID has access to the decryption keys.
+The encryption keys are unique to each Microsoft Entra tenant. These hashes are encrypted such that only Domain Services has access to the decryption keys. No other service or component in Microsoft Entra ID has access to the decryption keys.
-Legacy password hashes are then synchronized from Microsoft Entra ID into the domain controllers for a managed domain. The disks for these managed domain controllers in Microsoft Entra DS are encrypted at rest. These password hashes are stored and secured on these domain controllers similar to how passwords are stored and secured in an on-premises AD DS environment.
+Legacy password hashes are then synchronized from Microsoft Entra ID into the domain controllers for a managed domain. The disks for these managed domain controllers in Domain Services are encrypted at rest. These password hashes are stored and secured on these domain controllers similar to how passwords are stored and secured in an on-premises AD DS environment.
-For cloud-only Microsoft Entra environments, [users must reset/change their password](tutorial-create-instance.md#enable-user-accounts-for-azure-ad-ds) in order for the required password hashes to be generated and stored in Microsoft Entra ID. For any cloud user account created in Microsoft Entra ID after enabling Microsoft Entra Domain Services, the password hashes are generated and stored in the NTLM and Kerberos compatible formats. All cloud user accounts must change their password before they're synchronized to Microsoft Entra DS.
+For cloud-only Microsoft Entra environments, [users must reset/change their password](tutorial-create-instance.md#enable-user-accounts-for-azure-ad-ds) in order for the required password hashes to be generated and stored in Microsoft Entra ID. For any cloud user account created in Microsoft Entra ID after enabling Microsoft Entra Domain Services, the password hashes are generated and stored in the NTLM and Kerberos compatible formats. All cloud user accounts must change their password before they're synchronized to Domain Services.
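To see why the clear-text password must be present at reset/change time, consider how a Kerberos-compatible AES key is derived. The Python sketch below shows only the PBKDF2 step of the standard Kerberos string-to-key function from RFC 3962 (the full derivation adds a final DK key-folding step, and the exact process the service uses internally isn't documented here). The salt and password values are hypothetical. A stored one-way hash can't be converted into this key; only the original password can.

```python
import hashlib

def aes_string_to_key_base(password: str, salt: str, iterations: int = 4096) -> bytes:
    """PBKDF2-HMAC-SHA1 step of the Kerberos AES string-to-key (RFC 3962).
    The real derivation applies an additional DK() folding step afterwards."""
    return hashlib.pbkdf2_hmac(
        "sha1", password.encode("utf-8"), salt.encode("utf-8"), iterations, 32
    )

# The salt is conventionally realm + principal name; these values are hypothetical.
key = aes_string_to_key_base("P@ssw0rd!", "AADDSCONTOSO.COMdriley")
print(len(key))  # 32 bytes of AES-256 key material
```

Because the derivation starts from the password itself, the required hashes can only be generated when the user next changes or resets their password.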
For hybrid user accounts synced from on-premises AD DS environment using Microsoft Entra Connect, you must [configure Microsoft Entra Connect to synchronize password hashes in the NTLM and Kerberos compatible formats](tutorial-configure-password-hash-sync.md).
For hybrid user accounts synced from on-premises AD DS environment using Microso
For more information on the specifics of password synchronization, see [How password hash synchronization works with Microsoft Entra Connect](../active-directory/hybrid/how-to-connect-password-hash-synchronization.md?context=/azure/active-directory-domain-services/context/azure-ad-ds-context).
-To get started with Microsoft Entra DS, [create a managed domain](tutorial-create-instance.md).
+To get started with Domain Services, [create a managed domain](tutorial-create-instance.md).
active-directory-domain-services Template Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/template-create-instance.md
# Create a Microsoft Entra Domain Services managed domain using an Azure Resource Manager template
-Microsoft Entra Domain Services (Microsoft Entra DS) provides managed domain services such as domain join, group policy, LDAP, Kerberos/NTLM authentication that is fully compatible with Windows Server Active Directory. You consume these domain services without deploying, managing, and patching domain controllers yourself. Microsoft Entra DS integrates with your existing Microsoft Entra tenant. This integration lets users sign in using their corporate credentials, and you can use existing groups and user accounts to secure access to resources.
+Microsoft Entra Domain Services provides managed domain services such as domain join, group policy, LDAP, and Kerberos/NTLM authentication that's fully compatible with Windows Server Active Directory. You consume these domain services without deploying, managing, and patching domain controllers yourself. Domain Services integrates with your existing Microsoft Entra tenant. This integration lets users sign in using their corporate credentials, and you can use existing groups and user accounts to secure access to resources.
This article shows you how to create a managed domain using an Azure Resource Manager template. Supporting resources are created using Azure PowerShell.
To complete this article, you need the following resources:
* Install and configure Azure AD PowerShell.
  * If needed, follow the instructions to [install the Azure AD PowerShell module and connect to Microsoft Entra ID](/powershell/azure/active-directory/install-adv2).
  * Make sure that you sign in to your Microsoft Entra tenant using the [Connect-AzureAD][Connect-AzureAD] cmdlet.
-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Microsoft Entra DS.
-* You need Domain Services Contributor Azure role to create the required Microsoft Entra DS resources.
+* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services.
+* You need the Domain Services Contributor Azure role to create the required Domain Services resources.
## DNS naming requirements
-When you create a Microsoft Entra DS managed domain, you specify a DNS name. There are some considerations when you choose this DNS name:
+When you create a Domain Services managed domain, you specify a DNS name. There are some considerations when you choose this DNS name:
* **Built-in domain name:** By default, the built-in domain name of the directory is used (a *.onmicrosoft.com* suffix). If you wish to enable secure LDAP access to the managed domain over the internet, you can't create a digital certificate to secure the connection with this default domain. Microsoft owns the *.onmicrosoft.com* domain, so a Certificate Authority (CA) won't issue a certificate.
* **Custom domain names:** The most common approach is to specify a custom domain name, typically one that you already own and is routable. When you use a routable, custom domain, traffic can correctly flow as needed to support your applications.
The following DNS name restrictions also apply:
## Create required Microsoft Entra resources
-Microsoft Entra DS requires a service principal and a Microsoft Entra group. These resources let the managed domain synchronize data, and define which users have administrative permissions in the managed domain.
+Domain Services requires a service principal and a Microsoft Entra group. These resources let the managed domain synchronize data, and define which users have administrative permissions in the managed domain.
First, register the Microsoft Entra Domain Services resource provider using the [Register-AzResourceProvider][Register-AzResourceProvider] cmdlet:
First, register the Microsoft Entra Domain Services resource provider using the
```powershell
Register-AzResourceProvider -ProviderNamespace Microsoft.AAD
```
-Create a Microsoft Entra service principal using the [New-AzureADServicePrincipal][New-AzureADServicePrincipal] cmdlet for Microsoft Entra DS to communicate and authenticate itself. A specific application ID is used named *Domain Controller Services* with an ID of *2565bd9d-da50-47d4-8b85-4c97f669dc36* for Azure Global. For other Azure clouds, search for AppId value *6ba9a5d4-8456-4118-b521-9c5ca10cdf84*.
+Create a Microsoft Entra service principal using the [New-AzureADServicePrincipal][New-AzureADServicePrincipal] cmdlet for Domain Services to communicate and authenticate itself. A specific application ID is used named *Domain Controller Services* with an ID of *2565bd9d-da50-47d4-8b85-4c97f669dc36* for Azure Global. For other Azure clouds, search for AppId value *6ba9a5d4-8456-4118-b521-9c5ca10cdf84*.
```powershell
New-AzureADServicePrincipal -AppId "2565bd9d-da50-47d4-8b85-4c97f669dc36"
```
```powershell
New-AzResourceGroup `
  -Location "WestUS"
```
-If you choose a region that supports Availability Zones, the Microsoft Entra DS resources are distributed across zones for additional redundancy. Availability Zones are unique physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions.
+If you choose a region that supports Availability Zones, the Domain Services resources are distributed across zones for additional redundancy. Availability Zones are unique physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions.
-There's nothing for you to configure for Microsoft Entra DS to be distributed across zones. The Azure platform automatically handles the zone distribution of resources. For more information and to see region availability, see [What are Availability Zones in Azure?][availability-zones].
+There's nothing for you to configure for Domain Services to be distributed across zones. The Azure platform automatically handles the zone distribution of resources. For more information and to see region availability, see [What are Availability Zones in Azure?][availability-zones].
<a name='resource-definition-for-azure-ad-ds'></a>
-## Resource definition for Microsoft Entra DS
+## Resource definition for Domain Services
As part of the Resource Manager resource definition, the following configuration parameters are required:

| Parameter | Value |
|-|-|
| domainName | The DNS domain name for your managed domain, taking into consideration the previous points on naming prefixes and conflicts. |
-| filteredSync | Microsoft Entra DS lets you synchronize *all* users and groups available in Microsoft Entra ID, or a *scoped* synchronization of only specific groups.<br /><br /> For more information about scoped synchronization, see [Microsoft Entra Domain Services scoped synchronization][scoped-sync].|
+| filteredSync | Domain Services lets you synchronize *all* users and groups available in Microsoft Entra ID, or a *scoped* synchronization of only specific groups.<br /><br /> For more information about scoped synchronization, see [Microsoft Entra Domain Services scoped synchronization][scoped-sync].|
| notificationSettings | If there are any alerts generated in the managed domain, email notifications can be sent out. <br /><br />*Global administrators* of the Azure tenant and members of the *AAD DC Administrators* group can be *Enabled* for these notifications.<br /><br /> If desired, you can add additional recipients for notifications when there are alerts that require attention.|
-| domainConfigurationType | By default, a managed domain is created as a *User* forest. This type of forest synchronizes all objects from Microsoft Entra ID, including any user accounts created in an on-premises AD DS environment. You don't need to specify a *domainConfiguration* value to create a user forest.<br /><br /> A *Resource* forest only synchronizes users and groups created directly in Microsoft Entra ID. Set the value to *ResourceTrusting* to create a resource forest.<br /><br />For more information on *Resource* forests, including why you may use one and how to create forest trusts with on-premises AD DS domains, see [Microsoft Entra DS resource forests overview][resource-forests].|
+| domainConfigurationType | By default, a managed domain is created as a *User* forest. This type of forest synchronizes all objects from Microsoft Entra ID, including any user accounts created in an on-premises AD DS environment. You don't need to specify a *domainConfiguration* value to create a user forest.<br /><br /> A *Resource* forest only synchronizes users and groups created directly in Microsoft Entra ID. Set the value to *ResourceTrusting* to create a resource forest.<br /><br />For more information on *Resource* forests, including why you may use one and how to create forest trusts with on-premises AD DS domains, see [Domain Services resource forests overview][resource-forests].|
The following condensed parameters definition shows how these values are declared. A user forest named *aaddscontoso.com* is created with all users from Microsoft Entra ID synchronized to the managed domain:
When the Microsoft Entra admin center shows that the managed domain has finished
* Update DNS settings for the virtual network so virtual machines can find the managed domain for domain join or authentication.
* To configure DNS, select your managed domain in the portal. On the **Overview** window, you are prompted to automatically configure these DNS settings.
-* [Enable password synchronization to Microsoft Entra DS](tutorial-create-instance.md#enable-user-accounts-for-azure-ad-ds) so end users can sign in to the managed domain using their corporate credentials.
+* [Enable password synchronization to Domain Services](tutorial-create-instance.md#enable-user-accounts-for-azure-ad-ds) so end users can sign in to the managed domain using their corporate credentials.
## Next steps
active-directory-domain-services Troubleshoot Account Lockout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-account-lockout.md
Previously updated : 01/29/2023 Last updated : 09/21/2023 #Customer intent: As a directory administrator, I want to troubleshoot why user accounts are locked out in a Microsoft Entra Domain Services managed domain.
# Troubleshoot account lockout problems with a Microsoft Entra Domain Services managed domain
-To prevent repeated malicious sign-in attempts, a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain locks accounts after a defined threshold. This account lockout can also happen by accident without a sign-in attack incident. For example, if a user repeatedly enters the wrong password or a service attempts to use an old password, the account gets locked out.
+To prevent repeated malicious sign-in attempts, a Microsoft Entra Domain Services managed domain locks accounts after a defined threshold. This account lockout can also happen by accident without a sign-in attack incident. For example, if a user repeatedly enters the wrong password or a service attempts to use an old password, the account gets locked out.
This troubleshooting article outlines why account lockouts happen and how you can configure the behavior, and how to review security audits to troubleshoot lockout events. ## What is an account lockout?
-A user account in a Microsoft Entra DS managed domain is locked out when a defined threshold for unsuccessful sign-in attempts has been met. This account lockout behavior is designed to protect you from repeated brute-force sign-in attempts that may indicate an automated digital attack.
+A user account in a Domain Services managed domain is locked out when a defined threshold for unsuccessful sign-in attempts has been met. This account lockout behavior is designed to protect you from repeated brute-force sign-in attempts that may indicate an automated digital attack.
**By default, if there are 5 bad password attempts in 2 minutes, the account is locked out for 30 minutes.**
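Conceptually, the default policy behaves like a sliding-window counter. The following Python sketch models it with the stated defaults (5 attempts within 2 minutes triggers a 30-minute lockout); the managed domain enforces this server-side, so this code is only a mental model, not the service's implementation.

```python
from collections import deque

class LockoutPolicy:
    """Sliding-window lockout model: `threshold` bad attempts within
    `window` seconds lock the account for `duration` seconds.
    Illustrative only -- the managed domain enforces this server-side."""
    def __init__(self, threshold=5, window=120, duration=1800):
        self.threshold, self.window, self.duration = threshold, window, duration
        self.failures = deque()   # timestamps of recent bad attempts
        self.locked_until = 0.0

    def register_failure(self, now: float) -> bool:
        """Record a bad password attempt; return True if the account is locked."""
        if now < self.locked_until:
            return True  # still within the lockout period
        self.failures.append(now)
        while self.failures and now - self.failures[0] > self.window:
            self.failures.popleft()  # drop attempts outside the window
        if len(self.failures) >= self.threshold:
            self.locked_until = now + self.duration
            self.failures.clear()
            return True
        return False

p = LockoutPolicy()
states = [p.register_failure(t) for t in [0, 10, 20, 30, 40]]
print(states)  # [False, False, False, False, True] -- fifth attempt locks
```

This also shows why a service retrying an old password can lock an account without any attack: the counter doesn't distinguish malicious attempts from misconfigured clients.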
Fine-grained password policies (FGPPs) let you apply specific restrictions for p
Policies are distributed through group association in the managed domain, and any changes you make are applied at the next user sign-in. Changing the policy doesn't unlock a user account that's already locked out.
-For more information on fine-grained password policies, and the differences between users created directly in Microsoft Entra DS versus synchronized in from Microsoft Entra ID, see [Configure password and account lockout policies][configure-fgpp].
+For more information on fine-grained password policies, and the differences between users created directly in Domain Services versus synchronized in from Microsoft Entra ID, see [Configure password and account lockout policies][configure-fgpp].
## Common account lockout reasons
The most common reasons for an account to be locked out, without any malicious i
## Troubleshoot account lockouts with security audits
-To troubleshoot when account lockout events occur and where they're coming from, [enable security audits for Microsoft Entra DS][security-audit-events]. Audit events are only captured from the time you enable the feature. Ideally, you should enable security audits *before* there's an account lockout issue to troubleshoot. If a user account repeatedly has lockout issues, you can enable security audits ready for the next time the situation occurs.
+To troubleshoot when account lockout events occur and where they're coming from, [enable security audits for Domain Services][security-audit-events]. Audit events are only captured from the time you enable the feature. Ideally, you should enable security audits *before* there's an account lockout issue to troubleshoot. If a user account repeatedly has lockout issues, you can enable security audits ready for the next time the situation occurs.
Once you have enabled security audits, the following sample queries show you how to review *Account Lockout Events*, code *4740*.
AADDomainServicesAccountManagement
You may find that the "Source Workstation:" field is empty in 4776 and 4740 event details. This happens because the bad password attempt was made over a network logon via another device.
-For example, a RADIUS server can forward the authentication to Microsoft Entra DS.
+For example, a RADIUS server can forward the authentication to Domain Services.
03/04 19:07:29 [LOGON] [10752] contoso: SamLogon: Transitive Network logon of contoso\Nagappan.Veerappan from (via LOB11-RADIUS) Entered
active-directory-domain-services Troubleshoot Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-alerts.md
# Known issues: Common alerts and resolutions in Microsoft Entra Domain Services
-As a central part of identity and authentication for applications, Microsoft Entra Domain Services (Microsoft Entra DS) sometimes has problems. If you run into issues, there are some common alerts and associated troubleshooting steps to help you get things running again. At any time, you can also [open an Azure support request][azure-support] for additional troubleshooting assistance.
+As a central part of identity and authentication for applications, Microsoft Entra Domain Services sometimes has problems. If you run into issues, there are some common alerts and associated troubleshooting steps to help you get things running again. At any time, you can also [open an Azure support request][azure-support] for more troubleshooting help.
-This article provides troubleshooting information for common alerts in Microsoft Entra DS.
+This article provides troubleshooting information for common alerts in Domain Services.
## AADDS100: Missing directory
This article provides troubleshooting information for common alerts in Microsoft
### Resolution
-This error is usually caused when an Azure subscription is moved to a new Microsoft Entra directory and the old Microsoft Entra directory that's associated with Microsoft Entra DS is deleted.
+This error is usually caused when an Azure subscription is moved to a new Microsoft Entra directory and the old Microsoft Entra directory that's associated with Domain Services is deleted.
-This error is unrecoverable. To resolve the alert, [delete your existing managed domain](delete-aadds.md) and recreate it in your new directory. If you have trouble deleting the managed domain, [open an Azure support request][azure-support] for additional troubleshooting assistance.
+This error is unrecoverable. To resolve the alert, [delete your existing managed domain](delete-aadds.md) and recreate it in your new directory. If you have trouble deleting the managed domain, [open an Azure support request][azure-support] for more troubleshooting help.
## AADDS101: Azure AD B2C is running in this directory
This error is unrecoverable. To resolve the alert, [delete your existing managed
### Resolution
-Microsoft Entra DS automatically synchronizes with a Microsoft Entra directory. If the Microsoft Entra directory is configured for B2C, Microsoft Entra DS can't be deployed and synchronized.
+Domain Services automatically synchronizes with a Microsoft Entra directory. If the Microsoft Entra directory is configured for B2C, Domain Services can't be deployed and synchronized.
-To use Microsoft Entra DS, you must recreate your managed domain in a non-Azure AD B2C directory using the following steps:
+To use Domain Services, you must recreate your managed domain in a non-Azure AD B2C directory using the following steps:
1. [Delete the managed domain](delete-aadds.md) from your existing Microsoft Entra directory.
1. Create a new Microsoft Entra directory that isn't an Azure AD B2C directory.
The managed domain's health automatically updates itself within two hours and re
Before you begin, make sure you understand [private IP v4 address spaces](https://en.wikipedia.org/wiki/Private_network#Private_IPv4_address_spaces).
-Inside a virtual network, VMs can make requests to Azure resources in the same IP address range as configured for the subnet. If you configure a public IP address range for a subnet, requests routed within a virtual network may not reach the intended web resources. This configuration can lead to unpredictable errors with Microsoft Entra DS.
+Inside a virtual network, VMs can make requests to Azure resources in the same IP address range as configured for the subnet. If you configure a public IP address range for a subnet, requests routed within a virtual network may not reach the intended web resources. This configuration can lead to unpredictable errors with Domain Services.
> [!NOTE]
> If you own the IP address range on the internet that is configured in your virtual network, this alert can be ignored. However, Microsoft Entra Domain Services can't commit to the [SLA](https://azure.microsoft.com/support/legal/sla/active-directory-ds/v1_0/) with this configuration since it can lead to unpredictable errors.
To resolve this alert, delete your existing managed domain and recreate it in a virtual network with a private IP address range. This process is disruptive as the managed domain is unavailable and any custom resources you've created like OUs or service accounts are lost.

1. [Delete the managed domain](delete-aadds.md) from your directory.
-1. To update the virtual network IP address range, search for and select *Virtual network* in the Microsoft Entra admin center. Select the virtual network for Microsoft Entra DS that incorrectly has a public IP address range set.
+1. To update the virtual network IP address range, search for and select *Virtual network* in the Microsoft Entra admin center. Select the virtual network for Domain Services that incorrectly has a public IP address range set.
1. Under **Settings**, select *Address Space*.
-1. Update the address range by choosing the existing address range and editing it, or adding an additional address range. Make sure the new IP address range is in a private IP range. When ready, **Save** the changes.
+1. Update the address range by choosing the existing address range and editing it, or by adding an address range. Make sure the new IP address range is in a private IP range. When ready, **Save** the changes.
1. Select **Subnets** in the left-hand navigation.
-1. Choose the subnet you wish to edit, or create an additional subnet.
+1. Choose the subnet you wish to edit, or create another subnet.
1. Update or specify a private IP address range, then **Save** your changes.
1. [Create a replacement managed domain](tutorial-create-instance.md). Make sure you pick the updated virtual network subnet with a private IP address range.
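Before recreating the managed domain, it can help to sanity-check the new range. The following Python sketch (the CIDR values are illustrative, not from this article) tests whether a candidate address range falls entirely inside one of the RFC 1918 private blocks:

```python
import ipaddress

# The three RFC 1918 private IPv4 blocks.
RFC1918 = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(cidr: str) -> bool:
    """Return True only if the whole range sits inside a private block."""
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(block) for block in RFC1918)

print(is_rfc1918("10.0.1.0/24"))  # private range, safe for the subnet
print(is_rfc1918("8.8.8.0/24"))   # public range, would trigger this alert
```

A range that returns `False` here is public and should not be used for the managed domain's subnet unless you own it, as the note above explains.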
### Resolution
-Microsoft Entra DS requires an active subscription, and can't be moved to a different subscription. If the Azure subscription that the managed domain was associated with is deleted, you must recreate an Azure subscription and managed domain.
+Domain Services requires an active subscription, and can't be moved to a different subscription. If the Azure subscription that the managed domain was associated with is deleted, you must recreate an Azure subscription and managed domain.
1. [Create an Azure subscription](../cost-management-billing/manage/create-subscription.md).
1. [Delete the managed domain](delete-aadds.md) from your existing Microsoft Entra directory.
### Resolution
-Microsoft Entra DS requires an active subscription. If the Azure subscription that the managed domain was associated with isn't active, you must renew it to reactivate the subscription.
+Domain Services requires an active subscription. If the Azure subscription that the managed domain was associated with isn't active, you must renew it to reactivate the subscription.
1. [Renew your Azure subscription](../cost-management-billing/manage/subscription-disabled.md).
-2. Once the subscription is renewed, a Microsoft Entra DS notification lets you re-enable the managed domain.
+2. Once the subscription is renewed, a Domain Services notification lets you re-enable the managed domain.
When the managed domain is enabled again, the managed domain's health automatically updates itself within two hours and removes the alert.
### Resolution
-Microsoft Entra DS requires an active subscription, and can't be moved to a different subscription. If the Azure subscription that the managed domain was associated with is moved, move the subscription back to the previous directory, or [delete your managed domain](delete-aadds.md) from the existing directory and [create a replacement managed domain in the chosen subscription](tutorial-create-instance.md).
+Domain Services requires an active subscription, and can't be moved to a different subscription. If the Azure subscription that the managed domain was associated with is moved, move the subscription back to the previous directory, or [delete your managed domain](delete-aadds.md) from the existing directory and [create a replacement managed domain in the chosen subscription](tutorial-create-instance.md).
## AADDS109: Resources for your managed domain cannot be found
### Resolution
-Microsoft Entra DS creates additional resources to function properly, such as public IP addresses, virtual network interfaces, and a load balancer. If any of these resources are deleted, the managed domain is in an unsupported state and prevents the domain from being managed. For more information on these resources, see [Network resources used by Microsoft Entra DS](network-considerations.md#network-resources-used-by-azure-ad-ds).
+Domain Services creates resources to function properly, such as public IP addresses, virtual network interfaces, and a load balancer. If any of these resources are deleted, the managed domain is in an unsupported state and prevents the domain from being managed. For more information on these resources, see [Network resources used by Domain Services](network-considerations.md#network-resources-used-by-azure-ad-ds).
This alert is generated when one of these required resources is deleted. If the resource was deleted less than 4 hours ago, there's a chance that the Azure platform can automatically recreate the deleted resource. The following steps outline how to check the health status and timestamp for resource deletion:
### Resolution
-The virtual network subnet for Microsoft Entra DS needs sufficient IP addresses for the automatically created resources. This IP address space includes the need to create replacement resources if there's a maintenance event. To minimize the risk of running out of available IP addresses, don't deploy additional resources, such as your own VMs, into the same virtual network subnet as the managed domain.
+The virtual network subnet for Domain Services needs sufficient IP addresses for the automatically created resources. This IP address space includes the need to create replacement resources if there's a maintenance event. To minimize the risk of running out of available IP addresses, don't deploy other resources, such as your own VMs, into the same virtual network subnet as the managed domain.
-This error is unrecoverable. To resolve the alert, [delete your existing managed domain](delete-aadds.md) and recreate it. If you have trouble deleting the managed domain, [open an Azure support request][azure-support] for additional troubleshooting assistance.
+This error is unrecoverable. To resolve the alert, [delete your existing managed domain](delete-aadds.md) and recreate it. If you have trouble deleting the managed domain, [open an Azure support request][azure-support] for more help.
## AADDS111: Service principal unauthorized
Some automatically generated service principals are used to manage and create resources for a managed domain.
### Resolution
-The virtual network subnet for Microsoft Entra DS needs enough IP addresses for the automatically created resources. This IP address space includes the need to create replacement resources if there's a maintenance event. To minimize the risk of running out of available IP addresses, don't deploy additional resources, such as your own VMs, into the same virtual network subnet as the managed domain.
+The virtual network subnet for Domain Services needs enough IP addresses for the automatically created resources. This IP address space includes the need to create replacement resources if there's a maintenance event. To minimize the risk of running out of available IP addresses, don't deploy other resources, such as your own VMs, into the same virtual network subnet as the managed domain.
To resolve this alert, delete your existing managed domain and re-create it in a virtual network with a large enough IP address range. This process is disruptive as the managed domain is unavailable and any custom resources you've created like OUs or service accounts are lost.

1. [Delete the managed domain](delete-aadds.md) from your directory.
1. To update the virtual network IP address range, search for and select *Virtual network* in the Microsoft Entra admin center. Select the virtual network for the managed domain that has the small IP address range.
1. Under **Settings**, select *Address Space*.
-1. Update the address range by choosing the existing address range and editing it, or adding an additional address range. Make sure the new IP address range is large enough for the managed domain's subnet range. When ready, **Save** the changes.
+1. Update the address range by choosing the existing address range and editing it, or by adding another address range. Make sure the new IP address range is large enough for the managed domain's subnet range. When ready, **Save** the changes.
1. Select **Subnets** in the left-hand navigation.
-1. Choose the subnet you wish to edit, or create an additional subnet.
+1. Choose the subnet you wish to edit, or create another subnet.
1. Update or specify a large enough IP address range, then **Save** your changes.
1. [Create a replacement managed domain](tutorial-create-instance.md). Make sure you pick the updated virtual network subnet with a large enough IP address range.
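To gauge whether a candidate subnet range is large enough, this minimal Python sketch counts the usable addresses. It assumes Azure's standard behavior of reserving five IP addresses in every subnet (network and broadcast addresses plus three for internal services); the CIDR values are illustrative:

```python
import ipaddress

# Azure reserves 5 addresses in every subnet.
AZURE_RESERVED = 5

def usable_ips(cidr: str) -> int:
    """Addresses in the subnet that remain usable after Azure's reservations."""
    return ipaddress.ip_network(cidr).num_addresses - AZURE_RESERVED

print(usable_ips("10.0.0.0/28"))  # 11 usable addresses
print(usable_ips("10.0.0.0/24"))  # 251 usable addresses
```

A very small subnet such as a /28 leaves little headroom for the managed domain's own resources plus any replacements created during maintenance events.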
### Resolution
-Microsoft Entra DS creates additional resources to function properly, such as public IP addresses, virtual network interfaces, and a load balancer. If any of these resources are modified, the managed domain is in an unsupported state and can't be managed. For more information about these resources, see [Network resources used by Microsoft Entra DS](network-considerations.md#network-resources-used-by-azure-ad-ds).
+Domain Services creates resources to function properly, such as public IP addresses, virtual network interfaces, and a load balancer. If any of these resources are modified, the managed domain is in an unsupported state and can't be managed. For more information about these resources, see [Network resources used by Domain Services](network-considerations.md#network-resources-used-by-azure-ad-ds).
-This alert is generated when one of these required resources is modified and can't automatically be recovered by Microsoft Entra DS. To resolve the alert, [open an Azure support request][azure-support] to fix the instance.
+This alert is generated when one of these required resources is modified and can't automatically be recovered by Domain Services. To resolve the alert, [open an Azure support request][azure-support] to fix the instance.
## AADDS114: Subnet invalid
### Resolution
-This error is unrecoverable. To resolve the alert, [delete your existing managed domain](delete-aadds.md) and recreate it. If you have trouble deleting the managed domain, [open an Azure support request][azure-support] for additional troubleshooting assistance.
+This error is unrecoverable. To resolve the alert, [delete your existing managed domain](delete-aadds.md) and recreate it. If you have trouble deleting the managed domain, [open an Azure support request][azure-support] for more help.
## AADDS115: Resources are locked
### Resolution
-Resource locks can be applied to Azure resources to prevent change or deletion. As Microsoft Entra DS is a managed service, the Azure platform needs the ability to make configuration changes. If a resource lock is applied on some of the Microsoft Entra DS components, the Azure platform can't perform its management tasks.
+Resource locks can be applied to Azure resources to prevent change or deletion. As Domain Services is a managed service, the Azure platform needs the ability to make configuration changes. If a resource lock is applied on some of the Domain Services components, the Azure platform can't perform its management tasks.
-To check for resource locks on the Microsoft Entra DS components and remove them, complete the following steps:
+To check for resource locks on the Domain Services components and remove them, complete the following steps:
1. For each of the managed domain's network components in your resource group, such as virtual network, network interface, or public IP address, check the operation logs in the Microsoft Entra admin center. These operation logs should indicate why an operation is failing and where a resource lock is applied.
1. Select the resource where a lock is applied, then under **Locks**, select and remove the lock(s).
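If you prefer the command line, the steps above can also be done with the Azure CLI. This is a sketch only; the resource group and lock names below are placeholders, not values from this article:

```shell
# Placeholder: replace with your managed domain's resource group.
RG="aadds-rg"

# List any resource locks applied within the resource group.
az lock list --resource-group "$RG" --output table

# Remove a specific lock once you've identified it (name is illustrative).
az lock delete --name "example-lock" --resource-group "$RG"
```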
### Resolution
-Policies are applied to Azure resources and resource groups that control what configuration actions are allowed. As Microsoft Entra DS is a managed service, the Azure platform needs the ability to make configuration changes. If a policy is applied on some of the Microsoft Entra DS components, the Azure platform may not be able to perform its management tasks.
+Policies are applied to Azure resources and resource groups that control what configuration actions are allowed. As Domain Services is a managed service, the Azure platform needs the ability to make configuration changes. If a policy is applied on some of the Domain Services components, the Azure platform may not be able to perform its management tasks.
-To check for applied policies on the Microsoft Entra DS components and update them, complete the following steps:
+To check for applied policies on the Domain Services components and update them, complete the following steps:
1. For each of the managed domain's network components in your resource group, such as virtual network, NIC, or public IP address, check the operation logs in the Microsoft Entra admin center. These operation logs should indicate why an operation is failing and where a restrictive policy is applied.
1. Select the resource where a policy is applied, then under **Policies**, select and edit the policy so it's less restrictive.
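As with resource locks, you can inspect policy assignments from the Azure CLI. A hedged sketch, assuming a placeholder resource-group name:

```shell
# Placeholder: replace with your managed domain's resource group.
RG="aadds-rg"

# List policy assignments scoped to the resource group to spot
# restrictive policies affecting the managed domain's components.
az policy assignment list --resource-group "$RG" --output table
```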
> [!WARNING]
> If a custom attribute's LDAPName conflicts with an existing AD built-in schema attribute, it can't be onboarded and results in an error. Contact Microsoft Support if your scenario is blocked. For more information, see [Onboarding Custom Attributes](https://aka.ms/aadds-customattr).
-Review the [Microsoft Entra DS Health](check-health.md) alert and see which Microsoft Entra extension properties failed to onboard successfully. Navigate to the **Custom Attributes** page to find the expected Microsoft Entra DS LDAPName of the extension. Make sure the LDAPName doesn't conflict with another AD schema attribute, or that it's one of the allowed built-in AD attributes.
+Review the [Domain Services Health](check-health.md) alert and see which Microsoft Entra extension properties failed to onboard successfully. Navigate to the **Custom Attributes** page to find the expected Domain Services LDAPName of the extension. Make sure the LDAPName doesn't conflict with another AD schema attribute, or that it's one of the allowed built-in AD attributes.
Then follow these steps to retry onboarding the custom attribute in the **Custom Attributes** page:
1. Wait for the health alert to be removed, or verify that the corresponding attributes have been removed from the **AADDSCustomAttributes** OU from a domain-joined VM.
1. Select **Add** and choose the desired attributes again, then click **Save**.
-Upon successful onboarding, Microsoft Entra DS will back fill synchronized users and groups with the onboarded custom attribute values. The custom attribute values appear gradually, depending on the size of the tenant. To check the backfill status, go to [Microsoft Entra DS Health](check-health.md) and verify the **Synchronization with Microsoft Entra ID** monitor timestamp has updated within the last hour.
+Upon successful onboarding, Domain Services will backfill synchronized users and groups with the onboarded custom attribute values. The custom attribute values appear gradually, depending on the size of the tenant. To check the backfill status, go to [Domain Services Health](check-health.md) and verify the **Synchronization with Microsoft Entra ID** monitor timestamp has updated within the last hour.
## AADDS500: Synchronization has not completed in a while
### Resolution
-[Check the Microsoft Entra DS health](check-health.md) for any alerts that indicate problems in the configuration of the managed domain. Problems with the network configuration can block the synchronization from Microsoft Entra ID. If you're able to resolve alerts that indicate a configuration issue, wait two hours and check back to see if the synchronization has successfully completed.
+[Check the Domain Services health](check-health.md) for any alerts that indicate problems in the configuration of the managed domain. Problems with the network configuration can block the synchronization from Microsoft Entra ID. If you're able to resolve alerts that indicate a configuration issue, wait two hours and check back to see if the synchronization has successfully completed.
The following common reasons cause synchronization to stop in a managed domain:
-* Required network connectivity is blocked. To learn more about how to check the Azure virtual network for problems and what's required, see [troubleshoot network security groups](alert-nsg.md) and the [network requirements for Microsoft Entra DS](network-considerations.md).
+* Required network connectivity is blocked. To learn more about how to check the Azure virtual network for problems and what's required, see [troubleshoot network security groups](alert-nsg.md) and the [network requirements for Domain Services](network-considerations.md).
* Password synchronization wasn't set up or successfully completed when the managed domain was deployed. You can set up password synchronization for [cloud-only users](tutorial-create-instance.md#enable-user-accounts-for-azure-ad-ds) or [hybrid users from on-prem](tutorial-configure-password-hash-sync.md).

## AADDS501: A backup has not been taken in a while
### Resolution
-[Check the Microsoft Entra DS health](check-health.md) for alerts that indicate problems in the configuration of the managed domain. Problems with the network configuration can block the Azure platform from successfully taking backups. If you're able to resolve alerts that indicate a configuration issue, wait two hours and check back to see if the synchronization has successfully completed.
+[Check the Domain Services health](check-health.md) for alerts that indicate problems in the configuration of the managed domain. Problems with the network configuration can block the Azure platform from successfully taking backups. If you're able to resolve alerts that indicate a configuration issue, wait two hours and check back to see if the synchronization has successfully completed.
## AADDS503: Suspension due to disabled subscription
### Resolution

> [!WARNING]
-> If a managed domain is suspended for an extended period of time, there's a danger of it being deleted. Resolve the reason for suspension as quickly as possible. For more information, see [Understand the suspended states for Microsoft Entra DS](suspension.md).
+> If a managed domain is suspended for an extended period of time, there's a danger of it being deleted. Resolve the reason for suspension as quickly as possible. For more information, see [Understand the suspended states for Domain Services](suspension.md).
-Microsoft Entra DS requires an active subscription. If the Azure subscription that the managed domain was associated with isn't active, you must renew it to reactivate the subscription.
+Domain Services requires an active subscription. If the Azure subscription that the managed domain was associated with isn't active, you must renew it to reactivate the subscription.
1. [Renew your Azure subscription](../cost-management-billing/manage/subscription-disabled.md).
-2. Once the subscription is renewed, a Microsoft Entra DS notification lets you re-enable the managed domain.
+2. Once the subscription is renewed, a Domain Services notification lets you re-enable the managed domain.
When the managed domain is enabled again, the managed domain's health automatically updates itself within two hours and removes the alert.
### Resolution

> [!WARNING]
-> If a managed domain is suspended for an extended period of time, there's a danger of it being deleted. Resolve the reason for suspension as quickly as possible. For more information, see [Understand the suspended states for Microsoft Entra DS](suspension.md).
+> If a managed domain is suspended for an extended period of time, there's a danger of it being deleted. Resolve the reason for suspension as quickly as possible. For more information, see [Understand the suspended states for Domain Services](suspension.md).
-[Check the Microsoft Entra DS health](check-health.md) for alerts that indicate problems in the configuration of the managed domain. If you're able to resolve alerts that indicate a configuration issue, wait two hours and check back to see if the synchronization has completed. When ready, [open an Azure support request][azure-support] to re-enable the managed domain.
+[Check the Domain Services health](check-health.md) for alerts that indicate problems in the configuration of the managed domain. If you're able to resolve alerts that indicate a configuration issue, wait two hours and check back to see if the synchronization has completed. When ready, [open an Azure support request][azure-support] to re-enable the managed domain.
## AADDS600: Unresolved health alerts for 30 days
### Resolution

> [!WARNING]
-> If a managed domain is suspended for an extended period of time, there's a danger of it being deleted. Resolve the reason for suspension as quickly as possible. For more information, see [Understand the suspended states for Microsoft Entra DS](suspension.md).
+> If a managed domain is suspended for an extended period of time, there's a danger of it being deleted. Resolve the reason for suspension as quickly as possible. For more information, see [Understand the suspended states for Domain Services](suspension.md).
-[Check the Microsoft Entra DS health](check-health.md) for alerts that indicate problems in the configuration of the managed domain. If you're able to resolve alerts that indicate a configuration issue, wait six hours and check back to see if the alert is removed. [Open an Azure support request][azure-support] if you need assistance.
+[Check the Domain Services health](check-health.md) for alerts that indicate problems in the configuration of the managed domain. If you're able to resolve alerts that indicate a configuration issue, wait six hours and check back to see if the alert is removed. [Open an Azure support request][azure-support] if you need assistance.
## Next steps
-If you still have issues, [open an Azure support request][azure-support] for additional troubleshooting assistance.
+If you still have issues, [open an Azure support request][azure-support] for more troubleshooting help.
<!-- INTERNAL LINKS -->
[azure-support]: ../active-directory/fundamentals/active-directory-troubleshooting-support-howto.md
active-directory-domain-services Troubleshoot Domain Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-domain-join.md
Previously updated : 01/29/2023
Last updated : 09/21/2023
#Customer intent: As a directory administrator, I want to troubleshoot why VMs can't join a Microsoft Entra Domain Services managed domain.
# Troubleshoot domain-join problems with a Microsoft Entra Domain Services managed domain
-When you try to join a virtual machine (VM) or connect an application to a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain, you may get an error that you're unable to do so. To troubleshoot domain-join problems, review at which of the following points you have an issue:
+When you try to join a virtual machine (VM) or connect an application to a Microsoft Entra Domain Services managed domain, you may get an error that you're unable to do so. To troubleshoot domain-join problems, review at which of the following points you have an issue:
-* If you don't receive an authentication prompt, the VM or application can't connect to the Microsoft Entra DS managed domain.
+* If you don't receive an authentication prompt, the VM or application can't connect to the Domain Services managed domain.
    * Start to troubleshoot [connectivity issues for domain-join](#connectivity-issues-for-domain-join).
* If you receive an error during authentication, the connection to the managed domain is successful.
    * Start to troubleshoot [credentials-related issues during domain-join](#credentials-related-issues-during-domain-join).
If the VM can't find the managed domain, there's usually a network connection or configuration issue.
### Network Security Group (NSG) configuration
-When you create a managed domain, a network security group is also created with the appropriate rules for successful domain operation. If you edit or create additional network security group rules, you may unintentionally block ports required for Microsoft Entra DS to provide connection and authentication services. These network security group rules can cause issues such as password sync not completing, users not being able to sign in, or domain-join issues.
+When you create a managed domain, a network security group is also created with the appropriate rules for successful domain operation. If you edit or create additional network security group rules, you may unintentionally block ports required for Domain Services to provide connection and authentication services. These network security group rules can cause issues such as password sync not completing, users not being able to sign in, or domain-join issues.
If you continue to have connection issues, review the following troubleshooting steps:
active-directory-domain-services Troubleshoot Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-sign-in.md
Previously updated : 01/29/2023
Last updated : 09/21/2023
#Customer intent: As a directory administrator, I want to troubleshoot user account sign in problems in a Microsoft Entra Domain Services managed domain.
# Troubleshoot account sign-in problems with a Microsoft Entra Domain Services managed domain
-The most common reasons for a user account that can't sign in to a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain include the following scenarios:
+The most common reasons for a user account that can't sign in to a Microsoft Entra Domain Services managed domain include the following scenarios:
-* [The account isn't synchronized into Microsoft Entra DS yet.](#account-isnt-synchronized-into-azure-ad-ds-yet)
-* [Microsoft Entra DS doesn't have the password hashes to let the account sign in.](#azure-ad-ds-doesnt-have-the-password-hashes)
+* [The account isn't synchronized into Domain Services yet.](#account-isnt-synchronized-into-azure-ad-ds-yet)
+* [Domain Services doesn't have the password hashes to let the account sign in.](#azure-ad-ds-doesnt-have-the-password-hashes)
* [The account is locked out.](#the-account-is-locked-out)

> [!TIP]
-> Microsoft Entra DS can't synchronize in credentials for accounts that are external to the Microsoft Entra tenant. External users can't sign in to the Microsoft Entra DS managed domain.
+> Domain Services can't synchronize in credentials for accounts that are external to the Microsoft Entra tenant. External users can't sign in to the Domain Services managed domain.
<a name='account-isnt-synchronized-into-azure-ad-ds-yet'></a>
-## Account isn't synchronized into Microsoft Entra DS yet
+## Account isn't synchronized into Domain Services yet
Depending on the size of your directory, it may take a while for user accounts and credential hashes to be available in a managed domain. For large directories, this initial one-way sync from Microsoft Entra ID can take a few hours, and up to a day or two. Make sure that you wait long enough before retrying authentication.
-For hybrid environments that user Microsoft Entra Connect to synchronize on-premises directory data into Microsoft Entra ID, make sure that you run the latest version of Microsoft Entra Connect and have [configured Microsoft Entra Connect to perform a full synchronization after enabling Microsoft Entra DS][azure-ad-connect-phs]. If you disable Microsoft Entra DS and then re-enable, you have to follow these steps again.
+For hybrid environments that use Microsoft Entra Connect to synchronize on-premises directory data into Microsoft Entra ID, make sure that you run the latest version of Microsoft Entra Connect and have [configured Microsoft Entra Connect to perform a full synchronization after enabling Domain Services][azure-ad-connect-phs]. If you disable Domain Services and then re-enable, you have to follow these steps again.
If you continue to have issues with accounts not synchronizing through Microsoft Entra Connect, restart the Azure AD Sync Service. From the computer with Microsoft Entra Connect installed, open a command prompt window, then run the following commands:
net stop 'Microsoft Azure AD Sync'
net start 'Microsoft Azure AD Sync'
<a name='azure-ad-ds-doesnt-have-the-password-hashes'></a>
-## Microsoft Entra DS doesn't have the password hashes
+## Domain Services doesn't have the password hashes
-Microsoft Entra ID doesn't generate or store password hashes in the format that's required for NTLM or Kerberos authentication until you enable Microsoft Entra DS for your tenant. For security reasons, Microsoft Entra ID also doesn't store any password credentials in clear-text form. Therefore, Microsoft Entra ID can't automatically generate these NTLM or Kerberos password hashes based on users' existing credentials.
+Microsoft Entra ID doesn't generate or store password hashes in the format that's required for NTLM or Kerberos authentication until you enable Domain Services for your tenant. For security reasons, Microsoft Entra ID also doesn't store any password credentials in clear-text form. Therefore, Microsoft Entra ID can't automatically generate these NTLM or Kerberos password hashes based on users' existing credentials.
### Hybrid environments with on-premises synchronization
-For hybrid environments using Microsoft Entra Connect to synchronize from an on-premises AD DS environment, you can locally generate and synchronize the required NTLM or Kerberos password hashes into Microsoft Entra ID. After you create your managed domain, [enable password hash synchronization to Microsoft Entra Domain Services][azure-ad-connect-phs]. Without completing this password hash synchronization step, you can't sign in to an account using the managed domain. If you disable Microsoft Entra DS and then re-enable, you have to follow those steps again.
+For hybrid environments using Microsoft Entra Connect to synchronize from an on-premises AD DS environment, you can locally generate and synchronize the required NTLM or Kerberos password hashes into Microsoft Entra ID. After you create your managed domain, [enable password hash synchronization to Microsoft Entra Domain Services][azure-ad-connect-phs]. Without completing this password hash synchronization step, you can't sign in to an account using the managed domain. If you disable Domain Services and then re-enable, you have to follow those steps again.
-For more information, see [How password hash synchronization works for Microsoft Entra DS][phs-process].
+For more information, see [How password hash synchronization works for Domain Services][phs-process].
### Cloud-only environments with no on-premises synchronization
-Managed domains with no on-premises synchronization, only accounts in Microsoft Entra ID, also need to generate the required NTLM or Kerberos password hashes. If a cloud-only account can't sign in, has a password change process successfully completed for the account after enabling Microsoft Entra DS?
+Managed domains with no on-premises synchronization (where accounts exist only in Microsoft Entra ID) also need to generate the required NTLM or Kerberos password hashes. If a cloud-only account can't sign in, has the account's password been changed since you enabled Domain Services?
* **No, the password has not been changed.**
  * [Change the password for the account][enable-user-accounts] to generate the required password hashes, then wait for 15 minutes before you try to sign in again.
- * If you disable Microsoft Entra DS and then re-enable, each account must follow the steps again to change their password and generate the required password hashes.
+ * If you disable Domain Services and then re-enable, each account must follow the steps again to change their password and generate the required password hashes.
* **Yes, the password has been changed.**
  * Try to sign in using the *UPN* format, such as `driley@aaddscontoso.com`, instead of the *SAMAccountName* format like `AADDSCONTOSO\deeriley`.
  * The *SAMAccountName* may be automatically generated for users whose UPN prefix is overly long or is the same as another user on the managed domain. The *UPN* format is guaranteed to be unique within a Microsoft Entra tenant.
A user account in a managed domain is locked out when a defined threshold for unsuccessful sign-in attempts is met.
By default, if there are 5 bad password attempts in 2 minutes, the account is locked out for 30 minutes.
-For more information and how to resolve account lockout issues, see [Troubleshoot account lockout problems in Microsoft Entra DS][troubleshoot-account-lockout].
+For more information and how to resolve account lockout issues, see [Troubleshoot account lockout problems in Domain Services][troubleshoot-account-lockout].
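The default lockout behavior described above (5 bad password attempts within a 2-minute window lock the account for 30 minutes) can be sketched as a toy model. This is illustrative only: the real policy is evaluated by the managed domain's domain controllers, and the constants below simply mirror the documented defaults.

```python
from datetime import datetime, timedelta

# Toy model of the default Domain Services lockout policy: 5 bad password
# attempts within a 2-minute observation window lock the account for
# 30 minutes. Illustrative only -- the real evaluation happens on the
# managed domain's domain controllers.
THRESHOLD = 5
OBSERVATION_WINDOW = timedelta(minutes=2)
LOCKOUT_DURATION = timedelta(minutes=30)

def is_locked_out(bad_attempts, now):
    """Return True if `now` falls inside a lockout triggered by `bad_attempts`."""
    attempts = sorted(bad_attempts)
    for i in range(len(attempts) - THRESHOLD + 1):
        window = attempts[i:i + THRESHOLD]
        # Did THRESHOLD attempts land inside one observation window?
        if window[-1] - window[0] <= OBSERVATION_WINDOW:
            lockout_start = window[-1]
            if lockout_start <= now < lockout_start + LOCKOUT_DURATION:
                return True
    return False

start = datetime(2023, 9, 21, 9, 0, 0)
attempts = [start + timedelta(seconds=15 * i) for i in range(5)]  # 5 attempts in 60 seconds
print(is_locked_out(attempts, start + timedelta(minutes=10)))   # inside the 30-minute lockout
print(is_locked_out(attempts, start + timedelta(minutes=45)))   # lockout has expired
```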
## Next steps
active-directory-domain-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot.md
# Common errors and troubleshooting steps for Microsoft Entra Domain Services
-As a central part of identity and authentication for applications, Microsoft Entra Domain Services (Microsoft Entra DS) sometimes has problems. If you run into issues, there are some common error messages and associated troubleshooting steps to help you get things running again. At any time, you can also [open an Azure support request][azure-support] for additional troubleshooting assistance.
+As a central part of identity and authentication for applications, Microsoft Entra Domain Services sometimes has problems. If you run into issues, there are some common error messages and associated troubleshooting steps to help you get things running again. At any time, you can also [open an Azure support request][azure-support] for more troubleshooting help.
-This article provides troubleshooting steps for common issues in Microsoft Entra DS.
+This article provides troubleshooting steps for common issues in Domain Services.
<a name='you-cannot-enable-azure-ad-domain-services-for-your-azure-ad-directory'></a>

## You cannot enable Microsoft Entra Domain Services for your Microsoft Entra directory
-If you have problems enabling Microsoft Entra DS, review the following common errors and steps to resolve them:
+If you have problems enabling Domain Services, review the following common errors and steps to resolve them:
| **Sample error message** | **Resolution** |
|:--- |:--- |
| *The name aaddscontoso.com is already in use on this network. Specify a name that is not in use.* | [Domain name conflict in the virtual network](troubleshoot.md#domain-name-conflict) |
-| *Domain Services could not be enabled in this Microsoft Entra tenant. The service does not have adequate permissions to the application called 'Microsoft Entra Domain Services Sync'. Delete the application called 'Microsoft Entra Domain Services Sync' and then try to enable Domain Services for your Microsoft Entra tenant.* |[Domain Services doesn't have adequate permissions to the Microsoft Entra Domain Services Sync application](troubleshoot.md#inadequate-permissions) |
+| *Domain Services could not be enabled in this Microsoft Entra tenant. The service does not have adequate permissions to the application called Microsoft Entra Domain Services Sync. Delete the application called 'Microsoft Entra Domain Services Sync' and then try to enable Domain Services for your Microsoft Entra tenant.* |[Domain Services doesn't have adequate permissions to the Microsoft Entra Domain Services Sync application](troubleshoot.md#inadequate-permissions) |
| *Domain Services could not be enabled in this Microsoft Entra tenant. The Domain Services application in your Microsoft Entra tenant does not have the required permissions to enable Domain Services. Delete the application with the application identifier d87dcbc6-a371-462e-88e3-28ad15ec4e64 and then try to enable Domain Services for your Microsoft Entra tenant.* | [The Domain Services application isn't configured properly in your Microsoft Entra tenant](troubleshoot.md#invalid-configuration) |
| *Domain Services could not be enabled in this Microsoft Entra tenant. The Microsoft Entra application is disabled in your Microsoft Entra tenant. Enable the application with the application identifier 00000002-0000-0000-c000-000000000000 and then try to enable Domain Services for your Microsoft Entra tenant.* | [The Microsoft Graph application is disabled in your Microsoft Entra tenant](troubleshoot.md#microsoft-graph-disabled) |
If you have problems enabling Microsoft Entra DS, review the following common errors and steps to resolve them:
**Resolution**
-Check that you don't have an existing AD DS environment with the same domain name on the same, or a peered, virtual network. For example, you may have an AD DS domain named *aaddscontoso.com* that runs on Azure VMs. When you try to enable a Microsoft Entra DS managed domain with the same domain name of *aaddscontoso.com* on the virtual network, the requested operation fails.
+Check that you don't have an existing AD DS environment with the same domain name on the same, or a peered, virtual network. For example, you may have an AD DS domain named *aaddscontoso.com* that runs on Azure VMs. When you try to enable a Domain Services managed domain with the same domain name of *aaddscontoso.com* on the virtual network, the requested operation fails.
-This failure is due to name conflicts for the domain name on the virtual network. A DNS lookup checks if an existing AD DS environment responds on the requested domain name. To resolve this failure, use a different name to set up your managed domain, or de-provision the existing AD DS domain and then try again to enable Microsoft Entra DS.
+This failure is due to name conflicts for the domain name on the virtual network. A DNS lookup checks if an existing AD DS environment responds on the requested domain name. To resolve this failure, use a different name to set up your managed domain, or deprovision the existing AD DS domain and then try again to enable Domain Services.
### Inadequate permissions

**Error message**
-*Domain Services could not be enabled in this Microsoft Entra tenant. The service does not have adequate permissions to the application called 'Microsoft Entra Domain Services Sync'. Delete the application called 'Microsoft Entra Domain Services Sync' and then try to enable Domain Services for your Microsoft Entra tenant.*
+*Domain Services could not be enabled in this Microsoft Entra tenant. The service does not have adequate permissions to the application called Microsoft Entra Domain Services Sync. Delete the application called 'Microsoft Entra Domain Services Sync' and then try to enable Domain Services for your Microsoft Entra tenant.*
**Resolution**
-Check if there's an application named *Microsoft Entra Domain Services Sync* in your Microsoft Entra directory. If this application exists, delete it, then try again to enable Microsoft Entra DS. To check for an existing application and delete it if needed, complete the following steps:
+Check if there's an application named *Microsoft Entra Domain Services Sync* in your Microsoft Entra directory. If this application exists, delete it, then try again to enable Domain Services. To check for an existing application and delete it if needed, complete the following steps:
1. In the [Microsoft Entra admin center](https://entra.microsoft.com), select **Microsoft Entra ID** from the left-hand navigation menu.
1. Select **Enterprise applications**. Choose *All applications* from the **Application Type** drop-down menu, then select **Apply**.
1. In the search box, enter *Microsoft Entra Domain Services Sync*. If the application exists, select it and choose **Delete**.
-1. Once you've deleted the application, try to enable Microsoft Entra DS again.
+1. Once you've deleted the application, try to enable Domain Services again.
### Invalid configuration
Check if there's an application named *Microsoft Entra Domain Services Sync* in your Microsoft Entra directory.
**Resolution**
-Check if you have an existing application named *AzureActiveDirectoryDomainControllerServices* with an application identifier of *d87dcbc6-a371-462e-88e3-28ad15ec4e64* in your Microsoft Entra directory. If this application exists, delete it and then try again to enable Microsoft Entra DS.
+Check if you have an existing application named *AzureActiveDirectoryDomainControllerServices* with an application identifier of *d87dcbc6-a371-462e-88e3-28ad15ec4e64* in your Microsoft Entra directory. If this application exists, delete it and then try again to enable Domain Services.
Use the following PowerShell script to search for an existing application instance and delete it if needed:
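The script itself isn't reproduced in this change log. As a hypothetical sketch using the Microsoft Graph PowerShell SDK (the article's actual script may use different cmdlets), the lookup-and-delete logic would be:

```powershell
# Sketch only -- assumes the Microsoft Graph PowerShell SDK and an
# account with permission to read and delete application registrations.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

# Look up the application by its well-known application identifier.
$app = Get-MgApplication -Filter "appId eq 'd87dcbc6-a371-462e-88e3-28ad15ec4e64'"
if ($app) {
    # Delete the stale application so Domain Services can be enabled again.
    Remove-MgApplication -ApplicationId $app.Id
}
```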
To check the status of this application and enable it if needed, complete the following steps:
1. Choose *All applications* from the **Application Type** drop-down menu, then select **Apply**.
1. In the search box, enter *00000002-0000-0000-c000-000000000000*. Select the application, then choose **Properties**.
1. If **Enabled for users to sign-in** is set to *No*, set the value to *Yes*, then select **Save**.
-1. Once you've enabled the application, try to enable Microsoft Entra DS again.
+1. Once you've enabled the application, try to enable Domain Services again.
<a name='users-are-unable-to-sign-in-to-the-azure-ad-domain-services-managed-domain'></a>
To check the status of this application and enable it if needed, complete the following steps:
If one or more users in your Microsoft Entra tenant can't sign in to the managed domain, complete the following troubleshooting steps:
-* **Credentials format** - Try using the UPN format to specify credentials, such as `dee@aaddscontoso.onmicrosoft.com`. The UPN format is the recommended way to specify credentials in Microsoft Entra DS. Make sure this UPN is configured correctly in Microsoft Entra ID.
+* **Credentials format** - Try using the UPN format to specify credentials, such as `dee@aaddscontoso.onmicrosoft.com`. The UPN format is the recommended way to specify credentials in Domain Services. Make sure this UPN is configured correctly in Microsoft Entra ID.
The *SAMAccountName* for your account, such as *AADDSCONTOSO\driley*, may be autogenerated if there are multiple users with the same UPN prefix in your tenant or if your UPN prefix is overly long. Therefore, the *SAMAccountName* format for your account may be different from what you expect or use in your on-premises domain.
If one or more users in your Microsoft Entra tenant can't sign in to the managed domain, complete the following troubleshooting steps:
net start "Microsoft Azure AD Sync"
```
- * **Cloud-only accounts**: If the affected user account is a cloud-only user account, make sure that the [user has changed their password after you enabled Microsoft Entra DS][cloud-only-passwords]. This password reset causes the required credential hashes for the managed domain to be generated.
+ * **Cloud-only accounts**: If the affected user account is a cloud-only user account, make sure that the [user has changed their password after you enabled Domain Services][cloud-only-passwords]. This password reset causes the required credential hashes for the managed domain to be generated.
* **Verify the user account is active**: By default, five invalid password attempts within 2 minutes on the managed domain cause a user account to be locked out for 30 minutes. The user can't sign in while the account is locked out. After 30 minutes, the user account is automatically unlocked.
  * Invalid password attempts on the managed domain don't lock out the user account in Microsoft Entra ID. The user account is locked out only within the managed domain. Check the user account status in the *Active Directory Administrative Console (ADAC)* using the [management VM][management-vm], not in Microsoft Entra ID.
  * You can also [configure fine-grained password policies][password-policy] to change the default lockout threshold and duration.
-* **External accounts** - Check that the affected user account isn't an external account in the Microsoft Entra tenant. Examples of external accounts include Microsoft accounts like `dee@live.com` or user accounts from an external Microsoft Entra directory. Microsoft Entra DS doesn't store credentials for external user accounts so they can't sign in to the managed domain.
+* **External accounts** - Check that the affected user account isn't an external account in the Microsoft Entra tenant. Examples of external accounts include Microsoft accounts like `dee@live.com` or user accounts from an external Microsoft Entra directory. Domain Services doesn't store credentials for external user accounts so they can't sign in to the managed domain.
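When working through the lockout checks above from the management VM, the Active Directory PowerShell module offers a quick way to inspect and unlock an account. A sketch (the account and policy names are illustrative assumptions, not values from the article):

```powershell
# Sketch: run on a domain-joined management VM with the Active Directory
# PowerShell module (RSAT) installed. The account name is illustrative.
Get-ADUser -Identity driley -Properties LockedOut, badPwdCount |
    Select-Object SamAccountName, LockedOut, badPwdCount

# Unlock the account without waiting for the 30-minute lockout to expire.
Unlock-ADAccount -Identity driley

# Optionally relax the lockout settings via a fine-grained password policy.
# The policy name is an assumption -- check the policies in your domain.
Set-ADFineGrainedPasswordPolicy -Identity "AADDSCustomPasswordPolicy" `
    -LockoutThreshold 10 -LockoutDuration "0.00:30:00"
```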
## There are one or more alerts on your managed domain
To fully remove a user account from a managed domain, delete the user permanently from Microsoft Entra ID.
## Next steps
-If you continue to have issues, [open an Azure support request][azure-support] for additional troubleshooting assistance.
+If you continue to have issues, [open an Azure support request][azure-support] for more troubleshooting help.
<!-- INTERNAL LINKS -->
[cloud-only-passwords]: tutorial-create-instance.md#enable-user-accounts-for-azure-ad-ds
active-directory-domain-services Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-networking.md
# Tutorial: Configure virtual networking for a Microsoft Entra Domain Services managed domain
-To provide connectivity to users and applications, a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain is deployed into an Azure virtual network subnet. This virtual network subnet should only be used for the managed domain resources provided by the Azure platform.
+To provide connectivity to users and applications, a Microsoft Entra Domain Services managed domain is deployed into an Azure virtual network subnet. This virtual network subnet should only be used for the managed domain resources provided by the Azure platform.
-When you create your own VMs and applications, they shouldn't be deployed into the same virtual network subnet. Instead, you should create and deploy your applications into a separate virtual network subnet, or in a separate virtual network that's peered to the Microsoft Entra DS virtual network.
+When you create your own VMs and applications, they shouldn't be deployed into the same virtual network subnet. Instead, you should create and deploy your applications into a separate virtual network subnet, or in a separate virtual network that's peered to the Domain Services virtual network.
-This tutorial shows you how to create and configure a dedicated virtual network subnet or how to peer a different network to the Microsoft Entra DS managed domain's virtual network.
+This tutorial shows you how to create and configure a dedicated virtual network subnet or how to peer a different network to the Domain Services managed domain's virtual network.
In this tutorial, you learn how to:

> [!div class="checklist"]
-> * Understand the virtual network connectivity options for domain-joined resources to Microsoft Entra DS
-> * Create an IP address range and additional subnet in the Microsoft Entra DS virtual network
-> * Configure virtual network peering to a network that's separate from Microsoft Entra DS
+> * Understand the virtual network connectivity options for domain-joined resources to Domain Services
+> * Create an IP address range and additional subnet in the Domain Services virtual network
+> * Configure virtual network peering to a network that's separate from Domain Services
If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
To complete this tutorial, you need the following resources and privileges:
* If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* A Microsoft Entra tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory.
  * If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Microsoft Entra DS.
-* You need Domain Services Contributor Azure role to create the required Microsoft Entra DS resources.
+* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services.
+* You need the Domain Services Contributor Azure role to create the required Domain Services resources.
* A Microsoft Entra Domain Services managed domain enabled and configured in your Microsoft Entra tenant.
  * If needed, the first tutorial [creates and configures a Microsoft Entra Domain Services managed domain][create-azure-ad-ds-instance].
In this tutorial, you create and configure the managed domain using the Microsoft Entra admin center.
## Application workload connectivity options
-In the previous tutorial, a managed domain was created that used some default configuration options for the virtual network. These default options created an Azure virtual network and virtual network subnet. The Microsoft Entra DS domain controllers that provide the managed domain services are connected to this virtual network subnet.
+In the previous tutorial, a managed domain was created that used some default configuration options for the virtual network. These default options created an Azure virtual network and virtual network subnet. The Domain Services domain controllers that provide the managed domain services are connected to this virtual network subnet.
When you create and run VMs that need to use the managed domain, network connectivity needs to be provided. This network connectivity can be provided in one of the following ways:

* Create an additional virtual network subnet in the managed domain's virtual network. This additional subnet is where you create and connect your VMs.
- * As the VMs are part of the same virtual network, they can automatically perform name resolution and communicate with the Microsoft Entra DS domain controllers.
+ * As the VMs are part of the same virtual network, they can automatically perform name resolution and communicate with the Domain Services domain controllers.
* Configure Azure virtual network peering from the managed domain's virtual network to one or more separate virtual networks. These separate virtual networks are where you create and connect your VMs.
- * When you configure virtual network peering, you must also configure DNS settings to use name resolution back to the Microsoft Entra DS domain controllers.
+ * When you configure virtual network peering, you must also configure DNS settings to use name resolution back to the Domain Services domain controllers.
Usually, you only use one of these network connectivity options. The choice is often down to how you wish to manage and separate your Azure resources.
-* If you want to manage Microsoft Entra DS and connected VMs as one group of resources, you can create an additional virtual network subnet for VMs.
-* If you want to separate the management of Microsoft Entra DS and then any connected VMs, you can use virtual network peering.
+* If you want to manage Domain Services and connected VMs as one group of resources, you can create an additional virtual network subnet for VMs.
+* If you want to separate the management of Domain Services and then any connected VMs, you can use virtual network peering.
* You may also choose to use virtual network peering to provide connectivity to existing VMs in your Azure environment that are connected to an existing virtual network.

In this tutorial, you only need to configure one of these virtual network connectivity options.
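Either connectivity option can be sketched with the Az PowerShell module. The names and address ranges below are illustrative assumptions, not values from the tutorial:

```powershell
# Sketch using the Az PowerShell module; all names and address ranges
# are placeholders -- adjust them to your environment.

# Option 1: add a workload subnet to the managed domain's virtual network.
$vnet = Get-AzVirtualNetwork -Name "aadds-vnet" -ResourceGroupName "myResourceGroup"
Add-AzVirtualNetworkSubnetConfig -Name "Workloads" -AddressPrefix "10.0.5.0/24" -VirtualNetwork $vnet
$vnet | Set-AzVirtualNetwork

# Option 2: peer a separate application virtual network instead.
# A matching peering is also needed in the opposite direction.
$appVnet = Get-AzVirtualNetwork -Name "myVnet" -ResourceGroupName "myResourceGroup"
Add-AzVirtualNetworkPeering -Name "myVnet-to-aadds" -VirtualNetwork $appVnet -RemoteVirtualNetworkId $vnet.Id
```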
When you create a VM that needs to use the managed domain, make sure you select
## Configure virtual network peering
-You may have an existing Azure virtual network for VMs, or wish to keep your managed domain virtual network separate. To use the managed domain, VMs in other virtual networks need a way to communicate with the Microsoft Entra DS domain controllers. This connectivity can be provided using Azure virtual network peering.
+You may have an existing Azure virtual network for VMs, or wish to keep your managed domain virtual network separate. To use the managed domain, VMs in other virtual networks need a way to communicate with the Domain Services domain controllers. This connectivity can be provided using Azure virtual network peering.
With Azure virtual network peering, two virtual networks are connected together, without the need for a virtual private network (VPN) device. Network peering lets you quickly connect virtual networks and define traffic flows across your Azure environment.
To peer a virtual network to the managed domain virtual network, complete the following steps:
Leave any other defaults for virtual network access or forwarded traffic unless you have specific requirements for your environment, then select **OK**.
-1. It takes a few moments to create the peering on both the Microsoft Entra DS virtual network and the virtual network you selected. When ready, the **Peering status** reports *Connected*, as shown in the following example:
+1. It takes a few moments to create the peering on both the Domain Services virtual network and the virtual network you selected. When ready, the **Peering status** reports *Connected*, as shown in the following example:
![Successfully connected peered networks in the Microsoft Entra admin center](./media/tutorial-configure-networking/connected-peering.png)
Before VMs in the peered virtual network can use the managed domain, configure the DNS servers for name resolution.
### Configure DNS servers in the peered virtual network
-For VMs and applications in the peered virtual network to successfully talk to the managed domain, the DNS settings must be updated. The IP addresses of the Microsoft Entra DS domain controllers must be configured as the DNS servers on the peered virtual network. There are two ways to configure the domain controllers as DNS servers for the peered virtual network:
+For VMs and applications in the peered virtual network to successfully talk to the managed domain, the DNS settings must be updated. The IP addresses of the Domain Services domain controllers must be configured as the DNS servers on the peered virtual network. There are two ways to configure the domain controllers as DNS servers for the peered virtual network:
-* Configure the Azure virtual network DNS servers to use the Microsoft Entra DS domain controllers.
+* Configure the Azure virtual network DNS servers to use the Domain Services domain controllers.
* Configure the existing DNS server in use on the peered virtual network to use conditional DNS forwarding to direct queries to the managed domain. These steps vary depending on the existing DNS server in use.
-In this tutorial, let's configure the Azure virtual network DNS servers to direct all queries to the Microsoft Entra DS domain controllers.
+In this tutorial, let's configure the Azure virtual network DNS servers to direct all queries to the Domain Services domain controllers.
1. In the Microsoft Entra admin center, select the resource group of the peered virtual network, such as *myResourceGroup*. From the list of resources, choose the peered virtual network, such as *myVnet*.
1. In the left-hand menu of the virtual network window, select **DNS servers**.
-1. By default, a virtual network uses the built-in Azure-provided DNS servers. Choose to use **Custom** DNS servers. Enter the IP addresses for the Microsoft Entra DS domain controllers, which are usually *10.0.2.4* and *10.0.2.5*. Confirm these IP addresses on the **Overview** window of your managed domain in the portal.
+1. By default, a virtual network uses the built-in Azure-provided DNS servers. Choose to use **Custom** DNS servers. Enter the IP addresses for the Domain Services domain controllers, which are usually *10.0.2.4* and *10.0.2.5*. Confirm these IP addresses on the **Overview** window of your managed domain in the portal.
- ![Configure the virtual network DNS servers to use the Microsoft Entra DS domain controllers](./media/tutorial-configure-networking/custom-dns.png)
+ ![Configure the virtual network DNS servers to use the Domain Services domain controllers](./media/tutorial-configure-networking/custom-dns.png)
1. When ready, select **Save**. It takes a few moments to update the DNS servers for the virtual network.
1. To apply the updated DNS settings to the VMs, restart VMs connected to the peered virtual network.
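The same DNS update can be made with the Az PowerShell module instead of the portal. A sketch, assuming the usual default domain controller addresses (confirm them on your managed domain's **Overview** page):

```powershell
# Sketch: point the peered virtual network at the managed domain's
# domain controllers for DNS. The addresses are the usual defaults --
# confirm them for your deployment before applying.
$vnet = Get-AzVirtualNetwork -Name "myVnet" -ResourceGroupName "myResourceGroup"
$vnet.DhcpOptions.DnsServers = @("10.0.2.4", "10.0.2.5")
$vnet | Set-AzVirtualNetwork
```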
When you create a VM that needs to use the managed domain, make sure you select
In this tutorial, you learned how to:

> [!div class="checklist"]
-> * Understand the virtual network connectivity options for domain-joined resources to Microsoft Entra DS
-> * Create an IP address range and additional subnet in the Microsoft Entra DS virtual network
-> * Configure virtual network peering to a network that's separate from Microsoft Entra DS
+> * Understand the virtual network connectivity options for domain-joined resources to Domain Services
+> * Create an IP address range and additional subnet in the Domain Services virtual network
+> * Configure virtual network peering to a network that's separate from Domain Services
To see this managed domain in action, create and join a virtual machine to the domain.
active-directory-domain-services Tutorial Configure Password Hash Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-password-hash-sync.md
Previously updated : 04/03/2023 Last updated : 09/21/2023 #Customer intent: As a server administrator, I want to learn how to enable password hash synchronization with Microsoft Entra Connect to create a hybrid environment using an on-premises AD DS domain.
# Tutorial: Enable password synchronization in Microsoft Entra Domain Services for hybrid environments
-For hybrid environments, a Microsoft Entra tenant can be configured to synchronize with an on-premises Active Directory Domain Services (AD DS) environment using Microsoft Entra Connect. By default, Microsoft Entra Connect doesn't synchronize legacy NT LAN Manager (NTLM) and Kerberos password hashes that are needed for Microsoft Entra Domain Services (Microsoft Entra DS).
+For hybrid environments, a Microsoft Entra tenant can be configured to synchronize with an on-premises Active Directory Domain Services (AD DS) environment using Microsoft Entra Connect. By default, Microsoft Entra Connect doesn't synchronize legacy NT LAN Manager (NTLM) and Kerberos password hashes that are needed for Microsoft Entra Domain Services.
-To use Microsoft Entra DS with accounts synchronized from an on-premises AD DS environment, you need to configure Microsoft Entra Connect to synchronize those password hashes required for NTLM and Kerberos authentication. After Microsoft Entra Connect is configured, an on-premises account creation or password change event also then synchronizes the legacy password hashes to Microsoft Entra ID.
+To use Domain Services with accounts synchronized from an on-premises AD DS environment, you need to configure Microsoft Entra Connect to synchronize those password hashes required for NTLM and Kerberos authentication. After Microsoft Entra Connect is configured, an on-premises account creation or password change event also then synchronizes the legacy password hashes to Microsoft Entra ID.
You don't need to perform these steps if you use cloud-only accounts with no on-premises AD DS environment.
To complete this tutorial, you need the following resources:
Microsoft Entra Connect is used to synchronize objects like user accounts and groups from an on-premises AD DS environment into a Microsoft Entra tenant. As part of the process, password hash synchronization enables accounts to use the same password in the on-premises AD DS environment and Microsoft Entra ID.
-To authenticate users on the managed domain, Microsoft Entra DS needs password hashes in a format that's suitable for NTLM and Kerberos authentication. Microsoft Entra ID doesn't store password hashes in the format that's required for NTLM or Kerberos authentication until you enable Microsoft Entra DS for your tenant. For security reasons, Microsoft Entra ID also doesn't store any password credentials in clear-text form. Therefore, Microsoft Entra ID can't automatically generate these NTLM or Kerberos password hashes based on users' existing credentials.
+To authenticate users on the managed domain, Domain Services needs password hashes in a format that's suitable for NTLM and Kerberos authentication. Microsoft Entra ID doesn't store password hashes in the format that's required for NTLM or Kerberos authentication until you enable Domain Services for your tenant. For security reasons, Microsoft Entra ID also doesn't store any password credentials in clear-text form. Therefore, Microsoft Entra ID can't automatically generate these NTLM or Kerberos password hashes based on users' existing credentials.
-Microsoft Entra Connect can be configured to synchronize the required NTLM or Kerberos password hashes for Microsoft Entra DS. Make sure that you have completed the steps to [enable Microsoft Entra Connect for password hash synchronization][enable-azure-ad-connect]. If you had an existing instance of Microsoft Entra Connect, [download and update to the latest version][azure-ad-connect-download] to make sure you can synchronize the legacy password hashes for NTLM and Kerberos. This functionality isn't available in early releases of Microsoft Entra Connect or with the legacy DirSync tool. Microsoft Entra Connect version *1.1.614.0* or later is required.
+Microsoft Entra Connect can be configured to synchronize the required NTLM or Kerberos password hashes for Domain Services. Make sure that you have completed the steps to [enable Microsoft Entra Connect for password hash synchronization][enable-azure-ad-connect]. If you had an existing instance of Microsoft Entra Connect, [download and update to the latest version][azure-ad-connect-download] to make sure you can synchronize the legacy password hashes for NTLM and Kerberos. This functionality isn't available in early releases of Microsoft Entra Connect or with the legacy DirSync tool. Microsoft Entra Connect version *1.1.614.0* or later is required.
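Before enabling legacy hash synchronization, you can confirm which Microsoft Entra Connect build is installed by querying the standard Windows uninstall registry keys. This is a hedged sketch, not the official check; the display-name match is an assumption and may vary by release:

```powershell
# List installed products matching Azure AD Connect / Microsoft Entra Connect
# and show their versions; compare against the 1.1.614.0 minimum noted above.
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*' |
    Where-Object { $_.DisplayName -match 'Azure AD Connect|Microsoft Entra Connect' } |
    Select-Object DisplayName, DisplayVersion
```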
> [!IMPORTANT]
-> Microsoft Entra Connect should only be installed and configured for synchronization with on-premises AD DS environments. It's not supported to install Microsoft Entra Connect in a Microsoft Entra DS managed domain to synchronize objects back to Microsoft Entra ID.
+> Microsoft Entra Connect should only be installed and configured for synchronization with on-premises AD DS environments. It's not supported to install Microsoft Entra Connect in a Domain Services managed domain to synchronize objects back to Microsoft Entra ID.
## Enable synchronization of password hashes
-With Microsoft Entra Connect installed and configured to synchronize with Microsoft Entra ID, now configure the legacy password hash sync for NTLM and Kerberos. A PowerShell script is used to configure the required settings and then start a full password synchronization to Microsoft Entra ID. When that Microsoft Entra Connect password hash synchronization process is complete, users can sign in to applications through Microsoft Entra DS that use legacy NTLM or Kerberos password hashes.
+With Microsoft Entra Connect installed and configured to synchronize with Microsoft Entra ID, you can now configure legacy password hash synchronization for NTLM and Kerberos. A PowerShell script is used to configure the required settings and then start a full password synchronization to Microsoft Entra ID. When that Microsoft Entra Connect password hash synchronization process is complete, users can sign in through Domain Services to applications that use legacy NTLM or Kerberos password hashes.
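The configuration script referenced here takes roughly the following shape; this is a sketch, and the connector names are placeholders that must be replaced with the case-sensitive names shown in the Synchronization Service Manager:

```powershell
# Placeholder connector names - replace with the case-sensitive names
# listed on the Connectors tab of the Synchronization Service Manager.
$adConnector  = "<on-premises AD DS connector name>"
$aadConnector = "<Microsoft Entra ID connector name>"

Import-Module "C:\Program Files\Microsoft Azure AD Sync\Bin\ADSync\ADSync.psd1"

# Set the ForceFullPasswordSync flag on the on-premises connector.
$c = Get-ADSyncConnector -Name $adConnector
$p = New-Object Microsoft.IdentityManagement.PowerShell.ObjectModel.ConfigurationParameter `
    "Microsoft.Synchronize.ForceFullPasswordSync", String, ConnectorGlobal, $null, $null, $null
$p.Value = 1
$c.GlobalParameters.Remove($p.Name)
$c.GlobalParameters.Add($p)
$c = Add-ADSyncConnector -Connector $c

# Toggle password hash sync off and on to trigger a full synchronization.
Set-ADSyncAADPasswordSyncConfiguration -SourceConnector $adConnector -TargetConnector $aadConnector -Enable $false
Set-ADSyncAADPasswordSyncConfiguration -SourceConnector $adConnector -TargetConnector $aadConnector -Enable $true
```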
1. On the computer with Microsoft Entra Connect installed, from the Start menu, open **Microsoft Entra Connect > Synchronization Service**. 1. Select the **Connectors** tab. The connection information used to establish synchronization between the on-premises AD DS environment and Microsoft Entra ID is listed.
active-directory-domain-services Tutorial Create Forest Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-forest-trust.md
# Tutorial: Create an outbound forest trust to an on-premises domain in Microsoft Entra Domain Services
-You can create a one-way outbound trust from Microsoft Entra DS to one or more on-premises AD DS environments. This trust relationship lets users, applications, and computers authenticate against an on-premises domain from the Microsoft Entra DS managed domain. A forest trust can help users access resources in scenarios such as:
+You can create a one-way outbound trust from Microsoft Entra Domain Services to one or more on-premises AD DS environments. This trust relationship lets users, applications, and computers authenticate against an on-premises domain from the Domain Services managed domain. A forest trust can help users access resources in scenarios such as:
- Environments where you can't synchronize password hashes, or where users exclusively sign in using smart cards and don't know their password. - Hybrid scenarios that still require access to on-premises domains.
-Trusts can be created in any domain. The domain will automatically block synchronization from an on-premises domain for any user accounts that were synchronized to Microsoft Entra DS. This prevents UPN collisions when users authenticate.
+Trusts can be created in any domain. The domain will automatically block synchronization from an on-premises domain for any user accounts that were synchronized to Domain Services. This prevents UPN collisions when users authenticate.
-![Diagram of forest trust from Microsoft Entra DS to on-premises AD DS](./media/tutorial-create-forest-trust/forest-trust-relationship.png)
+![Diagram of forest trust from Domain Services to on-premises AD DS](./media/tutorial-create-forest-trust/forest-trust-relationship.png)
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Configure DNS in an on-premises AD DS environment to support Microsoft Entra DS connectivity
+> * Configure DNS in an on-premises AD DS environment to support Domain Services connectivity
> * Create a one-way inbound forest trust in an on-premises AD DS environment
-> * Create a one-way outbound forest trust in Microsoft Entra DS
+> * Create a one-way outbound forest trust in Domain Services
> * Test and validate the trust relationship for authentication and resource access If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
To complete this tutorial, you need the following resources and privileges:
## Sign in to the Microsoft Entra admin center
-In this tutorial, you create and configure the outbound forest trust from Microsoft Entra DS using the Microsoft Entra admin center. To get started, first sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to modify a Microsoft Entra DS instance.
+In this tutorial, you create and configure the outbound forest trust from Domain Services using the Microsoft Entra admin center. To get started, first sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to modify a Domain Services instance.
## Networking considerations
-The virtual network that hosts the Microsoft Entra DS forest needs network connectivity to your on-premises Active Directory. Applications and services also need network connectivity to the virtual network hosting the Microsoft Entra DS forest. Network connectivity to the Microsoft Entra DS forest must be always on and stable otherwise users may fail to authenticate or access resources.
+The virtual network that hosts the Domain Services forest needs network connectivity to your on-premises Active Directory. Applications and services also need network connectivity to the virtual network hosting the Domain Services forest. Network connectivity to the Domain Services forest must be always on and stable; otherwise, users may fail to authenticate or access resources.
-Before you configure a forest trust in Microsoft Entra DS, make sure your networking between Azure and on-premises environment meets the following requirements:
+Before you configure a forest trust in Domain Services, make sure the network between your Azure and on-premises environments meets the following requirements:
* Use private IP addresses. Don't rely on DHCP with dynamic IP address assignment. * Avoid overlapping IP address spaces to allow virtual network peering and routing to successfully communicate between Azure and on-premises. * An Azure virtual network needs a gateway subnet to configure an [Azure site-to-site (S2S) VPN][vpn-gateway] or [ExpressRoute][expressroute] connection. * Create subnets with enough IP addresses to support your scenario.
-* Make sure Microsoft Entra DS has its own subnet, don't share this virtual network subnet with application VMs and services.
+* Make sure Domain Services has its own subnet; don't share this virtual network subnet with application VMs and services.
* Peered virtual networks are NOT transitive.
- * Azure virtual network peerings must be created between all virtual networks you want to use the Microsoft Entra DS forest trust to the on-premises AD DS environment.
+ * Azure virtual network peerings must be created between all virtual networks where you want to use the Domain Services forest trust to the on-premises AD DS environment.
* Provide continuous network connectivity to your on-premises Active Directory forest. Don't use on-demand connections.
-* Make sure there's continuous name resolution (DNS) between your Microsoft Entra DS forest name and your on-premises Active Directory forest name.
+* Make sure there's continuous name resolution (DNS) between your Domain Services forest name and your on-premises Active Directory forest name.
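One common way to satisfy the name-resolution requirement is a conditional forwarder on the on-premises DNS servers that sends queries for the managed domain name to the Domain Services domain controllers. A hedged sketch; the domain name and IP addresses are examples for a typical deployment:

```powershell
# On an on-premises DNS server: forward queries for the managed domain
# to the two Domain Services domain controller IP addresses.
Add-DnsServerConditionalForwarderZone -Name "aaddscontoso.com" -MasterServers 10.0.1.4,10.0.1.5
```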
## Configure DNS in the on-premises domain
To configure inbound trust on the on-premises AD DS domain, complete the followi
1. Select **Start** > **Administrative Tools** > **Active Directory Domains and Trusts**. 1. Right-click the domain, such as *onprem.contoso.com*, then select **Properties**. 1. Choose **Trusts** tab, then **New Trust**.
-1. Enter the name for Microsoft Entra DS domain name, such as *aaddscontoso.com*, then select **Next**.
+1. Enter the Domain Services domain name, such as *aaddscontoso.com*, then select **Next**.
1. Select the option to create a **Forest trust**, then to create a **One way: incoming** trust. 1. Choose to create the trust for **This domain only**. In the next step, you create the trust in the Microsoft Entra admin center for the managed domain. 1. Choose to use **Forest-wide authentication**, then enter and confirm a trust password. This same password is also entered in the Microsoft Entra admin center in the next section.
If the forest trust is no longer needed for an environment, complete the followi
<a name='create-outbound-forest-trust-in-azure-ad-ds'></a>
-## Create outbound forest trust in Microsoft Entra DS
+## Create outbound forest trust in Domain Services
With the on-premises AD DS domain configured to resolve the managed domain and an inbound forest trust created, now create the outbound forest trust. This outbound forest trust completes the trust relationship between the on-premises AD DS domain and the managed domain.
To create the outbound trust for the managed domain in the Microsoft Entra admin
![Create outbound forest trust in the Microsoft Entra admin center](./media/tutorial-create-forest-trust/portal-create-outbound-trust.png)
-If the forest trust is no longer needed for an environment, complete the following steps to remove it from Microsoft Entra DS:
+If the forest trust is no longer needed for an environment, complete the following steps to remove it from Domain Services:
1. In the Microsoft Entra admin center, search for and select **Microsoft Entra Domain Services**, then select your managed domain, such as *aaddscontoso.com*. 1. From the menu on the left-hand side of the managed domain, select **Trusts**, choose the trust, and click **Remove**.
If the forest trust is no longer needed for an environment, complete the followi
The following common scenarios let you validate that forest trust correctly authenticates users and access to resources:
-* [On-premises user authentication from the Microsoft Entra DS forest](#on-premises-user-authentication-from-the-azure-ad-ds-forest)
-* [Access resources in the Microsoft Entra DS forest using on-premises user](#access-resources-in-the-azure-ad-ds-forest-using-on-premises-user)
+* [On-premises user authentication from the Domain Services forest](#on-premises-user-authentication-from-the-azure-ad-ds-forest)
+* [Access resources in the Domain Services forest using on-premises user](#access-resources-in-the-azure-ad-ds-forest-using-on-premises-user)
* [Enable file and printer sharing](#enable-file-and-printer-sharing) * [Create a security group and add members](#create-a-security-group-and-add-members) * [Create a file share for cross-forest access](#create-a-file-share-for-cross-forest-access)
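Before working through these scenarios, a quick sanity check from a domain-joined VM can confirm the trust is visible at all; `nltest` ships with Windows Server, and the exact output varies by environment:

```powershell
# List the domains trusted by this machine's domain. The on-premises
# forest (for example, onprem.contoso.com) should appear once the
# outbound trust is in place.
nltest /trusted_domains
```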
The following common scenarios let you validate that forest trust correctly auth
<a name='on-premises-user-authentication-from-the-azure-ad-ds-forest'></a>
-### On-premises user authentication from the Microsoft Entra DS forest
+### On-premises user authentication from the Domain Services forest
You should have a Windows Server virtual machine joined to the managed domain. Use this virtual machine to test that your on-premises user can authenticate on a virtual machine. If needed, [create a Windows VM and join it to the managed domain][join-windows-vm].
-1. Connect to the Windows Server VM joined to the Microsoft Entra DS forest using [Azure Bastion](../bastion/bastion-overview.md) and your Microsoft Entra DS administrator credentials.
+1. Connect to the Windows Server VM joined to the Domain Services forest using [Azure Bastion](../bastion/bastion-overview.md) and your Domain Services administrator credentials.
1. Open a command prompt and use the `whoami` command to show the distinguished name of the currently authenticated user: ```console
whoami /fqdn
```
You should have Windows Server virtual machine joined to the managed domain. Use
<a name='access-resources-in-the-azure-ad-ds-forest-using-on-premises-user'></a>
-### Access resources in the Microsoft Entra DS forest using on-premises user
+### Access resources in the Domain Services forest using on-premises user
-Using the Windows Server VM joined to the Microsoft Entra DS forest, you can test the scenario where users can access resources hosted in the forest when they authenticate from computers in the on-premises domain with users from the on-premises domain. The following examples show you how to create and test various common scenarios.
+Using the Windows Server VM joined to the Domain Services forest, you can test the scenario where on-premises users, authenticating from computers in the on-premises domain, access resources hosted in the Domain Services forest. The following examples show you how to create and test various common scenarios.
#### Enable file and printer sharing
-1. Connect to the Windows Server VM joined to the Microsoft Entra DS forest using [Azure Bastion](../bastion/bastion-overview.md) and your Microsoft Entra DS administrator credentials.
+1. Connect to the Windows Server VM joined to the Domain Services forest using [Azure Bastion](../bastion/bastion-overview.md) and your Domain Services administrator credentials.
1. Open **Windows Settings**, then search for and select **Network and Sharing Center**. 1. Choose the option for **Change advanced sharing** settings.
Using the Windows Server VM joined to the Microsoft Entra DS forest, you can tes
1. Type *Domain Users* in the **Enter the object names to select** box. Select **Check Names**, provide credentials for the on-premises Active Directory, then select **OK**. > [!NOTE]
- > You must provide credentials because the trust relationship is only one way. This means users from the Microsoft Entra DS managed domain can't access resources or search for users or groups in the trusted (on-premises) domain.
+ > You must provide credentials because the trust relationship is only one way. This means users from the Domain Services managed domain can't access resources or search for users or groups in the trusted (on-premises) domain.
1. The **Domain Users** group from your on-premises Active Directory should be a member of the **FileServerAccess** group. Select **OK** to save the group and close the window. #### Create a file share for cross-forest access
-1. On the Windows Server VM joined to the Microsoft Entra DS forest, create a folder and provide name such as *CrossForestShare*.
+1. On the Windows Server VM joined to the Domain Services forest, create a folder and provide a name, such as *CrossForestShare*.
1. Right-select the folder and choose **Properties**. 1. Select the **Security** tab, then choose **Edit**. 1. In the *Permissions for CrossForestShare* dialog box, select **Add**.
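The same share and its cross-forest permissions can be scripted instead of using the dialogs. A hedged PowerShell sketch; the path, share name, and the *FileServerAccess* group follow the examples in this section:

```powershell
# Create the folder, then share it and grant change access to the
# local domain group that contains the on-premises Domain Users group.
New-Item -Path "C:\CrossForestShare" -ItemType Directory
New-SmbShare -Name "CrossForestShare" -Path "C:\CrossForestShare" -ChangeAccess "FileServerAccess"
```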
Using the Windows Server VM joined to the Microsoft Entra DS forest, you can tes
In this tutorial, you learned how to: > [!div class="checklist"]
-> * Configure DNS in an on-premises AD DS environment to support Microsoft Entra DS connectivity
+> * Configure DNS in an on-premises AD DS environment to support Domain Services connectivity
> * Create a one-way inbound forest trust in an on-premises AD DS environment
-> * Create a one-way outbound forest trust in Microsoft Entra DS
+> * Create a one-way outbound forest trust in Domain Services
> * Test and validate the trust relationship for authentication and resource access
-For more conceptual information about forest in Microsoft Entra DS, see [How do forest trusts work in Microsoft Entra DS?][concepts-trust].
+For more conceptual information about forest trusts in Domain Services, see [How do forest trusts work in Domain Services?][concepts-trust].
<!-- INTERNAL LINKS --> [concepts-trust]: concepts-forest-trust.md
active-directory-domain-services Tutorial Create Instance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance-advanced.md
# Tutorial: Create and configure a Microsoft Entra Domain Services managed domain with advanced configuration options
-Microsoft Entra Domain Services (Microsoft Entra DS) provides managed domain services such as domain join, group policy, LDAP, Kerberos/NTLM authentication that is fully compatible with Windows Server Active Directory. You consume these domain services without deploying, managing, and patching domain controllers yourself. Microsoft Entra DS integrates with your existing Microsoft Entra tenant. This integration lets users sign in using their corporate credentials, and you can use existing groups and user accounts to secure access to resources.
+Microsoft Entra Domain Services provides managed domain services, such as domain join, group policy, LDAP, and Kerberos/NTLM authentication, that are fully compatible with Windows Server Active Directory. You consume these domain services without deploying, managing, and patching domain controllers yourself. Domain Services integrates with your existing Microsoft Entra tenant. This integration lets users sign in using their corporate credentials, and you can use existing groups and user accounts to secure access to resources.
-You can [create a managed domain using default configuration options][tutorial-create-instance] for networking and synchronization, or manually define these settings. This tutorial shows you how to define those advanced configuration options to create and configure a Microsoft Entra DS managed domain using the Microsoft Entra admin center.
+You can [create a managed domain using default configuration options][tutorial-create-instance] for networking and synchronization, or manually define these settings. This tutorial shows you how to define those advanced configuration options to create and configure a Domain Services managed domain using the Microsoft Entra admin center.
In this tutorial, you learn how to:
To complete this tutorial, you need the following resources and privileges:
* If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * A Microsoft Entra tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory. * If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Microsoft Entra DS.
-* You need [Domain Services Contributor](../role-based-access-control/built-in-roles.md#domain-services-contributor) Azure role to create the required Microsoft Entra DS resources.
+* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services.
+* You need [Domain Services Contributor](../role-based-access-control/built-in-roles.md#domain-services-contributor) Azure role to create the required Domain Services resources.
-Although not required for Microsoft Entra DS, it's recommended to [configure self-service password reset (SSPR)][configure-sspr] for the Microsoft Entra tenant. Users can change their password without SSPR, but SSPR helps if they forget their password and need to reset it.
+Although not required for Domain Services, it's recommended to [configure self-service password reset (SSPR)][configure-sspr] for the Microsoft Entra tenant. Users can change their password without SSPR, but SSPR helps if they forget their password and need to reset it.
> [!IMPORTANT] > After you create a managed domain, you can't move it to a different subscription, resource group, or region. Take care to select the most appropriate subscription, resource group, and region when you deploy the managed domain.
The following DNS name restrictions also apply:
Complete the fields in the *Basics* window of the Microsoft Entra admin center to create a managed domain: 1. Enter a **DNS domain name** for your managed domain, taking into consideration the previous points.
-1. Choose the Azure **Location** in which the managed domain should be created. If you choose a region that supports Availability Zones, the Microsoft Entra DS resources are distributed across zones for additional redundancy.
+1. Choose the Azure **Location** in which the managed domain should be created. If you choose a region that supports Availability Zones, the Domain Services resources are distributed across zones for additional redundancy.
> [!TIP] > Availability Zones are unique physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. >
- > There's nothing for you to configure for Microsoft Entra DS to be distributed across zones. The Azure platform automatically handles the zone distribution of resources. For more information and to see region availability, see [What are Availability Zones in Azure?][availability-zones]
+ > There's nothing for you to configure for Domain Services to be distributed across zones. The Azure platform automatically handles the zone distribution of resources. For more information and to see region availability, see [What are Availability Zones in Azure?][availability-zones]
-1. The **SKU** determines the performance and backup frequency. You can change the SKU after the managed domain has been created if your business demands or requirements change. For more information, see [Microsoft Entra DS SKU concepts][concepts-sku].
+1. The **SKU** determines the performance and backup frequency. You can change the SKU after the managed domain has been created if your business demands or requirements change. For more information, see [Domain Services SKU concepts][concepts-sku].
For this tutorial, select the *Standard* SKU. 1. A *forest* is a logical construct used by Active Directory Domain Services to group one or more domains.
Complete the fields in the *Basics* window of the Microsoft Entra admin center t
## Create and configure the virtual network
-To provide connectivity, an Azure virtual network and a dedicated subnet are needed. Microsoft Entra DS is enabled in this virtual network subnet. In this tutorial, you create a virtual network, though you could instead choose to use an existing virtual network. In either approach, you must create a dedicated subnet for use by Microsoft Entra DS.
+To provide connectivity, an Azure virtual network and a dedicated subnet are needed. Domain Services is enabled in this virtual network subnet. In this tutorial, you create a virtual network, though you could instead choose to use an existing virtual network. In either approach, you must create a dedicated subnet for use by Domain Services.
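If you prefer to script the network instead of using the portal wizard, creating the virtual network with a dedicated Domain Services subnet might look like this sketch using the Az PowerShell module; the names, region, and address ranges are examples, not requirements:

```powershell
# Dedicated subnet for Domain Services - don't place other VMs here.
$subnet = New-AzVirtualNetworkSubnetConfig -Name "DomainServices" -AddressPrefix "10.0.1.0/24"

New-AzVirtualNetwork -Name "myVnet" -ResourceGroupName "myResourceGroup" `
    -Location "westus" -AddressPrefix "10.0.1.0/24" -Subnet $subnet
```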
Some considerations for this dedicated virtual network subnet include the following areas:
-* The subnet must have at least 3-5 available IP addresses in its address range to support the Microsoft Entra DS resources.
-* Don't select the *Gateway* subnet for deploying Microsoft Entra DS. It's not supported to deploy Microsoft Entra DS into a *Gateway* subnet.
+* The subnet must have at least 3-5 available IP addresses in its address range to support the Domain Services resources.
+* Don't select the *Gateway* subnet for deploying Domain Services. It's not supported to deploy Domain Services into a *Gateway* subnet.
* Don't deploy any other virtual machines to the subnet. Applications and VMs often use network security groups to secure connectivity. Running these workloads in a separate subnet lets you apply those network security groups without disrupting connectivity to your managed domain. For more information on how to plan and configure the virtual network, see [networking considerations for Microsoft Entra Domain Services][network-considerations]. Complete the fields in the *Network* window as follows:
-1. On the **Network** page, choose a virtual network to deploy Microsoft Entra DS into from the drop-down menu, or select **Create new**.
+1. On the **Network** page, choose a virtual network to deploy Domain Services into from the drop-down menu, or select **Create new**.
1. If you choose to create a virtual network, enter a name for the virtual network, such as *myVnet*, then provide an address range, such as *10.0.1.0/24*. 1. Create a dedicated subnet with a clear name, such as *DomainServices*. Provide an address range, such as *10.0.1.0/24*. [ ![Create a virtual network and subnet for use with Microsoft Entra Domain Services](./media/tutorial-create-instance-advanced/create-vnet.png)](./media/tutorial-create-instance-advanced/create-vnet-expanded.png#lightbox)
- Make sure to pick an address range that is within your private IP address range. IP address ranges you don't own that are in the public address space cause errors within Microsoft Entra DS.
+ Make sure to pick an address range that is within your private IP address range. IP address ranges you don't own that are in the public address space cause errors within Domain Services.
1. Select a virtual network subnet, such as *DomainServices*. 1. When ready, choose **Next - Administration**. ## Configure an administrative group
-A special administrative group named *AAD DC Administrators* is used for management of the Microsoft Entra DS domain. Members of this group are granted administrative permissions on VMs that are domain-joined to the managed domain. On domain-joined VMs, this group is added to the local administrators group. Members of this group can also use Remote Desktop to connect remotely to domain-joined VMs.
+A special administrative group named *AAD DC Administrators* is used for management of the Domain Services domain. Members of this group are granted administrative permissions on VMs that are domain-joined to the managed domain. On domain-joined VMs, this group is added to the local administrators group. Members of this group can also use Remote Desktop to connect remotely to domain-joined VMs.
> [!IMPORTANT]
-> You don't have *Domain Administrator* or *Enterprise Administrator* permissions on a managed domain using Microsoft Entra DS. These permissions are reserved by the service and aren't made available to users within the tenant.
+> You don't have *Domain Administrator* or *Enterprise Administrator* permissions on a managed domain using Domain Services. These permissions are reserved by the service and aren't made available to users within the tenant.
> > Instead, the *AAD DC Administrators* group lets you perform some privileged operations. These operations include belonging to the administration group on domain-joined VMs, and configuring Group Policy.
The wizard automatically creates the *AAD DC Administrators* group in your Micro
## Configure synchronization
-Microsoft Entra DS lets you synchronize *all* users and groups available in Microsoft Entra ID, or a *scoped* synchronization of only specific groups. You can change the synchronize scope now, or once the managed domain is deployed. For more information, see [Microsoft Entra Domain Services scoped synchronization][scoped-sync].
+Domain Services lets you synchronize *all* users and groups available in Microsoft Entra ID, or a *scoped* synchronization of only specific groups. You can change the synchronization scope now, or once the managed domain is deployed. For more information, see [Microsoft Entra Domain Services scoped synchronization][scoped-sync].
1. For this tutorial, choose to synchronize **All** users and groups. This synchronization choice is the default option.
Microsoft Entra DS lets you synchronize *all* users and groups available in Micr
On the **Summary** page of the wizard, review the configuration settings for your managed domain. You can go back to any step of the wizard to make changes. To redeploy a managed domain to a different Microsoft Entra tenant in a consistent way using these configuration options, you can also **Download a template for automation**.
-1. To create the managed domain, select **Create**. A note is displayed that certain configuration options like DNS name or virtual network can't be changed once the Microsoft Entra DS managed has been created. To continue, select **OK**.
-1. The process of provisioning your managed domain can take up to an hour. A notification is displayed in the portal that shows the progress of your Microsoft Entra DS deployment. Select the notification to see detailed progress for the deployment.
+1. To create the managed domain, select **Create**. A note is displayed that certain configuration options like DNS name or virtual network can't be changed once the Domain Services managed domain has been created. To continue, select **OK**.
+1. The process of provisioning your managed domain can take up to an hour. A notification is displayed in the portal that shows the progress of your Domain Services deployment. Select the notification to see detailed progress for the deployment.
![Notification in the Microsoft Entra admin center of the deployment in progress](./media/tutorial-create-instance-advanced/deployment-in-progress.png)
On the **Summary** page of the wizard, review the configuration settings for you
![Domain Services status once successfully provisioned](./media/tutorial-create-instance-advanced/successfully-provisioned.png) > [!IMPORTANT]
-> The managed domain is associated with your Microsoft Entra tenant. During the provisioning process, Microsoft Entra DS creates two Enterprise Applications named *Domain Controller Services* and *AzureActiveDirectoryDomainControllerServices* in the Microsoft Entra tenant. These Enterprise Applications are needed to service your managed domain. Don't delete these applications.
+> The managed domain is associated with your Microsoft Entra tenant. During the provisioning process, Domain Services creates two Enterprise Applications named *Domain Controller Services* and *AzureActiveDirectoryDomainControllerServices* in the Microsoft Entra tenant. These Enterprise Applications are needed to service your managed domain. Don't delete these applications.
## Update DNS settings for the Azure virtual network
-With Microsoft Entra DS successfully deployed, now configure the virtual network to allow other connected VMs and applications to use the managed domain. To provide this connectivity, update the DNS server settings for your virtual network to point to the two IP addresses where the managed domain is deployed.
+With Domain Services successfully deployed, now configure the virtual network to allow other connected VMs and applications to use the managed domain. To provide this connectivity, update the DNS server settings for your virtual network to point to the two IP addresses where the managed domain is deployed.
1. The **Overview** tab for your managed domain shows some **Required configuration steps**. The first configuration step is to update DNS server settings for your virtual network. Once the DNS settings are correctly configured, this step is no longer shown.
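The DNS update described above can also be scripted with the Azure CLI. A minimal sketch, assuming hypothetical names *myResourceGroup* and *aadds-vnet*, and placeholder IP addresses (use the two domain controller IPs shown on your managed domain's **Overview** page):

```shell
# Point the virtual network's DNS servers at the managed domain's two
# domain controller IP addresses. The IPs below are placeholders.
az network vnet update \
  --resource-group myResourceGroup \
  --name aadds-vnet \
  --dns-servers 10.0.2.4 10.0.2.5
```

VMs already connected to the virtual network only pick up the new DNS settings after a restart or DHCP lease renewal.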
With Microsoft Entra DS successfully deployed, now configure the virtual network
<a name='enable-user-accounts-for-azure-ad-ds'></a>
-## Enable user accounts for Microsoft Entra DS
+## Enable user accounts for Domain Services
-To authenticate users on the managed domain, Microsoft Entra DS needs password hashes in a format that's suitable for NT LAN Manager (NTLM) and Kerberos authentication. Microsoft Entra ID doesn't generate or store password hashes in the format that's required for NTLM or Kerberos authentication until you enable Microsoft Entra DS for your tenant. For security reasons, Microsoft Entra ID also doesn't store any password credentials in clear-text form. Therefore, Microsoft Entra ID can't automatically generate these NTLM or Kerberos password hashes based on users' existing credentials.
+To authenticate users on the managed domain, Domain Services needs password hashes in a format that's suitable for NT LAN Manager (NTLM) and Kerberos authentication. Microsoft Entra ID doesn't generate or store password hashes in the format that's required for NTLM or Kerberos authentication until you enable Domain Services for your tenant. For security reasons, Microsoft Entra ID also doesn't store any password credentials in clear-text form. Therefore, Microsoft Entra ID can't automatically generate these NTLM or Kerberos password hashes based on users' existing credentials.
> [!NOTE] > Once appropriately configured, the usable password hashes are stored in the managed domain. If you delete the managed domain, any password hashes stored at that point are also deleted. > > Synchronized credential information in Microsoft Entra ID can't be re-used if you later create a managed domain - you must reconfigure the password hash synchronization to store the password hashes again. Previously domain-joined VMs or users won't be able to immediately authenticate - Microsoft Entra ID needs to generate and store the password hashes in the new managed domain. >
-> For more information, see [Password hash sync process for Microsoft Entra DS and Microsoft Entra Connect][password-hash-sync-process].
+> For more information, see [Password hash sync process for Domain Services and Microsoft Entra Connect][password-hash-sync-process].
The steps to generate and store these password hashes are different for cloud-only user accounts created in Microsoft Entra ID versus user accounts that are synchronized from your on-premises directory using Microsoft Entra Connect.
In this tutorial, let's work with a basic cloud-only user account. For more info
> [!TIP] > If your Microsoft Entra tenant has a combination of cloud-only users and users from your on-premises AD, you need to complete both sets of steps.
-For cloud-only user accounts, users must change their passwords before they can use Microsoft Entra DS. This password change process causes the password hashes for Kerberos and NTLM authentication to be generated and stored in Microsoft Entra ID. The account isn't synchronized from Microsoft Entra ID to Microsoft Entra DS until the password is changed. Either expire the passwords for all cloud users in the tenant who need to use Microsoft Entra DS, which forces a password change on next sign-in, or instruct cloud users to manually change their passwords. For this tutorial, let's manually change a user password.
+For cloud-only user accounts, users must change their passwords before they can use Domain Services. This password change process causes the password hashes for Kerberos and NTLM authentication to be generated and stored in Microsoft Entra ID. The account isn't synchronized from Microsoft Entra ID to Domain Services until the password is changed. Either expire the passwords for all cloud users in the tenant who need to use Domain Services, which forces a password change on next sign-in, or instruct cloud users to manually change their passwords. For this tutorial, let's manually change a user password.
Before a user can reset their password, the Microsoft Entra tenant must be [configured for self-service password reset][configure-sspr].
To change the password for a cloud-only user, the user must complete the followi
1. On the **Change password** page, enter your existing (old) password, then enter and confirm a new password. 1. Select **Submit**.
-It takes a few minutes after you've changed your password for the new password to be usable in Microsoft Entra DS and to successfully sign in to computers joined to the managed domain.
+It takes a few minutes after you've changed your password for the new password to be usable in Domain Services and to successfully sign in to computers joined to the managed domain.
## Next steps
In this tutorial, you learned how to:
> * Configure DNS and virtual network settings for a managed domain > * Create a managed domain > * Add administrative users to domain management
-> * Enable user accounts for Microsoft Entra DS and generate password hashes
+> * Enable user accounts for Domain Services and generate password hashes
To see this managed domain in action, create and join a virtual machine to the domain.
active-directory-domain-services Tutorial Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md
# Tutorial: Create and configure a Microsoft Entra Domain Services managed domain
-Microsoft Entra Domain Services (Microsoft Entra DS) provides managed domain services such as domain join, group policy, LDAP, Kerberos/NTLM authentication that is fully compatible with Windows Server Active Directory. You consume these domain services without deploying, managing, and patching domain controllers yourself. Microsoft Entra DS integrates with your existing Microsoft Entra tenant. This integration lets users sign in using their corporate credentials, and you can use existing groups and user accounts to secure access to resources.
+Microsoft Entra Domain Services provides managed domain services such as domain join, group policy, LDAP, and Kerberos/NTLM authentication that is fully compatible with Windows Server Active Directory. You consume these domain services without deploying, managing, and patching domain controllers yourself. Domain Services integrates with your existing Microsoft Entra tenant. This integration lets users sign in using their corporate credentials, and you can use existing groups and user accounts to secure access to resources.
-You can create a managed domain using default configuration options for networking and synchronization, or [manually define these settings][tutorial-create-instance-advanced]. This tutorial shows you how to use default options to create and configure a Microsoft Entra DS managed domain using the Microsoft Entra admin center.
+You can create a managed domain using default configuration options for networking and synchronization, or [manually define these settings][tutorial-create-instance-advanced]. This tutorial shows you how to use default options to create and configure a Domain Services managed domain using the Microsoft Entra admin center.
In this tutorial, you learn how to:
To complete this tutorial, you need the following resources and privileges:
* If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * A Microsoft Entra tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory. * If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Microsoft Entra DS.
-* You need [Domain Services Contributor](../role-based-access-control/built-in-roles.md#domain-services-contributor) Azure role to create the required Microsoft Entra DS resources.
+* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services.
+* You need [Domain Services Contributor](../role-based-access-control/built-in-roles.md#domain-services-contributor) Azure role to create the required Domain Services resources.
* A virtual network with DNS servers that can query necessary infrastructure such as storage. DNS servers that can't perform general internet queries might block the ability to create a managed domain.
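To check this prerequisite, you can inspect the DNS servers currently configured on a virtual network. A sketch with the Azure CLI, assuming the hypothetical names *myResourceGroup* and *aadds-vnet*; empty output means the network uses the Azure-provided resolver:

```shell
# List the custom DNS servers configured on the virtual network.
# An empty result means the Azure-provided DNS resolver is in use.
az network vnet show \
  --resource-group myResourceGroup \
  --name aadds-vnet \
  --query "dhcpOptions.dnsServers"
```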
-Although not required for Microsoft Entra DS, it's recommended to [configure self-service password reset (SSPR)][configure-sspr] for the Microsoft Entra tenant. Users can change their password without SSPR, but SSPR helps if they forget their password and need to reset it.
+Although not required for Domain Services, it's recommended to [configure self-service password reset (SSPR)][configure-sspr] for the Microsoft Entra tenant. Users can change their password without SSPR, but SSPR helps if they forget their password and need to reset it.
> [!IMPORTANT] > You can't move the managed domain to a different subscription, resource group, or region after you create it. Take care to select the most appropriate subscription, resource group, and region when you deploy the managed domain.
The following DNS name restrictions also apply:
Complete the fields in the *Basics* window of the Microsoft Entra admin center to create a managed domain: 1. Enter a **DNS domain name** for your managed domain, taking into consideration the previous points.
-1. Choose the Azure **Location** in which the managed domain should be created. If you choose a region that supports Azure Availability Zones, the Microsoft Entra DS resources are distributed across zones for additional redundancy.
+1. Choose the Azure **Location** in which the managed domain should be created. If you choose a region that supports Azure Availability Zones, the Domain Services resources are distributed across zones for additional redundancy.
> [!TIP] > Availability Zones are unique physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. >
- > There's nothing for you to configure for Microsoft Entra DS to be distributed across zones. The Azure platform automatically handles the zone distribution of resources. For more information and to see region availability, see [What are Availability Zones in Azure?][availability-zones]
+ > There's nothing for you to configure for Domain Services to be distributed across zones. The Azure platform automatically handles the zone distribution of resources. For more information and to see region availability, see [What are Availability Zones in Azure?][availability-zones]
-1. The **SKU** determines the performance and backup frequency. You can change the SKU after the managed domain has been created if your business demands or requirements change. For more information, see [Microsoft Entra DS SKU concepts][concepts-sku].
+1. The **SKU** determines the performance and backup frequency. You can change the SKU after the managed domain has been created if your business demands or requirements change. For more information, see [Domain Services SKU concepts][concepts-sku].
For this tutorial, select the *Standard* SKU. 1. A *forest* is a logical construct used by Active Directory Domain Services to group one or more domains.
Select **Review + create** to accept these default configuration options.
On the **Summary** page of the wizard, review the configuration settings for your managed domain. You can go back to any step of the wizard to make changes. To redeploy a managed domain to a different Microsoft Entra tenant in a consistent way using these configuration options, you can also **Download a template for automation**.
-1. To create the managed domain, select **Create**. A note is displayed that certain configuration options such as DNS name or virtual network can't be changed once the Microsoft Entra DS managed has been created. To continue, select **OK**.
-1. The process of provisioning your managed domain can take up to an hour. A notification is displayed in the portal that shows the progress of your Microsoft Entra DS deployment. Select the notification to see detailed progress for the deployment.
+1. To create the managed domain, select **Create**. A note is displayed that certain configuration options such as DNS name or virtual network can't be changed once the Domain Services managed domain has been created. To continue, select **OK**.
+1. The process of provisioning your managed domain can take up to an hour. A notification is displayed in the portal that shows the progress of your Domain Services deployment. Select the notification to see detailed progress for the deployment.
![Notification in the Microsoft Entra admin center of the deployment in progress](./media/tutorial-create-instance/deployment-in-progress.png)
On the **Summary** page of the wizard, review the configuration settings for you
![Domain Services status once successfully provisioned](./media/tutorial-create-instance/successfully-provisioned.png) > [!IMPORTANT]
-> The managed domain is associated with your Microsoft Entra tenant. During the provisioning process, Microsoft Entra DS creates two Enterprise Applications named *Domain Controller Services* and *AzureActiveDirectoryDomainControllerServices* in the Microsoft Entra tenant. These Enterprise Applications are needed to service your managed domain. Don't delete these applications.
+> The managed domain is associated with your Microsoft Entra tenant. During the provisioning process, Domain Services creates two Enterprise Applications named *Domain Controller Services* and *AzureActiveDirectoryDomainControllerServices* in the Microsoft Entra tenant. These Enterprise Applications are needed to service your managed domain. Don't delete these applications.
## Update DNS settings for the Azure virtual network
-With Microsoft Entra DS successfully deployed, now configure the virtual network to allow other connected VMs and applications to use the managed domain. To provide this connectivity, update the DNS server settings for your virtual network to point to the two IP addresses where the managed domain is deployed.
+With Domain Services successfully deployed, now configure the virtual network to allow other connected VMs and applications to use the managed domain. To provide this connectivity, update the DNS server settings for your virtual network to point to the two IP addresses where the managed domain is deployed.
1. The **Overview** tab for your managed domain shows some **Required configuration steps**. The first configuration step is to update DNS server settings for your virtual network. Once the DNS settings are correctly configured, this step is no longer shown.
With Microsoft Entra DS successfully deployed, now configure the virtual network
<a name='enable-user-accounts-for-azure-ad-ds'></a>
-## Enable user accounts for Microsoft Entra DS
+## Enable user accounts for Domain Services
-To authenticate users on the managed domain, Microsoft Entra DS needs password hashes in a format that's suitable for NT LAN Manager (NTLM) and Kerberos authentication. Microsoft Entra ID doesn't generate or store password hashes in the format that's required for NTLM or Kerberos authentication until you enable Microsoft Entra DS for your tenant. For security reasons, Microsoft Entra ID also doesn't store any password credentials in clear-text form. Therefore, Microsoft Entra ID can't automatically generate these NTLM or Kerberos password hashes based on users' existing credentials.
+To authenticate users on the managed domain, Domain Services needs password hashes in a format that's suitable for NT LAN Manager (NTLM) and Kerberos authentication. Microsoft Entra ID doesn't generate or store password hashes in the format that's required for NTLM or Kerberos authentication until you enable Domain Services for your tenant. For security reasons, Microsoft Entra ID also doesn't store any password credentials in clear-text form. Therefore, Microsoft Entra ID can't automatically generate these NTLM or Kerberos password hashes based on users' existing credentials.
> [!NOTE] > Once appropriately configured, the usable password hashes are stored in the managed domain. If you delete the managed domain, any password hashes stored at that point are also deleted. > > Synchronized credential information in Microsoft Entra ID can't be re-used if you later create a managed domain - you must reconfigure the password hash synchronization to store the password hashes again. Previously domain-joined VMs or users won't be able to immediately authenticate - Microsoft Entra ID needs to generate and store the password hashes in the new managed domain. >
-> [Microsoft Entra Connect Cloud Sync is not supported with Microsoft Entra DS](../active-directory/cloud-sync/what-is-cloud-sync.md#comparison-between-azure-ad-connect-and-cloud-sync). On-premises users need to be synced using Microsoft Entra Connect in order to be able to access domain-joined VMs. For more information, see [Password hash sync process for Microsoft Entra DS and Microsoft Entra Connect][password-hash-sync-process].
+> [Microsoft Entra Connect Cloud Sync is not supported with Domain Services](../active-directory/cloud-sync/what-is-cloud-sync.md#comparison-between-azure-ad-connect-and-cloud-sync). On-premises users need to be synced using Microsoft Entra Connect in order to be able to access domain-joined VMs. For more information, see [Password hash sync process for Domain Services and Microsoft Entra Connect][password-hash-sync-process].
The steps to generate and store these password hashes are different for cloud-only user accounts created in Microsoft Entra ID versus user accounts that are synchronized from your on-premises directory using Microsoft Entra Connect.
A cloud-only user account is an account that was created in your Microsoft Entra
> [!TIP] > If your Microsoft Entra tenant has a combination of cloud-only users and users from your on-premises AD, you need to complete both sets of steps.
-For cloud-only user accounts, users must change their passwords before they can use Microsoft Entra DS. This password change process causes the password hashes for Kerberos and NTLM authentication to be generated and stored in Microsoft Entra ID. The account isn't synchronized from Microsoft Entra ID to Microsoft Entra DS until the password is changed. Either expire the passwords for all cloud users in the tenant who need to use Microsoft Entra DS, which forces a password change on next sign-in, or instruct cloud users to manually change their passwords. For this tutorial, let's manually change a user password.
+For cloud-only user accounts, users must change their passwords before they can use Domain Services. This password change process causes the password hashes for Kerberos and NTLM authentication to be generated and stored in Microsoft Entra ID. The account isn't synchronized from Microsoft Entra ID to Domain Services until the password is changed. Either expire the passwords for all cloud users in the tenant who need to use Domain Services, which forces a password change on next sign-in, or instruct cloud users to manually change their passwords. For this tutorial, let's manually change a user password.
Before a user can reset their password, the Microsoft Entra tenant must be [configured for self-service password reset][configure-sspr].
To change the password for a cloud-only user, the user must complete the followi
1. On the **Change password** page, enter your existing (old) password, then enter and confirm a new password. 1. Select **Submit**.
-It takes a few minutes after you've changed your password for the new password to be usable in Microsoft Entra DS and to successfully sign in to computers joined to the managed domain.
+It takes a few minutes after you've changed your password for the new password to be usable in Domain Services and to successfully sign in to computers joined to the managed domain.
## Next steps
In this tutorial, you learned how to:
> * Understand DNS requirements for a managed domain > * Create a managed domain > * Add administrative users to domain management
-> * Enable user accounts for Microsoft Entra DS and generate password hashes
+> * Enable user accounts for Domain Services and generate password hashes
Before you domain-join VMs and deploy applications that use the managed domain, configure an Azure virtual network for application workloads.
active-directory-domain-services Tutorial Create Management Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-management-vm.md
# Tutorial: Create a management VM to configure and administer a Microsoft Entra Domain Services managed domain
-Microsoft Entra Domain Services (Microsoft Entra DS) provides managed domain services such as domain join, group policy, LDAP, and Kerberos/NTLM authentication that is fully compatible with Windows Server Active Directory. You administer this managed domain using the same Remote Server Administration Tools (RSAT) as with an on-premises Active Directory Domain Services domain. As Microsoft Entra DS is a managed service, there are some administrative tasks that you can't perform, such as using remote desktop protocol (RDP) to connect to the domain controllers.
+Microsoft Entra Domain Services provides managed domain services such as domain join, group policy, LDAP, and Kerberos/NTLM authentication that is fully compatible with Windows Server Active Directory. You administer this managed domain using the same Remote Server Administration Tools (RSAT) as with an on-premises Active Directory Domain Services domain. As Domain Services is a managed service, there are some administrative tasks that you can't perform, such as using remote desktop protocol (RDP) to connect to the domain controllers.
-This tutorial shows you how to configure a Windows Server VM in Azure and install the required tools to administer a Microsoft Entra DS managed domain.
+This tutorial shows you how to configure a Windows Server VM in Azure and install the required tools to administer a Domain Services managed domain.
In this tutorial, you learn how to:
To complete this tutorial, you need the following resources and privileges:
* A Windows Server VM that is joined to the managed domain. * If needed, see the previous tutorial to [create a Windows Server VM and join it to a managed domain][create-join-windows-vm]. * A user account that's a member of the *Microsoft Entra DC administrators* group in your Microsoft Entra tenant.
-* An Azure Bastion host deployed in your Microsoft Entra DS virtual network.
+* An Azure Bastion host deployed in your Domain Services virtual network.
* If needed, [create an Azure Bastion host][azure-bastion]. ## Sign in to the Microsoft Entra admin center
In this tutorial, you create and configure a management VM using the Microsoft E
<a name='available-administrative-tasks-in-azure-ad-ds'></a>
-## Available administrative tasks in Microsoft Entra DS
+## Available administrative tasks in Domain Services
-Microsoft Entra DS provides a managed domain for your users, applications, and services to consume. This approach changes some of the available management tasks you can do, and what privileges you have within the managed domain. These tasks and permissions may be different than what you experience with a regular on-premises Active Directory Domain Services environment. You also can't connect to domain controllers on the managed domain using Remote Desktop.
+Domain Services provides a managed domain for your users, applications, and services to consume. This approach changes some of the available management tasks you can do, and what privileges you have within the managed domain. These tasks and permissions may be different than what you experience with a regular on-premises Active Directory Domain Services environment. You also can't connect to domain controllers on the managed domain using Remote Desktop.
### Administrative tasks you can perform on a managed domain
With the administrative tools installed, let's see how to use them to administer
In the following example output, a user account named *Contoso Admin* and a group for *AAD DC Administrators* are shown in this container.
- ![View the list of Microsoft Entra DS domain users in the Active Directory Administrative Center](./media/tutorial-create-management-vm/list-azure-ad-users.png)
+ ![View the list of Domain Services domain users in the Active Directory Administrative Center](./media/tutorial-create-management-vm/list-azure-ad-users.png)
1. To see the computers that are joined to the managed domain, select the **AADDC Computers** container. An entry for the current virtual machine, such as *myVM*, is listed. Computer accounts for all devices that are joined to the managed domain are stored in this *AADDC Computers* container.
-Common Active Directory Administrative Center actions such as resetting a user account password or managing group membership are available. These actions only work for users and groups created directly in the managed domain. Identity information only synchronizes *from* Microsoft Entra ID to Microsoft Entra DS. There's no write back from Microsoft Entra DS to Microsoft Entra ID. You can't change passwords or managed group membership for users synchronized from Microsoft Entra ID and have those changes synchronized back.
+Common Active Directory Administrative Center actions such as resetting a user account password or managing group membership are available. These actions only work for users and groups created directly in the managed domain. Identity information only synchronizes *from* Microsoft Entra ID to Domain Services. There's no write back from Domain Services to Microsoft Entra ID. You can't change passwords or manage group membership for users synchronized from Microsoft Entra ID and have those changes synchronized back.
You can also use the *Active Directory Module for Windows PowerShell*, installed as part of the administrative tools, to manage common actions in your managed domain.
active-directory-domain-services Tutorial Create Replica Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-replica-set.md
# Tutorial: Create and use replica sets for resiliency or geolocation in Microsoft Entra Domain Services
-To improve the resiliency of a Microsoft Entra Domain Services (Microsoft Entra DS) managed domain, or deploy to additional geographic locations close to your applications, you can use *replica sets*. Every Microsoft Entra DS managed domain namespace, such as *aaddscontoso.com*, contains one initial replica set. The ability to create additional replica sets in other Azure regions provides geographical resiliency for a managed domain.
+To improve the resiliency of a Microsoft Entra Domain Services managed domain, or deploy to additional geographic locations close to your applications, you can use *replica sets*. Every Domain Services managed domain namespace, such as *aaddscontoso.com*, contains one initial replica set. The ability to create additional replica sets in other Azure regions provides geographical resiliency for a managed domain.
-You can add a replica set to any peered virtual network in any Azure region that supports Microsoft Entra DS.
+You can add a replica set to any peered virtual network in any Azure region that supports Domain Services.
In this tutorial, you learn how to:
In this tutorial, you create and manage replica sets using the Microsoft Entra a
## Networking considerations
-The virtual networks that host replica sets must be able to communicate with each other. Applications and services that depend on Microsoft Entra DS also need network connectivity to the virtual networks hosting the replica sets. Azure virtual network peering should be configured between all virtual networks to create a fully meshed network. These peerings enable effective intra-site replication between replica sets.
+The virtual networks that host replica sets must be able to communicate with each other. Applications and services that depend on Domain Services also need network connectivity to the virtual networks hosting the replica sets. Azure virtual network peering should be configured between all virtual networks to create a fully meshed network. These peerings enable effective intra-site replication between replica sets.
-Before you can use replica sets in Microsoft Entra DS, review the following Azure virtual network requirements:
+Before you can use replica sets in Domain Services, review the following Azure virtual network requirements:
* Avoid overlapping IP address spaces to allow for virtual network peering and routing. * Create subnets with enough IP addresses to support your scenario.
-* Make sure Microsoft Entra DS has its own subnet. Don't share this virtual network subnet with application VMs and services.
+* Make sure Domain Services has its own subnet. Don't share this virtual network subnet with application VMs and services.
* Peered virtual networks are NOT transitive. > [!TIP]
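Because peerings aren't transitive, a fully meshed network means creating a peering pair (A-to-B and B-to-A) for every combination of virtual networks that must communicate. A sketch with the Azure CLI, assuming the hypothetical names *myResourceGroup*, *aadds-vnet*, and *vnet-eastus*:

```shell
# Peer the managed domain's virtual network with a second virtual
# network. Peerings are one-directional, so create both halves.
az network vnet peering create \
  --resource-group myResourceGroup \
  --name aadds-to-eastus \
  --vnet-name aadds-vnet \
  --remote-vnet vnet-eastus \
  --allow-vnet-access

az network vnet peering create \
  --resource-group myResourceGroup \
  --name eastus-to-aadds \
  --vnet-name vnet-eastus \
  --remote-vnet aadds-vnet \
  --allow-vnet-access
```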
Before you can use replica sets in Microsoft Entra DS, review the following Azur
## Create a replica set
-When you create a managed domain, such as *aaddscontoso.com*, an initial replica set is created. Additional replica sets share the same namespace and configuration. Changes to Microsoft Entra DS, including configuration, user identity and credentials, groups, group policy objects, computer objects, and other changes are applied to all replica sets in the managed domain using AD DS replication.
+When you create a managed domain, such as *aaddscontoso.com*, an initial replica set is created. Additional replica sets share the same namespace and configuration. Changes to Domain Services, including configuration, user identity and credentials, groups, group policy objects, computer objects, and other changes are applied to all replica sets in the managed domain using AD DS replication.
+In this tutorial, you create an additional replica set in a different Azure region from the initial Domain Services replica set.
+In this tutorial, you create an additional replica set in an Azure region different than the initial Domain Services replica set.
To create an additional replica set, complete the following steps:
To create an additional replica set, complete the following steps:
1. In the *Add a replica set* window, select the destination region, such as *East US*.
- Select a virtual network in the destination region, such as *vnet-eastus*, then choose a subnet such as *aadds-subnet*. If needed, choose **Create new** to add a virtual network in the destination region, then **Manage** to create a subnet for Microsoft Entra DS.
+ Select a virtual network in the destination region, such as *vnet-eastus*, then choose a subnet such as *aadds-subnet*. If needed, choose **Create new** to add a virtual network in the destination region, then **Manage** to create a subnet for Domain Services.
If they don't already exist, the Azure virtual network peerings are automatically created between your existing managed domain's virtual network and the destination virtual network.
To delete a replica set, complete the following steps:
1. Choose your managed domain, such as *aaddscontoso.com*.
1. On the left-hand side, select **Replica sets**. From the list of replica sets, select the **...** context menu next to the replica set you want to delete.
1. Select **Delete** from the context menu, then confirm you want to delete the replica set.
-1. In the Microsoft Entra DS management VM, access the DNS console and manually delete DNS records for the domain controllers from the deleted replica set.
+1. In the Domain Services management VM, access the DNS console and manually delete DNS records for the domain controllers from the deleted replica set.
> [!NOTE]
> Replica set deletion may be a time-consuming operation.
In this tutorial, you learned how to:
> * Create a replica set in a different geographic region
> * Delete a replica set
-For more conceptual information, learn how replica sets work in Microsoft Entra DS.
+For more conceptual information, learn how replica sets work in Domain Services.
> [!div class="nextstepaction"]
> [Replica sets concepts and features][concepts-replica-sets]
active-directory-domain-services Tutorial Perform Disaster Recovery Drill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-perform-disaster-recovery-drill.md
Previously updated : 06/16/2022 Last updated : 09/21/2023 #Customer intent: As an identity administrator, I want to perform a disaster recovery drill by using replica sets in Microsoft Entra Domain Services to demonstrate resiliency for geographically distributed domain data.
# Tutorial: Perform a disaster recovery drill using replica sets in Microsoft Entra Domain Services
-This topic shows how to perform a disaster recovery (DR) drill for Microsoft Entra Domain Services (Microsoft Entra DS) using replica sets. This will simulate one of the replica sets going offline by making changes to the network virtual network properties to block client access to it. It is not a true DR drill in that the replica set will not be taken offline.
+This topic shows how to perform a disaster recovery (DR) drill for Microsoft Entra Domain Services using replica sets. This exercise simulates one of the replica sets going offline by making changes to the virtual network properties to block client access to it. It's not a true DR drill in that the replica set isn't taken offline.
-The DR drill will cover:
+The DR drill covers:
1. A client machine is connected to a given replica set. It can authenticate to the domain and perform LDAP queries.
-1. The client's connection to the replica set will be terminated. This will happen by restricting network access.
-1. The client will then establish a new connection with the other replica set. Once that happens, the client will be able to authenticate to the domain and perform LDAP queries.
-1. The domain member will be rebooted, and a domain user will be able to log in post reboot.
-1. The network restrictions will be removed, and the client will be able to connect to original replica set.
+1. The client's connection to the replica set is terminated by restricting network access.
+1. The client then establishes a new connection with the other replica set. Once that happens, the client can authenticate to the domain and perform LDAP queries.
+1. The domain member is rebooted, and a domain user can sign in after reboot.
+1. The network restrictions are removed, and the client can connect to the original replica set.
## Prerequisites

The following requirements must be in place to complete the DR drill:

-- An active Microsoft Entra DS instance deployed with at least one extra replica set in place. The domain must be in a healthy state.
-- A client machine that is joined to the Microsoft Entra DS hosted domain. The client must be in its own virtual network, virtual network peering enabled with both replica set virtual networks, and the virtual network must have the IP addresses of all domain controllers in the replica sets listed in DNS.
+- An active Domain Services instance deployed with at least one extra replica set in place. The domain must be in a healthy state.
+- A client machine that's joined to the Domain Services hosted domain. The client must be in its own virtual network, virtual network peering enabled with both replica set virtual networks, and the virtual network must have the IP addresses of all domain controllers in the replica sets listed in DNS.
## Environment validation
The following requirements must be in place to complete the DR drill:
## Perform the disaster recovery drill
-You will be performing these operations for each replica set in the Microsoft Entra DS instance. This will simulate an outage for each replica set. When domain controllers are not reachable, the client will automatically fail over to a reachable domain controller and this experience should be seamless to the end user or workload. Therefore it is critical that applications and services don't point to a specific domain controller.
+You need to perform these operations for each replica set in the Domain Services instance. The operations simulate an outage for each replica set. When domain controllers aren't reachable, the client automatically fails over to a reachable domain controller. This experience should be seamless to the end user or workload. Therefore, it's critical that applications and services don't point to a specific domain controller.
1. Identify the domain controllers in the replica set that you want to simulate going offline.
1. On the client machine, connect to one of the domain controllers using `nltest /sc_reset:[domain]\[domain controller name]`.
You will be performing these operations for each replica set in the Microsoft En
1. In the Azure portal, go to the client virtual network peering and update the properties so that all traffic is unblocked. This reverts the changes that were made in step 3.
1. On the client machine, attempt to reestablish a secure connection with the domain controllers from step 2 using the same nltest command. These operations should succeed as network connectivity has been unblocked.
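The failover behavior the drill validates can be sketched in code. The following is a minimal, illustrative Python model (not Azure's implementation) of a client that tries each domain controller registered in DNS instead of pinning to one specific DC; the IP addresses and the `is_reachable` probe are hypothetical stand-ins for a real connectivity check such as an LDAP bind:

```python
def first_reachable_dc(dcs, is_reachable):
    """Return the first reachable domain controller, or None.

    dcs: ordered list of DC addresses listed in DNS.
    is_reachable: callable probing one DC (e.g., an LDAP bind attempt).
    """
    for dc in dcs:
        if is_reachable(dc):
            return dc
    return None

# Simulate an outage of the first replica set: its DCs don't respond.
offline = {"10.0.1.4", "10.0.1.5"}          # replica set A (simulated outage)
dcs = ["10.0.1.4", "10.0.1.5", "10.1.1.4"]  # all DCs listed in DNS

# The client transparently fails over to the replica set that's still up.
assert first_reachable_dc(dcs, lambda dc: dc not in offline) == "10.1.1.4"
```

An application hard-coded to `10.0.1.4` would fail during the drill, which is why the guidance stresses never pointing workloads at a specific domain controller.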
-These operations demonstrate that the domain is still available even though one of the replica sets is unreachable by the client. Perform this set of steps for each replica set in the Microsoft Entra DS instance.
+These operations demonstrate that the domain is still available even though one of the replica sets is unreachable by the client. Perform this set of steps for each replica set in the Domain Services instance.
## Summary
-After you complete these steps, you will see domain members continue to access the directory if one of the replica sets in the Microsoft Entra DS is not reachable. You can simulate the same behavior by blocking all network access for a replica set instead of a client machine, but we don't recommend it. It won't change the behavior from a client perspective, but it will impact the health of your Microsoft Entra DS instance until the network access is restored.
+After you complete these steps, you see that domain members continue to access the directory if one of the replica sets in the Domain Services instance isn't reachable. You can simulate the same behavior by blocking all network access for a replica set instead of a client machine, but we don't recommend it. It won't change the behavior from a client perspective, but it impacts the health of your Domain Services instance until the network access is restored.
## Next steps
In this tutorial, you learned how to:
> * Block network traffic between the client and the replica set
> * Validate client connectivity to domain controllers in another replica set
-For more conceptual information, learn how replica sets work in Microsoft Entra DS.
+For more conceptual information, learn how replica sets work in Domain Services.
> [!div class="nextstepaction"]
> [Replica sets concepts and features][concepts-replica-sets]
active-directory-domain-services Use Azure Monitor Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/use-azure-monitor-workbooks.md
Previously updated : 06/16/2022 Last updated : 09/21/2023 # Review security audit events in Microsoft Entra Domain Services using Azure Monitor Workbooks
-To help you understand the state of your Microsoft Entra Domain Services (Microsoft Entra DS) managed domain, you can enable security audit events. These security audit events can then be reviewed using Azure Monitor Workbooks that combine text, analytics queries, and parameters into rich interactive reports. Microsoft Entra DS includes workbook templates for security overview and account activity that let you dig into audit events and manage your environment.
+To help you understand the state of your Microsoft Entra Domain Services managed domain, you can enable security audit events. These security audit events can then be reviewed using Azure Monitor Workbooks that combine text, analytics queries, and parameters into rich interactive reports. Domain Services includes workbook templates for security overview and account activity that let you dig into audit events and manage your environment.
-This article shows you how to use Azure Monitor Workbooks to review security audit events in Microsoft Entra DS.
+This article shows you how to use Azure Monitor Workbooks to review security audit events in Domain Services.
## Before you begin
To complete this article, you need the following resources and privileges:
* A Microsoft Entra Domain Services managed domain enabled and configured in your Microsoft Entra tenant.
* If needed, complete the tutorial to [create and configure a Microsoft Entra Domain Services managed domain][create-azure-ad-ds-instance].
* Security audit events enabled for your managed domain that stream data to a Log Analytics workspace.
- * If needed, [enable security audits for Microsoft Entra DS][enable-security-audits].
+ * If needed, [enable security audits for Domain Services][enable-security-audits].
## Azure Monitor Workbooks overview
-When security audit events are turned on in Microsoft Entra DS, it can be hard to analyze and identify issues in the managed domain. Azure Monitor lets you aggregate these security audit events and query the data. With Azure Monitor Workbooks, you can visualize this data to make it quicker and easier to identify issues.
+When security audit events are turned on in Domain Services, it can be hard to analyze and identify issues in the managed domain. Azure Monitor lets you aggregate these security audit events and query the data. With Azure Monitor Workbooks, you can visualize this data to make it quicker and easier to identify issues.
Workbook templates are curated reports that are designed for flexible reuse by multiple users and teams. When you open a workbook template, the data from your Azure Monitor environment is loaded. You can use templates without an impact on other users in your organization, and can save your own workbooks based on the template.
-Microsoft Entra DS includes the following two workbook templates:
+Domain Services includes the following two workbook templates:
* Security overview report
* Account activity report
As with the security overview report, you can drill down into the different tile
## Save and edit workbooks
-The two template workbooks provided by Microsoft Entra DS are a good place to start with your own data analysis. If you need to get more granular in the data queries and investigations, you can save your own workbooks and edit the queries.
+The two template workbooks provided by Domain Services are a good place to start with your own data analysis. If you need to get more granular in the data queries and investigations, you can save your own workbooks and edit the queries.
1. To save a copy of one of the workbook templates, select **Edit > Save as > Shared reports**, then provide a name and save it.
1. From your own copy of the template, select **Edit** to enter the edit mode. You can choose the blue **Edit** button next to any part of the report and change it.
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning.md
Some common motivations for using automatic provisioning include:
- Easily importing a large number of users into a particular SaaS application or system.
- A single set of policies to determine provisioned users that can sign in to an app.
-Microsoft Entra user provisioning can help address these challenges. To learn more about how customers have been using Microsoft Entra user provisioning, read the [ASOS case study](https://aka.ms/asoscasestudy). The following video provides an overview of user provisioning in Microsoft Entra ID.
+Microsoft Entra user provisioning can help address these challenges. To learn more about how customers have been using Microsoft Entra user provisioning, read the [ASOS case study](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/asos-better-protects-its-data-with-azure-ad-automated-user/ba-p/827846). The following video provides an overview of user provisioning in Microsoft Entra ID.
> [!VIDEO https://www.youtube.com/embed/_ZjARPpI6NI]
active-directory Application Proxy Configure Native Client Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-native-client-application.md
if (authResult != null)
    //Use the Access Token to access the Proxy Application
    HttpClient httpClient = new HttpClient();
- HttpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", authResult.AccessToken);
+ httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", authResult.AccessToken);
    HttpResponseMessage response = await httpClient.GetAsync("<Proxy App Url>");
}
```
active-directory How To Mfa Registration Campaign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md
Previously updated : 09/15/2023 Last updated : 09/21/2023
You can nudge users to set up Microsoft Authenticator during sign-in. Users go through their regular sign-in, perform multifactor authentication as usual, and then get prompted to set up Microsoft Authenticator. You can include or exclude users or groups to control who gets nudged to set up the app. This allows targeted campaigns to move users from less secure authentication methods to Authenticator.
-You can also define how many days a user can postpone, or "snooze," the nudge. If a user taps **Not now** to postpone the app setup, they get nudged again on the next MFA attempt after the snooze duration has elapsed. Users with free and trial subscriptions can postpone the app setup up to three times.
+You can also define how many days a user can postpone, or "snooze," the nudge. If a user taps **Skip for now** to postpone the app setup, they get nudged again on the next MFA attempt after the snooze duration has elapsed. You can decide whether the user can snooze indefinitely or up to three times (after which registration is required).
>[!NOTE]
>As users go through their regular sign-in, Conditional Access policies that govern security info registration apply before the user is prompted to set up Authenticator. For example, if a Conditional Access policy requires security info updates can only occur on an internal network, then users won't be prompted to set up Authenticator unless they are on the internal network.
You can also define how many days a user can postpone, or "snooze," the nudge. I
1. User sees prompt to set up the Authenticator app to improve their sign-in experience. Only users who are allowed for the Authenticator app push notifications and don't have it currently set up will see the prompt.
- ![User performs multifactor authentication](./media/how-to-nudge-authenticator-app/user-mfa.png)
+ ![Screenshot of multifactor authentication.](./media/how-to-mfa-registration-campaign/user-prompt.png)
1. User taps **Next** and steps through the Authenticator app setup.
1. First download the app.
- ![User downloads Microsoft Authenticator](media/how-to-mfa-registration-campaign/user-downloads-microsoft-authenticator.png)
+ ![Screenshot of download for Microsoft Authenticator.](media/how-to-mfa-registration-campaign/user-downloads-microsoft-authenticator.png)
1. See how to set up the Authenticator app.
- ![User sets up Microsoft Authenticator](./media/how-to-nudge-authenticator-app/setup.png)
+ ![Screenshot of Microsoft Authenticator.](./media/how-to-nudge-authenticator-app/setup.png)
1. Scan the QR Code.
- ![User scans QR Code](./media/how-to-nudge-authenticator-app/scan.png)
+ ![Screenshot of QR Code.](./media/how-to-nudge-authenticator-app/scan.png)
1. Approve the test notification.
- ![User approves the test notification](./media/how-to-nudge-authenticator-app/test.png)
+ ![Screenshot of test notification.](./media/how-to-nudge-authenticator-app/test.png)
1. Notification approved.
- ![Confirmation of approval](./media/how-to-nudge-authenticator-app/approved.png)
+ ![Screenshot of confirmation of approval.](./media/how-to-nudge-authenticator-app/approved.png)
1. Authenticator app is now successfully set up as the user's default sign-in method.
- ![Installation complete](./media/how-to-nudge-authenticator-app/finish.png)
+ ![Screenshot of installation complete.](./media/how-to-nudge-authenticator-app/finish.png)
-1. If a user wishes to not install the Authenticator app, they can tap **Not now** to snooze the prompt for up to 14 days, which can be set by an admin. Users with free and trial subscriptions can snooze the prompt up to three times.
-
- ![Snooze installation](./media/how-to-nudge-authenticator-app/snooze.png)
+1. If a user doesn't want to install the Authenticator app, they can tap **Skip for now** to snooze the prompt for up to 14 days, which can be set by an admin. Users with free and trial subscriptions can snooze the prompt up to three times.
+ ![Screenshot of snooze option.](media/how-to-mfa-registration-campaign/snooze.png)
-## Enable the registration campaign policy using the Microsoft Entra admin center
+## Enable the registration campaign policy using the Microsoft Entra admin center
To enable a registration campaign in the Microsoft Entra admin center, complete the following steps:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator).
-1. Browse to **Protection** > **Authentication methods** > **Registration campaign**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator) or [Global Administrator](../roles/permissions-reference.md#global-administrator).
+1. Browse to **Protection** > **Authentication methods** > **Registration campaign** and click **Edit**.
1. For **State**, click **Microsoft managed** or **Enabled**. In the following screenshot, the registration campaign is **Microsoft managed**. That setting allows Microsoft to set the default value to be either Enabled or Disabled. For the registration campaign, the Microsoft managed value is Enabled for voice call and text message users with free and trial subscriptions. For more information, see [Protecting authentication methods in Microsoft Entra ID](concept-authentication-default-enablement.md).
-
- ![Screenshot of enabling a registration campaign.](./media/how-to-nudge-authenticator-app/registration-campaign.png)
+
+ :::image type="content" border="true" source="media/how-to-mfa-registration-campaign/admin-experience.png" alt-text="Screenshot of enabling a registration campaign.":::
1. Select any users or groups to exclude from the registration campaign, and then click **Save**.

## Enable the registration campaign policy using Graph Explorer
-In addition to using the Microsoft Entra admin center, you can also enable the registration campaign policy using Graph Explorer. To enable the registration campaign policy, you must use the Authentication Methods Policy using Graph APIs. **Global Administrators** and **Authentication Method Policy Administrators** can update the policy.
+In addition to using the Microsoft Entra admin center, you can also enable the registration campaign policy using Graph Explorer. To enable the registration campaign policy, you must use the Authentication Methods Policy using Graph APIs. **Global Administrators** and **Authentication Policy Administrators** can update the policy.
To configure the policy using Graph Explorer:
To configure the policy using Graph Explorer:
To open the Permissions panel:
- ![Screenshot of Graph Explorer](./media/how-to-nudge-authenticator-app/permissions.png)
+ ![Screenshot of Graph Explorer.](./media/how-to-nudge-authenticator-app/permissions.png)
1. Retrieve the Authentication methods policy: `GET https://graph.microsoft.com/beta/policies/authenticationmethodspolicy`
1. Update the registrationEnforcement and authenticationMethodsRegistrationCampaign section of the policy to enable the nudge on a user or group.
- ![Campaign section](./media/how-to-nudge-authenticator-app/campaign.png)
+ ![Screenshot of the API response.](media/how-to-mfa-registration-campaign/response.png)
To update the policy, perform a PATCH on the Authentication Methods Policy with only the updated registrationEnforcement section: `PATCH https://graph.microsoft.com/beta/policies/authenticationmethodspolicy`

The following table lists **authenticationMethodsRegistrationCampaign** properties.
-| Name | Possible values | Description |
+|Name|Possible values|Description|
||--|-|
-| state | "enabled"<br>"disabled"<br>"default" | Allows you to enable or disable the feature.<br>Default value is used when the configuration hasn't been explicitly set and will use Microsoft Entra ID default value for this setting. Currently maps to disabled.<br>Change states to either enabled or disabled as needed. |
| snoozeDurationInDays | Range: 0 – 14 | Defines the number of days before the user is nudged again.<br>If the value is 0, the user is nudged during every MFA attempt.<br>Default: 1 day |
-| includeTargets | N/A | Allows you to include different users and groups that you want the feature to target. |
-| excludeTargets | N/A | Allows you to exclude different users and groups that you want omitted from the feature. If a user is in a group that is excluded and a group that is included, the user will be excluded from the feature.|
+|snoozeDurationInDays|Range: 0 - 14|Defines the number of days before the user is nudged again.<br>If the value is 0, the user is nudged during every MFA attempt.<br>Default: 1 day|
|enforceRegistrationAfterAllowedSnoozes|"true"<br>"false"|Dictates whether a user is required to perform setup after 3 snoozes.<br>If true, user is required to register.<br>If false, user can snooze indefinitely.<br>Default: true<br>This property takes effect only after the Microsoft managed value for the registration campaign changes to Enabled for text message and voice call for your organization.|
+|state|"enabled"<br>"disabled"<br>"default"|Allows you to enable or disable the feature.<br>Default value is used when the configuration hasn't been explicitly set and will use Microsoft Entra ID default value for this setting. Currently maps to disabled.<br>Change states to either enabled or disabled as needed.|
+|excludeTargets|N/A|Allows you to exclude different users and groups that you want omitted from the feature. If a user is in a group that is excluded and a group that is included, the user will be excluded from the feature.|
+|includeTargets|N/A|Allows you to include different users and groups that you want the feature to target.|
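As an illustration of how the table's properties fit together, the following Python sketch builds the registrationEnforcement body for the PATCH request. This is a hedged example, not official tooling: the helper name and the exact target shapes (such as `targetedAuthenticationMethod`) are assumptions you should verify against the current Graph API schema before use:

```python
import json

def registration_campaign_body(state="enabled", snooze_days=1,
                               enforce_after_snoozes=True,
                               include_group_ids=(), exclude_group_ids=()):
    """Build the registrationEnforcement PATCH body described above."""
    # snoozeDurationInDays must fall in the documented 0-14 range.
    if not 0 <= snooze_days <= 14:
        raise ValueError("snoozeDurationInDays must be between 0 and 14")
    campaign = {
        "state": state,
        "snoozeDurationInDays": snooze_days,
        "enforceRegistrationAfterAllowedSnoozes": enforce_after_snoozes,
        "includeTargets": [
            {"id": gid, "targetType": "group",
             "targetedAuthenticationMethod": "microsoftAuthenticator"}
            for gid in include_group_ids
        ],
        "excludeTargets": [
            {"id": gid, "targetType": "group"} for gid in exclude_group_ids
        ],
    }
    return {"registrationEnforcement":
            {"authenticationMethodsRegistrationCampaign": campaign}}

# "<group-object-id>" is a placeholder for a real group's object ID.
body = registration_campaign_body(snooze_days=3,
                                  include_group_ids=["<group-object-id>"])
print(json.dumps(body, indent=2))
```

You would send the resulting JSON as the body of the `PATCH https://graph.microsoft.com/beta/policies/authenticationmethodspolicy` request shown earlier.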
The following table lists **includeTargets** properties.
No. The feature, for now, aims to nudge users to set up the Authenticator app on
**Is there a way for me to hide the snooze option and force my users to setup the Authenticator app?**
-Users in organizations with free and trial subscriptions can postpone the app setup up to three times. There is no way to hide the snooze option on the nudge for organizations with paid subscriptions yet. You can set the snoozeDuration to 0, which ensures that users see the nudge during each MFA attempt.
+Set **Limited number of snoozes** to **Enabled** so that users can postpone the app setup up to three times, after which setup is required.
**Will I be able to nudge my users if I am not using Microsoft Entra multifactor authentication?**
Yes. If they have been scoped for the nudge using the policy.
**What if the user closes the browser?**
-It's the same as snoozing.
+It's the same as snoozing. If setup is required for a user after they snoozed three times, the user will get prompted the next time they sign in.
**Why don't some users see a nudge when there is a Conditional Access policy for "Register security information"?**
active-directory Howto Password Smart Lockout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-smart-lockout.md
Previously updated : 09/23/2023 Last updated : 09/21/2023
Smart lockout helps lock out bad actors that try to guess your users' passwords
## How smart lockout works
-By default, smart lockout locks the account from sign-in attempts for one minute after 10 failed attempts for Azure Public and Microsoft Azure operated by 21Vianet tenants and 3 for Azure US Government tenants. The account locks again after each subsequent failed sign-in attempt, for one minute at first and longer in subsequent attempts. To minimize the ways an attacker could work around this behavior, we don't disclose the rate at which the lockout period grows over additional unsuccessful sign-in attempts.
+By default, smart lockout locks an account from sign-in after:
-Smart lockout tracks the last three bad password hashes to avoid incrementing the lockout counter for the same password. If someone enters the same bad password multiple times, this behavior won't cause the account to lock out.
+- 10 failed attempts in Azure Public and Microsoft Azure operated by 21Vianet tenants
+- 3 failed attempts for Azure US Government tenants
+
+The account locks again after each subsequent failed sign-in attempt. The lockout period is one minute at first, and longer in subsequent attempts. To minimize the ways an attacker could work around this behavior, we don't disclose the rate at which the lockout period increases after unsuccessful sign-in attempts.
+
+Smart lockout tracks the last three bad password hashes to avoid incrementing the lockout counter for the same password. If someone enters the same bad password multiple times, this behavior doesn't cause the account to lock out.
> [!NOTE]
> Hash tracking functionality isn't available for customers with pass-through authentication enabled as authentication happens on-premises not in the cloud.
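The hash-tracking behavior described above can be modeled in a few lines. This is purely an illustrative sketch of the idea (repeats of a recently seen bad password don't advance the lockout counter), not Microsoft's actual implementation; the class name, hash choice, and thresholds are all assumptions:

```python
from collections import deque
import hashlib

class LockoutCounter:
    """Illustrative model: repeating a recently seen bad password
    doesn't increment the failed-attempt counter."""

    def __init__(self, threshold=10, remembered=3):
        self.threshold = threshold
        self.failed = 0
        self.recent = deque(maxlen=remembered)  # last N bad password hashes

    def record_failure(self, password):
        """Record a failed attempt; return True if the account locks."""
        h = hashlib.sha256(password.encode()).hexdigest()
        if h not in self.recent:       # only new bad passwords count
            self.failed += 1
            self.recent.append(h)
        return self.failed >= self.threshold

c = LockoutCounter(threshold=3)
c.record_failure("Winter2023!")
c.record_failure("Winter2023!")   # same bad password: counter stays at 1
assert c.failed == 1
```

A user who keeps retyping one stale password therefore doesn't lock themselves out, while an attacker cycling through distinct guesses hits the threshold quickly.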
-Federated deployments that use AD FS 2016 and AD FS 2019 can enable similar benefits using [AD FS Extranet Lockout and Extranet Smart Lockout](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection). It is recommended to move to [managed authentication](https://www.microsoft.com/security/business/identity-access/upgrade-adfs).
+Federated deployments that use Active Directory Federation Services (AD FS) 2016 and AD FS 2019 can enable similar benefits by using [AD FS Extranet Lockout and Extranet Smart Lockout](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection). It's recommended to move to [managed authentication](https://www.microsoft.com/security/business/identity-access/upgrade-adfs).
Smart lockout is always on, for all Microsoft Entra customers, with these default settings that offer the right mix of security and usability. Customization of the smart lockout settings, with values specific to your organization, requires Microsoft Entra ID P1 or higher licenses for your users.

Using smart lockout doesn't guarantee that a genuine user is never locked out. When smart lockout locks a user account, we try our best to not lock out the genuine user. The lockout service attempts to ensure that bad actors can't gain access to a genuine user account. The following considerations apply:
-* Lockout state across Microsoft Entra data centers is synchronized. However, the total number of failed sign-in attempts allowed before an account is locked out will have slight variance from the configured lockout threshold. Once an account is locked out, it will be locked out everywhere across all Microsoft Entra data centers.
+* Lockout state across Microsoft Entra data centers is synchronized. However, the total number of failed sign-in attempts allowed before an account is locked out can vary slightly from the configured lockout threshold. Once an account is locked out, it's locked out everywhere across all Microsoft Entra data centers.
* Smart Lockout uses familiar location vs unfamiliar location to differentiate between a bad actor and the genuine user. Both unfamiliar and familiar locations have separate lockout counters.
+* After an account lockout, the user can initiate self-service password reset (SSPR) to sign in again. If the user chooses **I forgot my password** during SSPR, the duration of the lockout is reset to 0 seconds. If the user chooses **I know my password** during SSPR, the lockout timer continues, and the duration of the lockout isn't reset. To reset the duration and sign in again, the user needs to change their password.
Smart lockout can be integrated with hybrid deployments that use password hash sync or pass-through authentication to protect on-premises Active Directory Domain Services (AD DS) accounts from being locked out by attackers. By setting smart lockout policies in Microsoft Entra ID appropriately, attacks can be filtered out before they reach on-premises AD DS. When using [pass-through authentication](../hybrid/connect/how-to-connect-pta.md), the following considerations apply: * The Microsoft Entra lockout threshold is **less** than the AD DS account lockout threshold. Set the values so that the AD DS account lockout threshold is at least two or three times greater than the Microsoft Entra lockout threshold.
-* The Microsoft Entra lockout duration must be set longer than the AD DS account lockout duration. The Microsoft Entra duration is set in seconds, while the AD duration is set in minutes.
+* The Microsoft Entra lockout duration must be set longer than the AD DS account lockout duration. The Microsoft Entra duration is set in seconds, while the AD DS duration is set in minutes.
-For example, if you want your Microsoft Entra smart lockout duration to be higher than AD DS, then Microsoft Entra ID would be 120 seconds (2 minutes) while your on-premises AD is set to 1 minute (60 seconds). If you want your Microsoft Entra lockout threshold to be 5, then you want your on-premises AD lockout threshold to be 10. This configuration would ensure smart lockout prevents your on-premises AD accounts from being locked out by brute force attacks on your Microsoft Entra accounts.
+For example, if you want your Microsoft Entra smart lockout duration to be higher than AD DS, then Microsoft Entra ID would be 120 seconds (2 minutes) while your on-premises AD is set to 1 minute (60 seconds). If you want your Microsoft Entra lockout threshold to be 5, then you want your on-premises AD DS lockout threshold to be 10. This configuration would ensure smart lockout prevents your on-premises AD DS accounts from being locked out by brute force attacks on your Microsoft Entra accounts.
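The worked example above boils down to two checks, which can be encoded as a quick sanity test when planning hybrid settings. This is an illustrative sketch of the recommended relationships only (note the unit mismatch: Entra duration is in seconds, AD DS duration in minutes); the function name is hypothetical:

```python
def hybrid_lockout_ok(entra_threshold, entra_duration_s,
                      ad_threshold, ad_duration_min):
    """Check the recommended relationship between Microsoft Entra smart
    lockout and on-premises AD DS lockout settings.

    - AD DS threshold should be at least 2x the Entra threshold.
    - Entra duration (seconds) should exceed AD DS duration (minutes).
    """
    return (ad_threshold >= 2 * entra_threshold and
            entra_duration_s > ad_duration_min * 60)

# The example from the text: Entra 120 s / threshold 5 vs AD DS 1 min / threshold 10.
assert hybrid_lockout_ok(entra_threshold=5, entra_duration_s=120,
                         ad_threshold=10, ad_duration_min=1)
```

With these values, repeated bad guesses trip Entra smart lockout first, so the brute-force traffic never drives the on-premises AD DS account into a lockout.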
> [!IMPORTANT]
> An administrator can unlock a user's cloud account if it has been locked out by the Smart Lockout capability, without needing to wait for the lockout duration to expire. For more information, see [Reset a user's password using Azure Active Directory](../fundamentals/users-reset-password-azure-portal.md).
To verify your on-premises AD DS account lockout policy, complete the following
## Manage Microsoft Entra smart lockout values
-Based on your organizational requirements, you can customize the Microsoft Entra smart lockout values. Customization of the smart lockout settings, with values specific to your organization, requires Microsoft Entra ID P1 or higher licenses for your users. Customization of the smart lockout settings is not available for Microsoft Azure operated by 21Vianet tenants.
+Based on your organizational requirements, you can customize the Microsoft Entra smart lockout values. Customization of the smart lockout settings, with values specific to your organization, requires Microsoft Entra ID P1 or higher licenses for your users. Customization of the smart lockout settings isn't available for Microsoft Azure operated by 21Vianet tenants.
To check or modify the smart lockout values for your organization, complete the following steps:
To check or modify the smart lockout values for your organization, complete the
## Testing Smart lockout
-When the smart lockout threshold is triggered, you will get the following message while the account is locked:
+When the smart lockout threshold is triggered, you'll get the following message while the account is locked:
*Your account is temporarily locked to prevent unauthorized use. Try again later, and if you still have trouble, contact your admin.* When you test smart lockout, your sign-in requests might be handled by different datacenters due to the geo-distributed and load-balanced nature of the Microsoft Entra authentication service.
-Smart lockout tracks the last three bad password hashes to avoid incrementing the lockout counter for the same password. If someone enters the same bad password multiple times, this behavior won't cause the account to lock out.
+Smart lockout tracks the last three bad password hashes to avoid incrementing the lockout counter for the same password. If someone enters the same bad password multiple times, this behavior doesn't cause the account to lock out.
## Default protections
-In addition to Smart lockout, Microsoft Entra ID also protects against attacks by analyzing signals including IP traffic and identifying anomalous behavior. Microsoft Entra ID will block these malicious sign-ins by default and return [AADSTS50053 - IdsLocked error code](../develop/reference-error-codes.md), regardless of the password validity.
+In addition to Smart lockout, Microsoft Entra ID also protects against attacks by analyzing signals including IP traffic and identifying anomalous behavior. Microsoft Entra ID blocks these malicious sign-ins by default and returns [AADSTS50053 - IdsLocked error code](../develop/reference-error-codes.md), regardless of the password validity.
## Next steps
active-directory Custom Extension Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-get-started.md
The following JSON snippet demonstrates how to configure these properties.
## Step 4: Assign a custom claims provider to your app
-For tokens to be issued with claims incoming from the custom authentication extension, you must assign a custom claims provider to your application. This is based on the token audience, so the provider must be assgined to the client application to receive claims in an ID token, and to the resource application to receive claims in an access token. The custom claims provider relies on the custom authentication extension configured with the **token issuance start** event listener. You can choose whether all, or a subset of claims, from the custom claims provider are mapped into the token.
+For tokens to be issued with claims incoming from the custom authentication extension, you must assign a custom claims provider to your application. This is based on the token audience, so the provider must be assigned to the client application to receive claims in an ID token, and to the resource application to receive claims in an access token. The custom claims provider relies on the custom authentication extension configured with the **token issuance start** event listener. You can choose whether all, or a subset of claims, from the custom claims provider are mapped into the token.
Follow these steps to connect the *My Test application* with your custom authentication extension:
active-directory Msal Android Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-shared-devices.md
The following differences apply depending on whether your app is running on a sh
| | | | | **Accounts** | Single account | Multiple accounts | | **Sign-in** | Global | Global |
-| **Sign-out** | Global | Each application can control if the sign-out is local to the app or for the family of applications. |
+| **Sign-out** | Global | Each application can control if the sign-out is local to the app. |
| **Supported account types** | Work accounts only | Personal and work accounts supported |

## Why you may want to only support single-account mode
active-directory Quickstart Web App Aspnet Core Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-web-app-aspnet-core-sign-in.md
In this article you register a web application in the Microsoft Entra admin cent
## Register the application in the Microsoft Entra admin center -
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/quickstart-configure-app-access-web-apis/portal-01-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant containing your client app's registration.
-1. Browse to **Identity** > **Applications** > **App registrations** and select **New registration**.
-1. For **Name**, enter a name for the application. For example, enter **AspNetCore-Quickstart**. Users of the app will see this name, and can be changed later.
-1. Set the **Redirect URI** type to **Web** and value to `https://localhost:44321/signin-oidc`.
-1. Select **Register**.
-1. Under **Manage**, select **Authentication**.
-1. For **Front-channel logout URL**, enter **https://localhost:44321/signout-oidc**.
-1. Under **Implicit grant and hybrid flows**, select **ID tokens**.
-1. Select **Save**.
-1. Under **Manage**, select **Certificates & secrets** > **Client secrets** > **New client secret**.
-1. Enter a **Description**, for example `clientsecret1`.
-1. Select **In 1 year** for the secret's expiration.
-1. Select **Add** and immediately record the secret's **Value** for use in a later step. The secret value is *never displayed again* and is irretrievable by any other means. Record it in a secure location as you would any password.
-
-### Download the ASP.NET Core project
-
-[Download the ASP.NET Core solution](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/aspnetcore3-1-callsgraph.zip)
-
-### Configure your ASP.NET Core project
-
-1. Extract the *.zip* file to a local folder that's close to the root of the disk to avoid errors caused by path length limitations on Windows. For example, extract to *C:\Azure-Samples*.
-1. Open the solution in the chosen code editor.
-1. In *appsettings.json*, replace the values of `ClientId`, and `TenantId`. The value for the application (client) ID and the directory (tenant) ID, can be found in the app's **Overview** page on the Microsoft Entra admin center.
-
- ```json
- "Domain": "[Enter the domain of your tenant, e.g. contoso.onmicrosoft.com]",
- "ClientId": "Enter_the_Application_Id_here",
- "TenantId": "common",
- ```
-
- - `Enter_the_Application_Id_Here` is the application (client) ID for the registered application.
- - Replace `Enter_the_Tenant_Info_Here` with one of the following:
- - If the application supports **Accounts in this organizational directory only**, replace this value with the directory (tenant) ID (a GUID) or tenant name (for example, `contoso.onmicrosoft.com`). The directory (tenant) ID can be found on the app's **Overview** page.
- - If the application supports **Accounts in any organizational directory**, replace this value with `organizations`.
- - If the application supports **All Microsoft account users**, leave this value as `common`.
- - Replace `Enter_the_Client_Secret_Here` with the **Client secret** that was created and recorded in an earlier step.
-
-For this quickstart, don't change any other values in the *appsettings.json* file.
-
-### Build and run the application
-
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
1. Browse to **Identity** > **Applications** > **App registrations**.
1. On the page that appears, select **+ New registration**.
To obtain the sample application, you can either clone it from GitHub or downloa
dotnet dev-certs https -ep ./certificate.crt --trust ``` -
- | *appsettings.json* key | Description |
- ||-|
- | `ClientId` | Application (client) ID of the application registered in the Microsoft Entra admin center. |
- | `Instance` | Security token service (STS) endpoint for the user to authenticate. This value is typically `https://login.microsoftonline.com/`, indicating the Azure public cloud. |
- | `TenantId` | Name of your tenant or the tenant ID (a GUID), or `common` to sign in users with work or school accounts or Microsoft personal accounts. |
1. Return to the Microsoft Entra admin center, and under **Manage**, select **Certificates & secrets** > **Upload certificate**.
1. Select the **Certificates (0)** tab, then select **Upload certificate**.
1. An **Upload certificate** pane appears. Use the icon to navigate to the certificate file you created in the previous step, and select **Open**.
1. Enter a description for the certificate, for example *Certificate for aspnet-web-app*, and select **Add**.
1. Record the **Thumbprint** value for use in the next step.
-
## Configure the project

1. In your IDE, open the project folder, *ms-identity-docs-code-dotnet\web-app-aspnet*, containing the sample.
active-directory Groups Naming Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-naming-policy.md
To enforce consistent naming conventions for Microsoft 365 groups created or edi
> [!IMPORTANT] > Using Microsoft Entra ID naming policy for Microsoft 365 groups requires that you possess but not necessarily assign a Microsoft Entra ID P1 license or Microsoft Entra Basic EDU license for each unique user that is a member of one or more Microsoft 365 groups.
-The naming policy is applied to creating or editing groups created across workloads (for example, Outlook, Microsoft Teams, SharePoint, Exchange, or Planner), even if no editing changes are made. It is applied to both the group name and group alias. If you set up your naming policy in Microsoft Entra ID and you have an existing Exchange group naming policy, the Microsoft Entra ID naming policy is enforced in your organization.
+The naming policy is applied to creating or editing groups created across workloads (for example, Outlook, Microsoft Teams, SharePoint, Exchange, or Planner), even if no editing changes are made. It's applied to both the group name and group alias. If you set up your naming policy in Microsoft Entra ID and you have an existing Exchange group naming policy, the Microsoft Entra ID naming policy is enforced in your organization.
-When group naming policy is configured, the policy will be applied to new Microsoft 365 groups created by end users. Naming policy does not apply to certain directory roles, such as Global Administrator or User Administrator (please see below for the complete list of roles exempted from group naming policy). For existing Microsoft 365 groups, the policy will not immediately apply at the time of configuration. Once group owner edits the group name for these groups, naming policy will be enforced, even if no changes are made.
+When group naming policy is configured, the policy will be applied to new Microsoft 365 groups created by end users. Naming policy doesn't apply to certain directory roles, such as Global Administrator or User Administrator (see below for the complete list of roles exempted from group naming policy). For existing Microsoft 365 groups, the policy won't immediately apply at the time of configuration. Once a group owner edits the group name for these groups, naming policy will be enforced, even if no changes are made.
## Naming policy features
You can enforce naming policy for groups in two different ways:
The general structure of the naming convention is 'Prefix[GroupName]Suffix'. While you can define multiple prefixes and suffixes, you can only have one instance of the [GroupName] in the setting. The prefixes or suffixes can be either fixed strings or user attributes such as \[Department\] that are substituted based on the user who is creating the group. The total allowable number of characters for your prefix and suffix strings including group name is 63 characters.
-Prefixes and suffixes can contain special characters that are supported in group name and group alias. Any characters in the prefix or suffix that are not supported in the group alias are still applied in the group name, but removed from the group alias. Because of this restriction, the prefixes and suffixes applied to the group name might be different from the ones applied to the group alias.
+Prefixes and suffixes can contain special characters that are supported in group name and group alias. Any characters in the prefix or suffix that aren't supported in the group alias are still applied in the group name, but removed from the group alias. Because of this restriction, the prefixes and suffixes applied to the group name might be different from the ones applied to the group alias.
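As a rough sketch of the behavior described above (not Microsoft's implementation), the following composes 'Prefix[GroupName]Suffix', enforces the 63-character total, and strips characters from the alias while keeping them in the group name. The set of alias-supported characters used here is an assumption for illustration:

```python
import re

def apply_policy(group_name, prefixes, suffixes, max_len=63):
    """Illustrative sketch of Prefix[GroupName]Suffix composition."""
    display_name = "".join(prefixes) + group_name + "".join(suffixes)
    if len(display_name) > max_len:
        raise ValueError(f"name is {len(display_name)} chars; limit is {max_len}")
    # Characters not supported in the alias stay in the display name but are
    # removed from the alias (the allowed character set here is an assumption).
    alias = re.sub(r"[^A-Za-z0-9._-]", "", display_name)
    return display_name, alias

name, alias = apply_policy("Sales & Marketing", ["GRP_"], ["_Contoso"])
print(name)   # GRP_Sales & Marketing_Contoso
print(alias)  # GRP_SalesMarketing_Contoso
```

Note how the display name and alias can diverge once unsupported characters are stripped, matching the restriction described above.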
#### Fixed strings
A blocked word list is a comma-separated list of phrases to be blocked in group
Blocked word list rules: -- Blocked words are not case sensitive.
+- Blocked words aren't case sensitive.
- When a user enters a blocked word as part of a group name, they see an error message with the blocked word.
- There are no character restrictions on blocked words.
-- There is an upper limit of 5000 phrases that can be configured in the blocked words list.
+- There's an upper limit of 5000 phrases that can be configured in the blocked words list.
### Roles and permissions
Be sure to uninstall any older version of the Azure Active Directory PowerShell
Install-Module AzureADPreview ```
- If you are prompted about accessing an untrusted repository, enter **Y**. It might take few minutes for the new module to install.
+ If you're prompted about accessing an untrusted repository, enter **Y**. It might take a few minutes for the new module to install.
## Configure naming policy in PowerShell
That's it. You've set your naming policy and added your blocked words.
For more information, see the article [Microsoft Entra cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md).
-Here is an example of a PowerShell script to export multiple blocked words:
+Here's an example of a PowerShell script to export multiple blocked words:
``` PowerShell
$Words = (Get-AzureADDirectorySetting).Values | Where-Object -Property Name -Value CustomBlockedWordsList -EQ
Add-Content "c:\work\currentblockedwordslist.txt" -Value $words.value.Split(",").Replace("`"","")
```
-Here is an example PowerShell script to import multiple blocked words:
+Here's an example PowerShell script to import multiple blocked words:
``` PowerShell $BadWords = Get-Content "C:\work\currentblockedwordslist.txt"
Microsoft Teams | Microsoft Teams shows the group naming policy enforced name wh
SharePoint | SharePoint shows the naming policy enforced name when the user types a site name or group email address. When a user enters a custom blocked word, an error message is shown, along with the blocked word so that the user can remove it. Microsoft Stream | Microsoft Stream shows the group naming policy enforced name when the user types a group name or group email alias. When a user enters a custom blocked word, an error message is shown with the blocked word so the user can remove it. Outlook iOS and Android App | Groups created in Outlook apps are compliant with the configured naming policy. Outlook mobile app doesn't yet show the preview of the naming policy enforced name, and doesn't return custom blocked word errors when the user enters the group name. However, the naming policy is automatically applied on clicking create/edit and users see error messages if there are custom blocked words in the group name or alias.
-Groups mobile app | Groups created in the Groups mobile app are compliant with the naming policy. Groups mobile app does not show the preview of the naming policy and does not return custom blocked word errors when the user enters the group name. But the naming policy is automatically applied when creating or editing a group and users is presented with appropriate errors if there are custom blocked words in the group name or alias.
+Groups mobile app | Groups created in the Groups mobile app are compliant with the naming policy. Groups mobile app doesn't show the preview of the naming policy and doesn't return custom blocked word errors when the user enters the group name. But the naming policy is automatically applied when creating or editing a group and users are presented with appropriate errors if there are custom blocked words in the group name or alias.
Planner | Planner is compliant with the naming policy. Planner shows the naming policy preview when entering the plan name. When a user enters a custom blocked word, an error message is shown when creating the plan.
+Project for the web | Project for the web is compliant with the naming policy.
Dynamics 365 for Customer Engagement | Dynamics 365 for Customer Engagement is compliant with the naming policy. Dynamics 365 shows the naming policy enforced name when the user types a group name or group email alias. When the user enters a custom blocked word, an error message is shown with the blocked word so the user can remove it. School Data Sync (SDS) | Groups created through SDS comply with naming policy, but the naming policy isn't applied automatically. SDS administrators have to append the prefixes and suffixes to class names for which groups need to be created and then uploaded to SDS. Group create or edit would fail otherwise. Classroom app | Groups created in Classroom app comply with the naming policy, but the naming policy isn't applied automatically, and the naming policy preview isn't shown to the users while entering a classroom group name. Users must enter the enforced classroom group name with prefixes and suffixes. If not, the classroom group create or edit operation fails with errors. Power BI | Power BI workspaces are compliant with the naming policy.
-Yammer | When a user signed in to Yammer with their Microsoft Entra account creates a group or edits a group name, the group name will comply with naming policy. This applies both to Microsoft 365 connected groups and all other Yammer groups.<br>If a Microsoft 365 connected group was created before the naming policy is in place, the group name will not automatically follow the naming policies. When a user edits the group name, they will be prompted to add the prefix and suffix.
-StaffHub | StaffHub teams do not follow the naming policy, but the underlying Microsoft 365 group does. StaffHub team name does not apply the prefixes and suffixes and does not check for custom blocked words. But StaffHub does apply the prefixes and suffixes and removes blocked words from the underlying Microsoft 365 group.
+Yammer | When a user signed in to Yammer with their Microsoft Entra account creates a group or edits a group name, the group name will comply with naming policy. This applies both to Microsoft 365 connected groups and all other Yammer groups.<br>If a Microsoft 365 connected group was created before the naming policy is in place, the group name won't automatically follow the naming policies. When a user edits the group name, they'll be prompted to add the prefix and suffix.
+StaffHub | StaffHub teams do not follow the naming policy, but the underlying Microsoft 365 group does. StaffHub team name doesn't apply the prefixes and suffixes and doesn't check for custom blocked words. But StaffHub does apply the prefixes and suffixes and removes blocked words from the underlying Microsoft 365 group.
Exchange PowerShell | Exchange PowerShell cmdlets are compliant with the naming policy. Users receive appropriate error messages with suggested prefixes and suffixes and for custom blocked words if they don't follow the naming policy in the group name and group alias (mailNickname). Azure Active Directory PowerShell cmdlets | Azure Active Directory PowerShell cmdlets are compliant with naming policy. Users receive appropriate error messages with suggested prefixes and suffixes and for custom blocked words if they don't follow the naming convention in group names and group alias. Exchange admin center | Exchange admin center is compliant with naming policy. Users receive appropriate error messages with suggested prefixes and suffixes and for custom blocked words if they don't follow the naming convention in the group name and group alias.
active-directory Configure Logic App Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/configure-logic-app-lifecycle-workflows.md
If the security token type is **Normal** for your custom task extension, you'd s
Policy name: AzureADLifecycleWorkflowsAuthPolicy
- Policy type: Microsoft Entra ID
+ Policy type: AAD
|Claim |Value | |||
If the security token type is **Normal** for your custom task extension, you'd s
Policy name: AzureADLifecycleWorkflowsAuthPolicyV2App
- Policy type: Microsoft Entra ID
+ Policy type: AAD
|Claim |Value | |||
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
Previously updated : 09/12/2023 Last updated : 09/18/2023

# How to synchronize attributes for Lifecycle workflows
-Workflows, contain specific tasks, which can run automatically against users based on the specified execution conditions. Automatic workflow scheduling is supported based on the employeeHireDate and employeeLeaveDateTime user attributes in Microsoft Entra ID.
+
+Workflows contain specific tasks, which can run automatically against users based on the specified execution conditions. Automatic workflow scheduling is supported based on the employeeHireDate and employeeLeaveDateTime user attributes in Microsoft Entra ID.
To take full advantage of Lifecycle Workflows, user provisioning should be automated, and the scheduling relevant attributes should be synchronized.

## Scheduling relevant attributes

The following table shows the scheduling (trigger) relevant attributes and the methods of synchronization that are supported.

|Attribute|Type|Supported in HR Inbound Provisioning|Support in Microsoft Entra Connect Cloud Sync|Support in Microsoft Entra Connect Sync|
This document explains how to set up synchronization from on-premises Microsoft
## Understanding EmployeeHireDate and EmployeeLeaveDateTime formatting
-The EmployeeHireDate and EmployeeLeaveDateTime contain dates and times that must be formatted in a specific way. This means that you may need to use an expression to convert the value of your source attribute to a format that will be accepted by the EmployeeHireDate or EmployeeLeaveDateTime. The table below outlines the format that is expected and provides an example expression on how to convert the values.
+The EmployeeHireDate and EmployeeLeaveDateTime contain dates and times that must be formatted in a specific way. This means that you may need to use an expression to convert the value of your source attribute to a format that will be accepted by the EmployeeHireDate or EmployeeLeaveDateTime. The following table outlines the format that is expected and provides an example expression on how to convert the values.
|Scenario|Expression/Format|Target|More Information|
|--|--|--|--|
-|Workday to Active Directory User Provisioning|FormatDateTime([StatusHireDate], , "yyyy-MM-ddzzz", "yyyyMMddHHmmss.fZ")|On-premises AD string attribute|[Attribute mappings for Workday](../saas-apps/workday-inbound-tutorial.md#below-are-some-example-attribute-mappings-between-workday-and-active-directory-with-some-common-expressions)|
+|Workday to Active Directory User Provisioning|FormatDateTime([StatusHireDate], "yyyy-MM-ddzzz", "yyyyMMddHHmmss.fZ")|On-premises AD string attribute|[Attribute mappings for Workday](../saas-apps/workday-inbound-tutorial.md#below-are-some-example-attribute-mappings-between-workday-and-active-directory-with-some-common-expressions)|
|SuccessFactors to Active Directory User Provisioning|FormatDateTime([endDate], ,"M/d/yyyy hh:mm:ss tt","yyyyMMddHHmmss.fZ")|On-premises AD string attribute|[Attribute mappings for SAP Success Factors](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md)|
|Custom import to Active Directory|Must be in the format "yyyyMMddHHmmss.fZ"|On-premises AD string attribute||
|Microsoft Graph User API|Must be in the format "YYYY-MM-DDThh:mm:ssZ"|EmployeeHireDate and EmployeeLeaveDateTime||
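The two target formats can be illustrated with a short conversion sketch. The sample date is arbitrary, and emitting the fractional-second digit as a literal 0 in the generalized-time string is a simplification:

```python
from datetime import datetime, timezone

hire = datetime(2025, 3, 1, 5, 0, 0, tzinfo=timezone.utc)

# On-premises AD string target: "yyyyMMddHHmmss.fZ" (generalized time);
# this sketch emits the single fractional digit as a literal 0.
ad_ds_value = hire.strftime("%Y%m%d%H%M%S") + ".0Z"
print(ad_ds_value)    # 20250301050000.0Z

# Microsoft Graph target: "YYYY-MM-DDThh:mm:ssZ"
graph_value = hire.strftime("%Y-%m-%dT%H:%M:%SZ")
print(graph_value)    # 2025-03-01T05:00:00Z
```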
The EmployeeHireDate and EmployeeLeaveDateTime contain dates and times that must
For more information on expressions, see [Reference for writing expressions for attribute mappings in Microsoft Entra ID](../app-provisioning/functions-for-customizing-application-data.md)
-The expression examples above use endDate for SAP and StatusHireDate for Workday. However, you may opt to use different attributes.
+The expression examples in the table use endDate for SAP and StatusHireDate for Workday. However, you may opt to use different attributes.
For example, you might use StatusContinuousFirstDayOfWork instead of StatusHireDate for Workday. In this instance your expression would be:
StatusOriginalHireDate|Workday|Joiner|EmployeeHireDate|
For more attributes, see the [Workday attribute reference](../app-provisioning/workday-attribute-reference.md) and [SAP SuccessFactors attribute reference](../app-provisioning/sap-successfactors-attribute-reference.md)
-
## Importance of time

To ensure timing accuracy of scheduled workflows it's crucial to consider:

- The time portion of the attribute must be set accordingly, for example the `employeeHireDate` should have a time at the beginning of the day like 1AM or 5AM and the `employeeLeaveDateTime` should have time at the end of the day like 9PM or 11PM
-- The Workflows won't run earlier than the time specified in the attribute, however the [tenant schedule (default 3h)](customize-workflow-schedule.md) may delay the workflow run. For instance, if you set the `employeeHireDate` to 8AM but the tenant schedule doesn't run until 9AM, the workflow won't be processed until then. If a new hire is starting at 8AM, you would want to set the time to something like (start time - tenant schedule) to ensure it had run before the employee arrives.
+- The Workflows won't run earlier than the time specified in the attribute, however the [tenant schedule (default 3h)](customize-workflow-schedule.md) may delay the workflow run. For instance, if you set the `employeeHireDate` to 8AM but the tenant schedule doesn't run until 9AM, the workflow won't be processed until then. If a new hire is starting at 8AM, you would want to set the time to something like (start time - tenant schedule) to ensure it runs before the employee arrives.
- It's recommended that, if you're using temporary access pass (TAP), you set the maximum lifetime to 24 hours. Doing this helps ensure that the TAP hasn't expired after being sent to an employee who may be in a different timezone. For more information, see [Configure Temporary Access Pass in Microsoft Entra ID to register Passwordless authentication methods.](../authentication/howto-authentication-temporary-access-pass.md#enable-the-temporary-access-pass-policy)
- When importing the data, you should understand if and how the source provides time zone information for your users to potentially make adjustments to ensure timing accuracy.
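The "(start time - tenant schedule)" rule of thumb can be expressed directly. The 8 AM start and 3-hour default schedule are the example values from the text:

```python
from datetime import datetime, timedelta

start_of_work = datetime(2025, 3, 3, 8, 0)   # new hire arrives at 8 AM
tenant_schedule = timedelta(hours=3)         # default tenant schedule interval

# Set employeeHireDate early enough that the workflow engine has a full
# schedule cycle to process it before the employee arrives.
employee_hire_date = start_of_work - tenant_schedule
print(employee_hire_date.isoformat())   # 2025-03-03T05:00:00
```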
-<a name='create-a-custom-sync-rule-in-azure-ad-connect-cloud-sync-for-employeehiredate'></a>
- ## Create a custom sync rule in Microsoft Entra Connect cloud sync for EmployeeHireDate
- The following steps will guide you through creating a synchronization rule using cloud sync.
+ The following steps guide you through creating a synchronization rule using cloud sync.
1. In the Microsoft Entra admin center, browse to > **Hybrid management** > **Microsoft Entra Connect**.
- 2. Select **Manage Microsoft Entra cloud sync**.
- 3. Under **Configuration**, select your configuration.
- 4. Select **Click to edit mappings**. This link opens the **Attribute mappings** screen.
- 5. Select **Add attribute**.
- 6. Fill in the following information:
+ 1. Select **Manage Microsoft Entra cloud sync**.
+ 1. Under **Configuration**, select your configuration.
+ 1. Select **Click to edit mappings**. This link opens the **Attribute mappings** screen.
+ 1. Select **Add attribute**.
+ 1. Fill in the following information:
- Mapping Type: Direct
- - Source attribute: extensionAttribute1
+ - Source attribute: msDS-cloudExtensionAttribute1
- Default value: Leave blank
- Target attribute: employeeHireDate
- Apply this mapping: Always
- 7. Select **Apply**.
- 8. Back on the **Attribute mappings** screen, you should see your new attribute mapping.
- 9. Select **Save schema**.
+ :::image type="content" source="media/how-to-lifecycle-workflow-sync-attributes/edit-cloud-attribute-mapping.png" alt-text="Screenshot of the cloud attribute mapping.":::
+ 1. Select **Apply**.
+ 1. Back on the **Attribute mappings** screen, you should see your new attribute mapping.
+ 1. Select **Save schema**.
For more information on attributes, see [Attribute mapping in Microsoft Entra Connect cloud sync.](../hybrid/cloud-sync/how-to-attribute-mapping.md)
-<a name='how-to-create-a-custom-sync-rule-in-azure-ad-connect-for-employeehiredate'></a>
- ## How to create a custom sync rule in Microsoft Entra Connect for EmployeeHireDate
-The following example will walk you through setting up a custom synchronization rule that synchronizes the Active Directory attribute to the employeeHireDate attribute in Microsoft Entra ID.
-
+The following example walks you through setting up a custom synchronization rule that synchronizes the Active Directory attribute to the employeeHireDate attribute in Microsoft Entra ID.
1. Open a PowerShell window as administrator and run `Set-ADSyncScheduler -SyncCycleEnabled $false` to disable the scheduler.
- 2. Go to Start\Azure AD Connect\ and open the Synchronization Rules Editor
- 3. Ensure the direction at the top is set to **Inbound**.
- 4. Select **Add Rule.**
- 5. On the **Create Inbound synchronization rule** screen, enter the following information and select **Next**.
 1. Go to Start\Azure AD Connect\ and open the Synchronization Rules Editor.
+ 1. Ensure the direction at the top is set to **Inbound**.
+ 1. Select **Add Rule**.
+ 1. On the **Create Inbound synchronization rule** screen, enter the following information and select **Next**.
- Name: In from AD - EmployeeHireDate
- Connected System: contoso.com
- Connected System Object Type: user
- Metaverse Object Type: person
- Precedence: 200

![Screenshot of creating an inbound synchronization rule basics.](media/how-to-lifecycle-workflow-sync-attributes/create-inbound-rule.png)
- 6. On the **Scoping filter** screen, select **Next.**
- 7. On the **Join rules** screen, select **Next**.
- 8. On the **Transformations** screen, Under **Add transformations,** enter the following information.
+ 1. On the **Scoping filter** screen, select **Next**.
+ 1. On the **Join rules** screen, select **Next**.
+ 1. On the **Transformations** screen, under **Add transformations**, enter the following information.
- FlowType: Direct
- Target Attribute: employeeHireDate
- Source: msDS-cloudExtensionAttribute1

![Screenshot of creating inbound synchronization rule transformations.](media/how-to-lifecycle-workflow-sync-attributes/create-inbound-rule-transformations.png)
- 9. Select **Add**.
- 10. In the Synchronization Rules Editor, ensure the direction at the top is set to **Outbound**.
- 11. Select **Add Rule.**
- 12. On the **Create Outbound synchronization rule** screen, enter the following information and select **Next**.
+ 1. Select **Add**.
+ 1. In the Synchronization Rules Editor, ensure the direction at the top is set to **Outbound**.
+ 1. Select **Add Rule**.
+ 1. On the **Create Outbound synchronization rule** screen, enter the following information and select **Next**.
- Name: Out to Microsoft Entra ID - EmployeeHireDate
- Connected System: &lt;your tenant&gt;
- Connected System Object Type: user
- Metaverse Object Type: person
- Precedence: 201
- 13. On the **Scoping filter** screen, select **Next.**
- 14. On the **Join rules** screen, select **Next**.
- 15. On the **Transformations** screen, Under **Add transformations,** enter the following information.
+ 1. On the **Scoping filter** screen, select **Next**.
+ 1. On the **Join rules** screen, select **Next**.
+ 1. On the **Transformations** screen, under **Add transformations**, enter the following information.
- FlowType: Direct
- Target Attribute: employeeHireDate
- Source: employeeHireDate

![Screenshot of create outbound synchronization rule transformations.](media/how-to-lifecycle-workflow-sync-attributes/create-outbound-rule-transformations.png)
- 16. Select **Add**.
- 17. Close the Synchronization Rules Editor
- 18. Enable the scheduler again by running `Set-ADSyncScheduler -SyncCycleEnabled $true`.
+ 1. Select **Add**.
+ 1. Close the Synchronization Rules Editor.
+ 1. Enable the scheduler again by running `Set-ADSyncScheduler -SyncCycleEnabled $true`.
> [!NOTE]
> **msDS-cloudExtensionAttribute1** is an example source.
The following example will walk you through setting up a custom synchronization
For more information, see [How to customize a synchronization rule](../hybrid/connect/how-to-connect-create-custom-sync-rule.md) and [Make a change to the default configuration.](../hybrid/connect/how-to-connect-sync-change-the-configuration.md)
+## Edit attribute mapping in the provisioning application
+
+Once you've set up your provisioning application, you can edit its attribute mappings. When the app is created, you get a list of default mappings between your HRM system and Active Directory. From there, you can either edit an existing mapping or add a new mapping.
+
+To update this mapping, follow these steps:
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Administrator](../roles/permissions-reference.md#global-administrator).
+
+1. Browse to **Identity** > **Applications** > **Enterprise applications**.
+
+1. Open your provisioned application.
+
+1. Select **Provisioning**, and then select **Edit attribute mapping**.
+
+1. Select **Show advanced options**, and then select the option to edit the attribute list for on-premises Active Directory.
+ :::image type="content" source="media/how-to-lifecycle-workflow-sync-attributes/edit-on-prem-attribute.png" alt-text="Screenshot of editing on-premises attribute.":::
+1. Add your source attributes, created as type String, and select the check box to mark each attribute as required.
+ :::image type="content" source="media/how-to-lifecycle-workflow-sync-attributes/edit-attribute-list.png" alt-text="Screenshot of source api list.":::
+ > [!NOTE]
+    > The number and names of the source attributes you add depend on which attributes you're syncing.
+1. Select **Save**.
+
+1. From there, you must map the HRM attributes to the added Active Directory attributes. To do this, add a new mapping that uses an expression.
+
+1. Your expression must match the formatting found in the [Understanding EmployeeHireDate and EmployeeLeaveDateTime formatting](how-to-lifecycle-workflow-sync-attributes.md#understanding-employeehiredate-and-employeeleavedatetime-formatting) section.
+ :::image type="content" source="media/how-to-lifecycle-workflow-sync-attributes/attribute-formatting-expression.png" alt-text="Screenshot of setting attribute format.":::
+1. Select **OK**.
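The conversion such a mapping expression performs can be sketched in plain code. The following is a hypothetical Python helper (not part of the article); the Generalized-Time input format shown is an assumption about how dates arrive from the source attribute:

```python
from datetime import datetime, timezone

def generalized_time_to_iso8601(value: str) -> str:
    """Convert an Active Directory Generalized-Time string
    (for example, '20230925080000.0Z') into the ISO 8601 UTC form
    that the employeeHireDate attribute expects."""
    parsed = datetime.strptime(value, "%Y%m%d%H%M%S.%fZ").replace(tzinfo=timezone.utc)
    return parsed.strftime("%Y-%m-%dT%H:%M:%SZ")

print(generalized_time_to_iso8601("20230925080000.0Z"))
# → 2023-09-25T08:00:00Z
```

The actual transformation happens in the provisioning service's expression, not in external code; this sketch only illustrates the before and after formats.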
<a name='how-to-verify-these-attribute-values-in-azure-ad'></a>
Get-MgUser -UserId "44198096-38ea-440d-9497-bb6b06bcaf9b" | Select-Object Displa
## Next steps

- [What are lifecycle workflows?](what-are-lifecycle-workflows.md)
- [Create a custom workflow using the Microsoft Entra admin center](tutorial-onboard-custom-workflow-portal.md)
+- [Configure API-driven inbound provisioning app (Public preview)](../app-provisioning/inbound-provisioning-api-configure-app.md)
- [Create a Lifecycle workflow](create-lifecycle-workflow.md)
active-directory Tutorial Prepare User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-prepare-user-accounts.md
$Department = "Sales"
$UPN_manager = "bsimon@<your tenant name here>"
Install-Module -Name Microsoft.Graph
-Connect-AzureAD -Confirm
+Connect-MgGraph -Scopes "User.ReadWrite.All"
$PasswordProfile = @{ Password = "<Password>" }
-New-AzureADUser -DisplayName $Displayname_manager -PasswordProfile $PasswordProfile -UserPrincipalName $UPN_manager -AccountEnabled $true -MailNickName $Name_manager -Department $Department
-New-AzureADUser -DisplayName $Displayname_employee -PasswordProfile $PasswordProfile -UserPrincipalName $UPN_employee -AccountEnabled $true -MailNickName $Name_employee -Department $Department
+New-MgUser -DisplayName $Displayname_manager -PasswordProfile $PasswordProfile -UserPrincipalName $UPN_manager -AccountEnabled -MailNickname $Name_manager -Department $Department
+New-MgUser -DisplayName $Displayname_employee -PasswordProfile $PasswordProfile -UserPrincipalName $UPN_employee -AccountEnabled -MailNickname $Name_employee -Department $Department
```

Once your users are successfully created in Microsoft Entra ID, you can proceed to follow the Lifecycle workflow tutorials for your workflow creation.
active-directory Multi Tenant Organization Configure Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/multi-tenant-organization-configure-graph.md
Previously updated : 08/22/2023 Last updated : 09/22/2023
If you instead want to use the Microsoft 365 admin center to configure a multi-t
![Icon for the owner tenant.](./media/common/icon-tenant-owner.png)<br/>**Owner tenant**

-- Microsoft Entra ID P1 or P2 license. For more information, see [License requirements](./multi-tenant-organization-overview.md#license-requirements).
+- For license information, see [License requirements](./multi-tenant-organization-overview.md#license-requirements).
- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings and templates for the multi-tenant organization.
- [Global Administrator](../roles/permissions-reference.md#global-administrator) role to consent to required permissions.

![Icon for the member tenant.](./media/common/icon-tenant-member.png)<br/>**Member tenant**

-- Microsoft Entra ID P1 or P2 license. For more information, see [License requirements](./multi-tenant-organization-overview.md#license-requirements).
+- For license information, see [License requirements](./multi-tenant-organization-overview.md#license-requirements).
- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings and templates for the multi-tenant organization.
- [Global Administrator](../roles/permissions-reference.md#global-administrator) role to consent to required permissions.
active-directory Multi Tenant Organization Configure Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/multi-tenant-organization-configure-templates.md
Previously updated : 08/22/2023 Last updated : 09/22/2023
This article describes how to configure a policy template for your multi-tenant
## Prerequisites

-- Microsoft Entra ID P1 or P2 license. For more information, see [License requirements](./multi-tenant-organization-overview.md#license-requirements).
+- For license information, see [License requirements](./multi-tenant-organization-overview.md#license-requirements).
- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings and templates for the multi-tenant organization.
- [Global Administrator](../roles/permissions-reference.md#global-administrator) role to consent to required permissions.
active-directory Howto Use Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-recommendations.md
Previously updated : 08/24/2023 Last updated : 09/21/2023
Some recommendations may require a P2 or other license. For more information, se
## How to read a recommendation
-To view the details of a recommendation:
+Most recommendations follow the same pattern. You're provided information about how the recommendation works, its value, and some action steps to address it. This section provides an overview of the details included in a recommendation, but isn't specific to any one recommendation.
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Reports Reader](../roles/permissions-reference.md#reports-reader).
-1. Browse to **Identity** > **Overview** > **Recommendations tab**
+1. Browse to **Identity** > **Overview** > **Recommendations tab**.
1. Select a recommendation from the list.

   ![Screenshot of the list of recommendations.](./media/howto-use-recommendations/recommendations-list.png)
active-directory Overview Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-recommendations.md
na Previously updated : 07/11/2023 Last updated : 09/21/2023 - # Customer intent: As a Microsoft Entra administrator, I want guidance to so that I can keep my Microsoft Entra tenant in a healthy state.
The Microsoft Entra recommendations feature is the Microsoft Entra specific impl
On a daily basis, Microsoft Entra ID analyzes the configuration of your tenant. During this analysis, Microsoft Entra ID compares the data of a recommendation with the actual configuration of your tenant. If a recommendation is flagged as applicable to your tenant, the recommendation appears in the **Recommendations** section of the Identity Overview area. The recommendations are listed in order of priority so you can quickly determine where to focus first.
-![Screenshot of the Overview page of the tenant with the Recommendations option highlighted.](./media/overview-recommendations/recommendations-preview-option-tenant-overview.png)
+![Screenshot of the Overview page of the tenant with the Recommendations option highlighted.](./media/overview-recommendations/recommendations-overview.png)
Each recommendation contains a description, a summary of the value of addressing the recommendation, and a step-by-step action plan. If applicable, impacted resources associated with the recommendation are listed, so you can resolve each affected area. If a recommendation doesn't have any associated resources, the impacted resource type is *Tenant level*, so your step-by-step action plan impacts the entire tenant and not just a specific resource.
The recommendations listed in the following table are currently available in pub
| [Migrate from ADAL to MSAL](recommendation-migrate-from-adal-to-msal.md) | Applications | All licenses | Generally available |
| [Migrate to Microsoft Authenticator](recommendation-migrate-to-authenticator.md) | Users | All licenses | Preview |
| [Minimize MFA prompts from known devices](recommendation-mfa-from-known-devices.md) | Users | All licenses | Generally available |
-| [Remove unused applications](recommendation-remove-unused-apps.md) | Applications | Microsoft Entra ID P2 | Preview |
-| [Remove unused credentials from applications](recommendation-remove-unused-credential-from-apps.md) | Applications | Microsoft Entra ID P2 | Preview |
-| [Renew expiring application credentials](recommendation-renew-expiring-application-credential.md) | Applications | Microsoft Entra ID P2 | Preview |
-| [Renew expiring service principal credentials](recommendation-renew-expiring-service-principal-credential.md) | Applications | Microsoft Entra ID P2 | Preview |
+| [Remove unused applications](recommendation-remove-unused-apps.md) | Applications | [Microsoft Entra Workload ID Premium](https://www.microsoft.com/security/business/identity-access/microsoft-entra-workload-id) | Preview |
+| [Remove unused credentials from applications](recommendation-remove-unused-credential-from-apps.md) | Applications | [Microsoft Entra Workload ID Premium](https://www.microsoft.com/security/business/identity-access/microsoft-entra-workload-id) | Preview |
+| [Renew expiring application credentials](recommendation-renew-expiring-application-credential.md) | Applications | [Microsoft Entra Workload ID Premium](https://www.microsoft.com/security/business/identity-access/microsoft-entra-workload-id) | Preview |
+| [Renew expiring service principal credentials](recommendation-renew-expiring-service-principal-credential.md) | Applications | [Microsoft Entra Workload ID Premium](https://www.microsoft.com/security/business/identity-access/microsoft-entra-workload-id) | Preview |
Microsoft Entra-only displays the recommendations that apply to your tenant, so you may not see all supported recommendations listed.
active-directory Recommendation Mfa From Known Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-mfa-from-known-devices.md
Previously updated : 03/07/2023 Last updated : 09/21/2023 -- # Microsoft Entra recommendation: Minimize MFA prompts from known devices
active-directory Recommendation Migrate Apps From Adfs To Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-apps-from-adfs-to-azure-ad.md
Previously updated : 03/25/2023 Last updated : 09/22/2023 -- # Microsoft Entra recommendation: Migrate apps from ADFS to Microsoft Entra ID
active-directory Recommendation Migrate From Adal To Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-from-adal-to-msal.md
Previously updated : 08/15/2023 Last updated : 09/21/2023
[Microsoft Entra recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices.
-This article covers the recommendation to migrate from the Azure Active Directory Library to the Microsoft Authentication Libraries. This recommendation is called `AdalToMsalMigration` in the recommendations API in Microsoft Graph.
+This article covers the recommendation to migrate from the Azure Active Directory Authentication Library (ADAL) to the Microsoft Authentication Libraries. This recommendation is called `AdalToMsalMigration` in the recommendations API in Microsoft Graph.
## Description
-The Azure Active Directory Authentication Library (ADAL) is currently slated for end-of-support on June 30, 2023. We recommend that customers migrate to Microsoft Authentication Libraries (MSAL), which replaces ADAL.
+ADAL is currently slated for end-of-support on June 30, 2023. We recommend that customers migrate to Microsoft Authentication Libraries (MSAL), which replaces ADAL.
This recommendation shows up if your tenant has applications that still use ADAL. The service marks any application in your tenant that makes a token request from the ADAL as an ADAL application. Applications that use both ADAL and MSAL are marked as ADAL applications.
You can use Microsoft Graph to identify apps that need to be migrated to MSAL. T
1. Select **GET** as the HTTP method from the dropdown.
1. Set the API version to **beta**.
1. Run the following query in Microsoft Graph, replacing the `<TENANT_ID>` placeholder with your tenant ID. This query returns a list of the impacted resources in your tenant.
+ - `https://graph.microsoft.com/beta/directory/recommendations/<TENANT_ID>_Microsoft.Identity.IAM.Insights.AdalToMsalMigration/impactedResources`
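The query URL embeds the tenant ID in the recommendation identifier. As a hypothetical illustration (helper name is an assumption, not part of the article), the URL can be built like this:

```python
def impacted_resources_url(tenant_id: str) -> str:
    """Build the beta Microsoft Graph URL that lists resources impacted
    by the AdalToMsalMigration recommendation for a given tenant."""
    recommendation_id = f"{tenant_id}_Microsoft.Identity.IAM.Insights.AdalToMsalMigration"
    return (
        "https://graph.microsoft.com/beta/directory/recommendations/"
        f"{recommendation_id}/impactedResources"
    )

print(impacted_resources_url("<TENANT_ID>"))
```

Issue a GET request against the resulting URL (for example, from Graph Explorer) with an access token that has the required recommendation-read permissions.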
-```http
-https://graph.microsoft.com/beta/directory/recommendations/<TENANT_ID>_Microsoft.Identity.IAM.Insights.AdalToMsalMigration/impactedResources
-```
The following response provides the details of the impacted resources using ADAL:
You can run the following set of commands in Windows PowerShell. These commands
## Frequently asked questions
+Review the following common questions as you work with the ADAL to MSAL recommendation.
### Why does it take 30 days to change the status to completed?

To reduce false positives, the service uses a 30-day window for ADAL requests. This way, the service can go several days without an ADAL request and not be falsely marked as completed.
active-directory Recommendation Migrate To Authenticator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-to-authenticator.md
Previously updated : 03/07/2023 Last updated : 09/21/2023 -- # Microsoft Entra recommendation: Migrate to Microsoft Authenticator (preview)
This article covers the recommendation to migrate users to the Microsoft Authent
## Description
-Multi-factor authentication (MFA) is a key component to improve the security posture of your Microsoft Entra tenant. While SMS text and voice calls were once commonly used for multi-factor authentication, they are becoming increasingly less secure. You also don't want to overwhelm your users with lots of MFA methods and messages.
+Multi-factor authentication (MFA) is a key component to improve the security posture of your Microsoft Entra tenant. While SMS text and voice calls were once commonly used for multi-factor authentication, they're becoming increasingly less secure. You also don't want to overwhelm your users with lots of MFA methods and messages.
One way to ease the burden on your users while also increasing the security of their authentication methods is to migrate anyone using SMS or voice call for MFA to use the Microsoft Authenticator app.
active-directory Recommendation Remove Unused Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-remove-unused-apps.md
Previously updated : 05/24/2023 Last updated : 09/21/2023 - # Microsoft Entra recommendation: Remove unused applications (preview) [Microsoft Entra recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices.
active-directory Recommendation Remove Unused Credential From Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-remove-unused-credential-from-apps.md
Previously updated : 03/07/2023 Last updated : 09/21/2023 - # Microsoft Entra recommendation: Remove unused credentials from apps (preview) [Microsoft Entra recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices.
active-directory Recommendation Renew Expiring Application Credential https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-renew-expiring-application-credential.md
Previously updated : 03/07/2023 Last updated : 09/21/2023 - # Microsoft Entra recommendation: Renew expiring application credentials (preview) [Microsoft Entra recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices.
Applications that the recommendation identified appear in the list of **Impacted
## Known limitations

-- Currently in the list of **Impacted resources**, only the app name and resource ID are shown. The key ID for the credential that needs to be rotated is not shown. To find the key ID credential, navigate back to **App registrations** > **Certificates and Secrets** for the application.
+- Currently in the list of **Impacted resources**, only the app name and resource ID are shown. The key ID for the credential that needs to be rotated isn't shown. To find the key ID credential, navigate back to **App registrations** > **Certificates and Secrets** for the application.
-- An **Impacted resource** with credentials that expired recently will be marked as **Complete**. If that resource has more than one credential expiring soon, the status of the resource will be **Active**.
+- An **Impacted resource** with credentials that expired recently is marked as **Complete**. If that resource has more than one credential expiring soon, the status of the resource is **Active**.
## Next steps
active-directory Recommendation Renew Expiring Service Principal Credential https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-renew-expiring-service-principal-credential.md
Previously updated : 03/07/2023 Last updated : 09/21/2023 - # Microsoft Entra recommendation: Renew expiring service principal credentials (preview)
This article covers the recommendation to renew expiring service principal crede
## Description
-A Microsoft Entra service principal is the local representation of an application object in a single tenant or directory. The service principal defines who can access an application and what resources the application can access. Authentication of service principals is often completed using certificate credentials, which have a lifespan. If the credentials expire, the application won't be able to authenticate with your tenant.
+A Microsoft Entra service principal is the local representation of an application object in a single tenant or directory. The service principal defines who can access an application and what resources the application can access. Authentication of service principals is often completed using certificate credentials, which have a lifespan. If the credentials expire, the application can't authenticate with your tenant.
This recommendation shows up if your tenant has service principals with credentials that will expire soon.
When renewing service principal credentials using Microsoft Graph, you need to r
## Known limitations

-- This recommendation identifies service principal credentials that are about to expire, so if they do expire, the recommendation doesn't distinguish between the credential expiring on its own or being addressed by the user.
+- This recommendation identifies service principal credentials that are about to expire. If they do expire, the recommendation doesn't distinguish between the credential expiring on its own or if you addressed it.
-- Service principal credentials that expire before the recommendation is completed will be marked complete by the system.
+- Service principal credentials that expire before the recommendation is completed are marked complete by the system.
- The recommendation currently doesn't display the password secret credential in service principal when you select an **Impacted resource** from the list.
active-directory Recommendation Turn Off Per User Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-turn-off-per-user-mfa.md
Previously updated : 03/03/2023 Last updated : 09/21/2023 -- # Microsoft Entra recommendation: Switch from per-user MFA to Conditional Access MFA [Microsoft Entra recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices.
-This article covers the recommendation to switch per-user multifactor authentication accounts to Conditional Access MFA accounts. This recommendation is called `switchFromPerUserMFA` in the recommendations API in Microsoft Graph.
+This article covers the recommendation to switch per-user multifactor authentication (MFA) accounts to Conditional Access MFA accounts. This recommendation is called `switchFromPerUserMFA` in the recommendations API in Microsoft Graph.
## Description

As an admin, you want to maintain security for your company's resources, but you also want your employees to easily access resources as needed. MFA enables you to enhance the security posture of your tenant.
-In your tenant, you can enable MFA on a per-user basis. In this scenario, your users perform MFA each time they sign in, with some exceptions, such as when they sign in from trusted IP addresses or when the remember MFA on trusted devices feature is turned on. While enabling MFA is a good practice, switching per-user MFA to MFA based on [Conditional Access](../conditional-access/overview.md) can reduce the number of times your users are prompted for MFA.
+In your tenant, you can enable MFA on a per-user basis. In this scenario, your users perform MFA each time they sign in. There are some exceptions, such as when they sign in from trusted IP addresses or when the remember MFA on trusted devices feature is turned on. While enabling MFA is a good practice, switching per-user MFA to MFA based on [Conditional Access](../conditional-access/overview.md) can reduce the number of times your users are prompted for MFA.
This recommendation shows up if:
active-directory Asana Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/asana-provisioning-tutorial.md
Title: 'Tutorial: Configure Asana for automatic user provisioning with Microsoft Entra ID'
-description: Learn how to automatically provision and de-provision user accounts from Microsoft Entra ID to Asana.
+description: Learn how to automatically provision and deprovision user accounts from Microsoft Entra ID to Asana.
writer: twimmers
# Tutorial: Configure Asana for automatic user provisioning
-This tutorial describes the steps you need to do in both Asana and Microsoft Entra ID to configure automatic user provisioning. When configured, Microsoft Entra ID automatically provisions and de-provisions users and groups to [Asana](https://www.asana.com/) using the Microsoft Entra provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Microsoft Entra ID](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to do in both Asana and Microsoft Entra ID to configure automatic user provisioning. When configured, Microsoft Entra ID automatically provisions and deprovisions users and groups to [Asana](https://www.asana.com/) using the Microsoft Entra provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Microsoft Entra ID](../app-provisioning/user-provisioning.md).
## Capabilities Supported
The Microsoft Entra provisioning service allows you to scope who will be provisi
* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5: Configure automatic user provisioning to Asana
This section guides you through the steps to configure the Microsoft Entra provi
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
1. Browse to **Identity** > **Applications** > **Enterprise applications**.
- ![Enterprise applications blade](common/enterprise-applications.png)
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
1. In the applications list, select **Asana**.
- ![The Asana link in the Applications list](common/all-applications.png)
+ ![Screenshot of the Asana link in the Applications list.](common/all-applications.png)
1. Select the **Provisioning** tab.
- ![Provisioning tab](common/provisioning.png)
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
1. Set the **Provisioning Mode** to **Automatic**.
- ![Provisioning tab automatic](common/provisioning-automatic.png)
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
1. In the **Admin Credentials** section, input your Asana Tenant URL and Secret Token provided by Asana. Click **Test Connection** to ensure Microsoft Entra ID can connect to Asana. If the connection fails, contact Asana to check your account setup.
- ![Token](common/provisioning-testconnection-tenanturltoken.png)
+ ![Screenshot of token.](common/provisioning-testconnection-tenanturltoken.png)
1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
- ![Notification Email](common/provisioning-notification-email.png)
+ ![Screenshot of notification email.](common/provisioning-notification-email.png)
1. Select **Save**.
1. In the **Mappings** section, select **Synchronize Microsoft Entra users to Asana**.
-1. Review the user attributes that are synchronized from Microsoft Entra ID to Asana in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Asana for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Asana API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+1. Review the user attributes that are synchronized from Microsoft Entra ID to Asana in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Asana for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Asana API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported for filtering|Required by Asana|
|---|---|---|---|
This section guides you through the steps to configure the Microsoft Entra provi
|name.formatted|String||
|title|String||
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference||
+ |addresses[type eq "work"].country|String||
+ |addresses[type eq "work"].region|String||
+ |addresses[type eq "work"].locality|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String||
1. Under the **Mappings** section, select **Synchronize Microsoft Entra groups to Asana**.
This section guides you through the steps to configure the Microsoft Entra provi
1. To enable the Microsoft Entra provisioning service for Asana, change the **Provisioning Status** to **On** in the **Settings** section.
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+ ![Screenshot of Provisioning status toggled on.](common/provisioning-toggle-on.png)
1. Define the users and groups that you would like to provision to Asana by choosing the appropriate values in **Scope** in the **Settings** section.
- ![Provisioning Scope](common/provisioning-scope.png)
+ ![Screenshot of Provisioning scope.](common/provisioning-scope.png)
1. When you're ready to provision, click **Save**.
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+ ![Screenshot of Saving provisioning configuration.](common/provisioning-configuration-save.png)
This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to execute than next cycles, which occur approximately every 40 minutes as long as the Microsoft Entra provisioning service is running.
Once you've configured provisioning, use the following resources to monitor your
* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
-* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Change log

* 11/06/2021 - Dropped support for **externalId, name.givenName and name.familyName**. Added support for **preferredLanguage, title and urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department**. Enabled **Group Provisioning**.
* 05/23/2023 - Dropped support for **preferredLanguage**. Added support for **urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager**.
+* 09/07/2023 - Added support for **addresses[type eq "work"].locality, addresses[type eq "work"].region, addresses[type eq "work"].country, phoneNumbers[type eq "work"].value, urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber, urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter,
+urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization and urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division**.
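For orientation, the attributes listed above span the SCIM core schema, the `addresses`/`phoneNumbers` multi-valued attributes, and the enterprise user extension. The following payload is purely illustrative (all values are invented, and the exact shape sent by the provisioning service may differ):

```json
{
  "schemas": [
    "urn:ietf:params:scim:schemas:core:2.0:User",
    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
  ],
  "userName": "b.simon@contoso.com",
  "active": true,
  "name": { "formatted": "B. Simon" },
  "title": "Engineer",
  "addresses": [
    { "type": "work", "country": "US", "region": "WA", "locality": "Redmond" }
  ],
  "phoneNumbers": [ { "type": "work", "value": "555-0100" } ],
  "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
    "department": "IT",
    "employeeNumber": "1001",
    "costCenter": "CC-42",
    "organization": "Contoso",
    "division": "Cloud",
    "manager": { "value": "26118915-6090-4610-87e4-49d8ca9f808d" }
  }
}
```

Note that `manager` is typed as a reference in the mapping table, which is why it carries a `value` sub-attribute rather than a plain string.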
## More resources
active-directory Foundu Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/foundu-tutorial.md
Previously updated : 11/21/2022 Last updated : 09/20/2023
Follow these steps to enable Microsoft Entra SSO.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
a. In the **Identifier** text box, type a URL using the following pattern: `https://<CUSTOMER_NAME>.foundu.com.au/saml`
Follow these steps to enable Microsoft Entra SSO.
b. In the **Reply URL** text box, type a URL using the following pattern: `https://<CUSTOMER_NAME>.foundu.com.au/saml/consume`
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<CUSTOMER_NAME>.foundu.com.au/saml/login`
+ c. In the **Logout URL** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_NAME>.foundu.com.au/saml/logout`
> [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [foundU Client support team](mailto:help@foundu.com.au) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Logout URL. Contact [foundU Client support team](mailto:help@foundu.com.au) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
Follow these steps to enable Microsoft Entra SSO.
### Create a Microsoft Entra test user
-In this section, you'll create a test user called B.Simon.
+In this section, you'll create a test user in the Azure portal called B.Simon.
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
1. Browse to **Identity** > **Users** > **All users**.
In this section, you'll create a test user called B.Simon.
### Assign the Microsoft Entra test user
-In this section, you'll enable B.Simon to use single sign-on by granting access to foundU.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to foundU.
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
1. Browse to **Identity** > **Applications** > **Enterprise applications** > **foundU**.
In this section, you'll enable B.Simon to use single sign-on by granting access
![Screenshot for foundU sso configuration](./media/foundu-tutorial/configuration-1.png)
- a. Copy **Identifier(Entity ID)** value, paste this value into the **Identifier** text box in the **Basic SAML Configuration section**.
+ a. Copy **Identifier(Entity ID)** value, paste this value into the **Identifier** text box in the **Basic SAML Configuration section** in the Azure portal.
- b. Copy **Reply URL (Assertion Consumer Service URL)** value, paste this value into the **Reply URL** text box in the **Basic SAML Configuration section**.
+ b. Copy **Reply URL (Assertion Consumer Service URL)** value, paste this value into the **Reply URL** text box in the **Basic SAML Configuration section** in the Azure portal.
- c. Copy **Logout URL** value, paste this value into the **Logout URL** text box in the **Basic SAML Configuration section**.
+ c. Copy **Logout URL** value, paste this value into the **Logout URL** text box in the **Basic SAML Configuration section** in the Azure portal.
- d. In the **Entity ID** textbox, paste the **Identifier** value which you copied previously.
+ d. In the **Entity ID** textbox, paste the **Identifier** value, which you have copied from the Azure portal.
- e. In the **Single Sign-on Service URL** textbox, paste the **Login URL** value which you copied previously.
+ e. In the **Single Sign-on Service URL** textbox, paste the **Login URL** value, which you have copied from the Azure portal.
- f. In the **Single Logout Service URL** textbox, paste the **Logout URL** value which you copied previously.
+ f. In the **Single Logout Service URL** textbox, paste the **Logout URL** value, which you have copied from the Azure portal.
g. Click **Choose File** to upload the downloaded **Certificate (Base64)** file from Azure portal.
In this section, you test your Microsoft Entra single sign-on configuration with
#### SP initiated:
-* Click on **Test this application**, this will redirect to foundU Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to foundU Sign on URL where you can initiate the login flow.
* Go to foundU Sign-on URL directly and initiate the login flow from there.

#### IDP initiated:
-* Click on **Test this application**, and you should be automatically signed in to the foundU for which you set up the SSO
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the foundU for which you set up the SSO
-You can also use Microsoft My Apps to test the application in any mode. When you click the foundU tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the foundU for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the foundU tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the foundU for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
active-directory Insite Lms Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/insite-lms-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2: Configure Insite LMS to support provisioning with Microsoft Entra ID

To generate the Secret Token:
-1. Login to [Insite LMS Admin Console](https://portal.insitelms.net/organization/applications).
-1. Navigate to **Self Hosted Jobs**. You find a job named "SCIM".
+1. Log in to [Insite LMS Console](https://portal.insitelms.net) with your Admin account.
+1. Navigate to the **Applications** module in the left-hand menu.
+1. In the section **Self hosted Jobs**, you'll find a job named "SCIM". If you can't find the job, contact the Insite LMS support team.
![Screenshot of generate API Key.](media/insite-lms-provisioning-tutorial/generate-api-key.png)

1. Click on **Generate Api Key**. Copy and save the **Api Key**. This value is entered in the **Secret Token** field in the Provisioning tab of your Insite LMS application.
->[!NOTE]
->The Access Token is only valid for 1 year.
+> [!NOTE]
+> The Api Key is only valid for 1 year and needs to be renewed manually before it expires.
<a name='step-3-add-insite-lms-from-the-azure-ad-application-gallery'></a>
active-directory Parallels Desktop Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/parallels-desktop-tutorial.md
Previously updated : 05/25/2023 Last updated : 09/21/2023
Complete the following steps to enable Microsoft Entra single sign-on.
## Configure Parallels Desktop SSO
-To configure single sign-on on **Parallels Desktop** side, you need to send the downloaded **Certificate (PEM)** and appropriate copied URLs from the application configuration to [Parallels Desktop support team](https://www.parallels.com/support/). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **Parallels Desktop** side, follow the latest version of Parallels' Azure SSO setup guide on [this page](https://kb.parallels.com/en/129240). If you encounter any difficulties throughout the setup process, contact the [Parallels Desktop support team](https://www.parallels.com/support/).
### Create Parallels Desktop test user
-In this section, you create a user called Britta Simon at Parallels Desktop. Work with [Parallels Desktop support team](https://www.parallels.com/support/) to add the users in the Parallels Desktop platform. Users must be created and activated before you use single sign-on.
+Add existing user accounts to the Admin or User groups on the Azure AD side, following Parallels' Azure SSO setup guide on [this page](https://kb.parallels.com/en/129240). When a user account is deactivated following the user's departure from the organization, that change is immediately reflected in the user count of the Parallels product license.
## Test SSO
ai-services Concept Analyze Document Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-analyze-document-response.md
A language element describes the detected language for content specified via spa
### Semantic elements
+> [!NOTE]
+> The semantic elements discussed here apply to Document Intelligence prebuilt models. Your custom models may return different data representations. For example, date and time returned by a custom model may be represented in a pattern that differs from standard ISO 8601 formatting.
+ #### Document A document is a semantically complete unit. A file may contain multiple documents, such as multiple tax forms within a PDF file, or multiple receipts within a single page. However, the ordering of documents within the file doesn't fundamentally affect the information it conveys.
A document is a semantically complete unit. A file may contain multiple documen
The document type describes documents sharing a common set of semantic fields, represented by a structured schema, independent of its visual template or layout. For example, all documents of type "receipt" may contain the merchant name, transaction date, and transaction total, although restaurant and hotel receipts often differ in appearance.
-A document element includes the list of recognized fields from among the fields specified by the semantic schema of the detected document type. A document field may be extracted or inferred. Extracted fields are represented via the extracted content and optionally its normalized value, if interpretable. Inferred fields don't have content property and are represented only via its value. Array fields don't include a content property, as the content can be concatenated from the content of the array elements. Object fields do contain a content property that specifies the full content representing the object, which may be a superset of the extracted subfields.
+A document element includes the list of recognized fields from among the fields specified by the semantic schema of the detected document type:
+
+* A document field may be extracted or inferred. Extracted fields are represented via the extracted content and optionally its normalized value, if interpretable.
+
+* An inferred field doesn't have a content property and is represented only via its value.
+
+* An array field doesn't include a content property. The content can be concatenated from the content of the array elements.
+
+* An object field does contain a content property that specifies the full content representing the object, which may be a superset of the extracted subfields.
The semantic schema of a document type is described via the fields it may contain. Each field schema is specified via its canonical name and value type. Field value types include basic (ex. string), compound (ex. address), and structured (ex. array, object) types. The field value type also specifies the semantic normalization performed to convert detected content into a normalization representation. Normalization may be locale dependent.
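The distinction between extracted and inferred fields can be sketched with an illustrative response fragment (field names and values are invented here; the actual schema depends on the model). An extracted field carries both the detected content and, optionally, a normalized value, while an inferred field carries only a value:

```json
{
  "docType": "receipt",
  "fields": {
    "TransactionDate": {
      "type": "date",
      "content": "Jan 2, 2023",
      "valueDate": "2023-01-02"
    },
    "Currency": {
      "type": "string",
      "valueString": "USD"
    }
  }
}
```

Here `TransactionDate` is extracted (content plus normalized ISO 8601 value), while `Currency` is inferred from context and therefore has no `content` property.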
ai-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/concepts/best-practices.md
Previously updated : 08/30/2023 Last updated : 09/22/2023
curl --request POST \
} } }'
-```
+```
+
+## Copy projects across language resources
+
+Often you can copy conversational language understanding projects from one resource to another using the **copy** button in Azure Language Studio. However, in some cases, it might be easier to copy projects using the API.
+
+First, identify the:
+ * source project name
+ * target project name
+ * source language resource
+ * target language resource, which is where you want to copy it to.
+
+Call the API to authorize the copy action, and get the `accessTokens` for the actual copy operation later.
+
+```console
+curl --request POST \
+ --url 'https://<target-language-resource>.cognitiveservices.azure.com/language/authoring/analyze-conversations/projects/<source-project-name>/:authorize-copy?api-version=2023-04-15-preview' \
+ --header 'Content-Type: application/json' \
+ --header 'Ocp-Apim-Subscription-Key: <Your-Subscription-Key>' \
+ --data '{"projectKind":"Conversation","allowOverwrite":false}'
+```
+
+Call the API to complete the copy operation. Use the response you got earlier as the payload.
+
+```console
+curl --request POST \
+ --url 'https://<source-language-resource>.cognitiveservices.azure.com/language/authoring/analyze-conversations/projects/<source-project-name>/:copy?api-version=2023-04-15-preview' \
+ --header 'Content-Type: application/json' \
+ --header 'Ocp-Apim-Subscription-Key: <Your-Subscription-Key>' \
+ --data '{
+"projectKind": "Conversation",
+"targetProjectName": "<target-project-name>",
+"accessToken": "<access-token>",
+"expiresAt": "<expiry-date>",
+"targetResourceId": "<target-resource-id>",
+"targetResourceRegion": "<target-region>"
+}'
+```
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
You can protect Azure OpenAI resources in [virtual networks and private endpoint
### Azure Cognitive Search resources
-If you have an Azure Cognitive Search resource protected by a private network, and want to allow Azure OpenAI on your data to access your search service, complete [an application form](https://aka.ms/applyacsvpnaoaionyourdata). The application will be reviewed in ten business days and you will be contacted via email about the results. If you are eligible, we will send a private endpoint request to your search service, and you will need to approve the request.
+If you have an Azure Cognitive Search resource protected by a private network, and want to allow Azure OpenAI on your data to access your search service, complete [an application form](https://aka.ms/applyacsvpnaoaioyd). The application will be reviewed in ten business days and you will be contacted via email about the results. If you are eligible, we will send a private endpoint request to your search service, and you will need to approve the request.
:::image type="content" source="../media/use-your-data/approve-private-endpoint.png" alt-text="A screenshot showing private endpoint approval screen." lightbox="../media/use-your-data/approve-private-endpoint.png":::
When customizing the app, we recommend:
##### Important considerations

-- Publishing creates an Azure App Service in your subscription. It may incur costs depending on the -
-[pricing plan](https://azure.microsoft.com/pricing/details/app-service/windows/) you select. When you're done with your app, you can delete it from the Azure portal.
+- Publishing creates an Azure App Service in your subscription. It may incur costs depending on the [pricing plan](https://azure.microsoft.com/pricing/details/app-service/windows/) you select. When you're done with your app, you can delete it from the Azure portal.
- By default, the app will only be accessible to you. To add authentication (for example, restrict access to the app to members of your Azure tenant): 1. Go to the [Azure portal](https://portal.azure.com/#home) and search for the app name you specified during publishing. Select the web app, and go to the **Authentication** tab on the left navigation menu. Then select **Add an identity provider**.
ai-services Setup Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/quickstarts/setup-platform.md
Title: Install the Speech SDK
-description: In this quickstart, you'll learn how to install the Speech SDK for your preferred programming language.
+description: In this quickstart, you learn how to install the Speech SDK for your preferred programming language.
Previously updated : 09/16/2022 Last updated : 09/05/2023 zone_pivot_groups: programming-languages-speech-sdk
zone_pivot_groups: programming-languages-speech-sdk
## Next steps
-* [Speech to text quickstart](../get-started-speech-to-text.md)
-* [Text to speech quickstart](../get-started-text-to-speech.md)
-* [Speech translation quickstart](../get-started-speech-translation.md)
+- [Speech to text quickstart](../get-started-speech-to-text.md)
+- [Text to speech quickstart](../get-started-text-to-speech.md)
+- [Speech translation quickstart](../get-started-speech-translation.md)
aks Monitor Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks.md
AKS generates the same kinds of monitoring data as other Azure resources that ar
| Source | Description |
|:---|:---|
| Platform metrics | [Platform metrics](monitor-aks-reference.md#metrics) are automatically collected for AKS clusters at no cost. You can analyze these metrics with [metrics explorer](../azure-monitor/essentials/metrics-getting-started.md) or use them for [metric alerts](../azure-monitor/alerts/alerts-types.md#metric-alerts). |
-| Prometheus metrics | Prometheus metrics are collected by [Azure Monitor managed service for Prometheus](../azure-monitor/essentials/prometheus-metrics-overview.md) and stored in an [Azure Monitor workspace](../azure-monitor/essentials/azure-monitor-workspace-overview.md). Analyze them with dashboards in [Azure Managed Grafana](../managed-grafan). |
+| Prometheus metrics | When you [enable metric scraping](../azure-monitor/containers/prometheus-metrics-enable.md) for your cluster, [Prometheus metrics](../azure-monitor/containers/prometheus-metrics-scrape-default.md) are collected by [Azure Monitor managed service for Prometheus](../azure-monitor/essentials/prometheus-metrics-overview.md) and stored in an [Azure Monitor workspace](../azure-monitor/essentials/azure-monitor-workspace-overview.md). Analyze them with [prebuilt dashboards](../azure-monitor/visualize/grafana-plugin.md#use-out-of-the-box-dashboards) in [Azure Managed Grafana](../managed-grafan). |
| Activity logs | [Activity log](monitor-aks-reference.md) is collected automatically for AKS clusters at no cost. These logs track information such as when a cluster is created or has a configuration change. Send the [Activity log to a Log Analytics workspace](../azure-monitor/essentials/activity-log.md#send-to-log-analytics-workspace) to analyze it with your other log data. |
| Resource logs | [Control plane logs](monitor-aks-reference.md#resource-logs) for AKS are implemented as resource logs. [Create a diagnostic setting](#resource-logs) to send them to a [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md) where you can analyze and alert on them with log queries in [Log Analytics](../azure-monitor/logs/log-analytics-overview.md). |
| Container insights | Container insights collects various logs and performance data from a cluster including stdout/stderr streams and stores them in a [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md) and [Azure Monitor Metrics](../azure-monitor/essentials/data-platform-metrics.md). Analyze this data with views and workbooks included with Container insights or with [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) and [metrics explorer](../azure-monitor/essentials/metrics-getting-started.md). |
aks Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/static-ip.md
This article shows you how to create a static public IP address and assign it to
## Before you begin
-* This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
* You need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
* This article covers using a *Standard* SKU IP with a *Standard* SKU load balancer. For more information, see [IP address types and allocation methods in Azure][ip-sku].
-## Create a static IP address
+## Create an AKS cluster
+
+1. Create an Azure resource group using the [`az group create`][az-group-create] command.
+
+ ```azurecli-interactive
+ az group create --name myNetworkResourceGroup --location eastus
+ ```
-1. Create a resource group for your IP address
+2. Create an AKS cluster using the [`az aks create`][az-aks-create] command.
```azurecli-interactive
- az group create --name myNetworkResourceGroup
+ az aks create --name myAKSCluster --resource-group myNetworkResourceGroup --generate-ssh-keys
```
-2. Use the [`az network public ip create`][az-network-public-ip-create] command to create a static public IP address. The following example creates a static IP resource named *myAKSPublicIP* in the *myNetworkResourceGroup* resource group.
+## Create a static IP address
+
+1. Create a static public IP address using the [`az network public ip create`][az-network-public-ip-create] command.
    ```azurecli-interactive
    az network public-ip create \
This article shows you how to create a static public IP address and assign it to
> [!NOTE] > If you're using a *Basic* SKU load balancer in your AKS cluster, use *Basic* for the `--sku` parameter when defining a public IP. Only *Basic* SKU IPs work with the *Basic* SKU load balancer and only *Standard* SKU IPs work with *Standard* SKU load balancers.
-3. After you create the static public IP address, use the [`az network public-ip list`][az-network-public-ip-list] command to get the IP address. Specify the name of the node resource group and public IP address you created, and query for the *ipAddress*.
+2. Get the name of the node resource group using the [`az aks show`][az-aks-show] command and query for the `nodeResourceGroup` property.
+
+ ```azurecli-interactive
+ az aks show --name myAKSCluster --resource-group myNetworkResourceGroup --query nodeResourceGroup -o tsv
+ ```
+
+3. Get the static public IP address using the [`az network public-ip list`][az-network-public-ip-list] command. Specify the name of the node resource group and public IP address you created, and query for the `ipAddress`.
```azurecli-interactive
- az network public-ip show --resource-group myNetworkResourceGroup --name myAKSPublicIP --query ipAddress --output tsv
+ az network public-ip show --resource-group <node resource group> --name myAKSPublicIP --query ipAddress --output tsv
    ```

## Create a service using the static IP address
-1. Before creating a service, use the [`az role assignment create`][az-role-assignment-create] command to ensure the cluster identity used by the AKS cluster has delegated permissions to the node resource group.
+1. Ensure the cluster identity used by the AKS cluster has delegated permissions to the node resource group using the [`az role assignment create`][az-role-assignment-create] command.
```azurecli-interactive
- CLIENT_ID=$(az aks show --name <cluster name> --resource-group <cluster resource group> --query identity.principalId -o tsv)
- RG_SCOPE=$(az group show --name myNetworkResourceGroup --query id -o tsv)
+ CLIENT_ID=$(az aks show --name myAKSCluster --resource-group myNetworkResourceGroup --query identity.principalId -o tsv)
+ RG_SCOPE=$(az group show --name <node resource group> --query id -o tsv)
    az role assignment create \
      --assignee ${CLIENT_ID} \
      --role "Network Contributor" \
This article shows you how to create a static public IP address and assign it to
2. Create a file named `load-balancer-service.yaml` and copy in the contents of the following YAML file, providing your own public IP address created in the previous step and the node resource group name. > [!IMPORTANT]
- > Adding the `loadBalancerIP` property to the load balancer YAML manifest is deprecating following [upstream Kubernetes](https://github.com/kubernetes/kubernetes/pull/107235). While current usage remains the same and existing services are expected to work without modification, we **highly recommend setting service annotations** instead. To set service annotations, you can use `service.beta.kubernetes.io/azure-load-balancer-ipv4` for an IPv4 address and `service.beta.kubernetes.io/azure-load-balancer-ipv6` for an IPv6 address.
+ > Adding the `loadBalancerIP` property to the load balancer YAML manifest is deprecating following [upstream Kubernetes](https://github.com/kubernetes/kubernetes/pull/107235). While current usage remains the same and existing services are expected to work without modification, we **highly recommend setting service annotations** instead. To set service annotations, you can use `service.beta.kubernetes.io/azure-load-balancer-ipv4` for an IPv4 address and `service.beta.kubernetes.io/azure-load-balancer-ipv6` for an IPv6 address, as shown in the example YAML.
    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      annotations:
- service.beta.kubernetes.io/azure-load-balancer-resource-group: myNetworkResourceGroup
+ service.beta.kubernetes.io/azure-load-balancer-resource-group: <node resource group>
+ service.beta.kubernetes.io/azure-load-balancer-ipv4: <public IP address>
      name: azure-load-balancer
    spec:
- loadBalancerIP: 40.121.183.52
      type: LoadBalancer
      ports:
      - port: 80
This article shows you how to create a static public IP address and assign it to
        app: azure-load-balancer
    ```
-3. Use the `kubectl apply` command to create the service and deployment.
+3. Set a public-facing DNS label for the service using the `service.beta.kubernetes.io/azure-dns-label-name` service annotation. This publishes a fully qualified domain name (FQDN) for your service using Azure's public DNS servers and top-level domain. The annotation value must be unique within the Azure location, so we recommend you use a sufficiently qualified label. Azure automatically appends a default suffix for the location you selected, such as `<location>.cloudapp.azure.com`, to the name you provide, creating the FQDN.
- ```console
- kubectl apply -f load-balancer-service.yaml
+ > [!NOTE]
+ > If you want to publish the service on your own domain, see [Azure DNS][azure-dns-zone] and the [external-dns][external-dns] project.
+
+ ```yaml
+ apiVersion: v1
+ kind: Service
+ metadata:
+ annotations:
+ service.beta.kubernetes.io/azure-load-balancer-resource-group: <node resource group>
+ service.beta.kubernetes.io/azure-load-balancer-ipv4: <public IP address>
+ service.beta.kubernetes.io/azure-dns-label-name: <unique-service-label>
+ name: azure-load-balancer
+ spec:
+ type: LoadBalancer
+ ports:
+ - port: 80
+ selector:
+ app: azure-load-balancer
```
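As an illustration of how the DNS label in step 3 becomes an FQDN, here's a minimal sketch; the label and location values are hypothetical, and the suffix pattern follows the `<location>.cloudapp.azure.com` rule stated above:

```python
def dns_label_fqdn(label: str, location: str) -> str:
    # Azure appends <location>.cloudapp.azure.com to the label you provide
    return f"{label}.{location}.cloudapp.azure.com"

# A label of "myserviceuniquelabel" in eastus publishes:
print(dns_label_fqdn("myserviceuniquelabel", "eastus"))
# → myserviceuniquelabel.eastus.cloudapp.azure.com
```

Because the label must be unique within the Azure location, prefer a qualified name (for example, prefixed with a team or project identifier) over a generic one.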
-## Apply a DNS label to the service
-
-If your service uses a dynamic or static public IP address, you can use the `service.beta.kubernetes.io/azure-dns-label-name` service annotation to set a public-facing DNS label. This publishes a fully qualified domain name (FQDN) for your service using Azure's public DNS servers and top-level domain. The annotation value must be unique within the Azure location, so it's recommended to use a sufficiently qualified label. Azure automatically appends a default suffix in the location you selected, such as `<location>.cloudapp.azure.com`, to the name you provide, creating the FQDN.
-
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
- annotations:
- service.beta.kubernetes.io/azure-dns-label-name: myserviceuniquelabel
- name: azure-load-balancer
-spec:
- type: LoadBalancer
- ports:
- - port: 80
- selector:
- app: azure-load-balancer
-```
+4. Create the service and deployment using the `kubectl apply` command.
-To see the DNS label for your load balancer, run the following command:
+ ```console
+ kubectl apply -f load-balancer-service.yaml
+ ```
-```console
-kubectl describe service azure-load-balancer
-```
+5. To see the DNS label for your load balancer, use the `kubectl describe service` command.
-The DNS label will be listed under the `Annotations`, as shown in the following condensed example output:
+ ```console
+ kubectl describe service azure-load-balancer
+ ```
-```console
-Name: azure-load-balancer
-Namespace: default
-Labels: <none>
-Annotations: service.beta.kuberenetes.io/azure-dns-label-name: myserviceuniquelabel
-...
-```
+ The DNS label will be listed under the `Annotations`, as shown in the following condensed example output:
-> [!NOTE]
-> To publish the service on your own domain, see [Azure DNS][azure-dns-zone] and the [external-dns][external-dns] project.
+ ```output
+ Name: azure-load-balancer
+ Namespace: default
+ Labels: <none>
+ Annotations: service.beta.kubernetes.io/azure-dns-label-name: <unique-service-label>
+ ```
## Troubleshoot
-If the static IP address defined in the *loadBalancerIP* property of the Kubernetes service manifest doesn't exist or hasn't been created in the node resource group and there are no additional delegations configured, the load balancer service creation fails. To troubleshoot, review the service creation events using the [`kubectl describe`][kubectl-describe] command. Provide the name of the service specified in the YAML manifest, as shown in the following example:
+If the static IP address defined in the `loadBalancerIP` property of the Kubernetes service manifest doesn't exist or hasn't been created in the node resource group and there are no other delegations configured, the load balancer service creation fails. To troubleshoot, review the service creation events using the [`kubectl describe`][kubectl-describe] command. Provide the name of the service specified in the YAML manifest, as shown in the following example:
```console
kubectl describe service azure-load-balancer
```
-The output will show you information about the Kubernetes service resource. The following example output shows a `Warning` in the `Events`: "`user supplied IP address was not found`." In this scenario, make sure you've created the static public IP address in the node resource group and that the IP address specified in the Kubernetes service manifest is correct.
+The output shows you information about the Kubernetes service resource. The following example output shows a `Warning` in the `Events`: "`user supplied IP address was not found`." In this scenario, make sure you created the static public IP address in the node resource group and that the IP address specified in the Kubernetes service manifest is correct.
-```console
+```output
Name:       azure-load-balancer
Namespace:  default
Labels:     <none>
Events:
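The failure mode above can be mimicked in a minimal sketch (IP values hypothetical): the requested `loadBalancerIP` must already exist among the public IPs in the node resource group, or provisioning fails with the warning shown in the events.

```python
def validate_load_balancer_ip(requested_ip: str, node_rg_public_ips: set) -> str:
    # Mirrors the provisioning check that produces the
    # "user supplied IP address was not found" warning event.
    if requested_ip not in node_rg_public_ips:
        raise ValueError("user supplied IP address was not found")
    return requested_ip

existing = {"40.121.183.52"}  # public IPs already created in the node resource group
print(validate_load_balancer_ip("40.121.183.52", existing))
# → 40.121.183.52
```

The practical takeaway: create the static public IP in the node resource group first, then reference its exact address in the service manifest.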
## Next steps
-For additional control over the network traffic to your applications, you may want to [create an ingress controller][aks-ingress-basic]. You can also [create an ingress controller with a static public IP address][aks-static-ingress].
+For more control over the network traffic to your applications, you may want to [create an ingress controller][aks-ingress-basic]. You can also [create an ingress controller with a static public IP address][aks-static-ingress].
<!-- LINKS - External --> [kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
[az-network-public-ip-list]: /cli/azure/network/public-ip#az_network_public_ip_list [aks-ingress-basic]: ingress-basic.md [aks-static-ingress]: ingress-static-ip.md
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[install-azure-cli]: /cli/azure/install-azure-cli [ip-sku]: ../virtual-network/ip-services/public-ip-addresses.md#sku [az-role-assignment-create]: /cli/azure/role/assignment#az-role-assignment-create [az-aks-show]: /cli/azure/aks#az-aks-show
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az-group-create]: /cli/azure/group#az-group-create
aks Use Group Managed Service Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-group-managed-service-accounts.md
You can either [grant access to your key vault for the identity after cluster cr
5. Open a web browser to the external IP address of the `gmsa-demo` service.
6. Authenticate with the `$NETBIOS_DOMAIN_NAME\$AD_USERNAME` and password and confirm you see `Authenticated as $NETBIOS_DOMAIN_NAME\$AD_USERNAME, Type of Authentication: Negotiate`.
+### Disable GMSA on an existing cluster
+
+* Disable GMSA on an existing cluster with Windows Server nodes using the [`az aks update`][az-aks-update] command.
+
+ ```azurecli-interactive
+ az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --disable-windows-gmsa
+ ```
+> [!NOTE]
+> You can re-enable GMSA on an existing cluster by using the [az aks update][az-aks-update] command.
+
## Troubleshooting

### No authentication is prompted when loading the page
api-management Front Door Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/front-door-api-management.md
In the following example, the same operation in the Demo Conference API is calle
## Restrict incoming traffic to API Management instance
-Use API Management policies to ensure that your API Management instance accepts traffic only from Azure Front Door. You can accomplish this restriction using one or both of the [following methods](../frontdoor/front-door-faq.yml#how-do-i-lock-down-the-access-to-my-backend-to-only-azure-front-door-):
+Use API Management policies to ensure that your API Management instance accepts traffic only from Azure Front Door. You can accomplish this restriction using one or both of the [following methods](../frontdoor/front-door-faq.yml#what-are-the-steps-to-restrict-the-access-to-my-backend-to-only-azure-front-door-):
1. Restrict incoming IP addresses to your API Management instances 1. Restrict traffic based on the value of the `X-Azure-FDID` header
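The second method can be sketched as a simple header check; the Front Door ID value below is hypothetical, and the real enforcement happens in an API Management `check-header` policy rather than application code:

```python
def allow_front_door_only(headers: dict, expected_fdid: str) -> bool:
    # Accept traffic only when the X-Azure-FDID header carries the ID
    # of your own Front Door profile; other callers are rejected.
    return headers.get("X-Azure-FDID") == expected_fdid

print(allow_front_door_only({"X-Azure-FDID": "abc-123"}, "abc-123"))
# → True
```

Combining both methods (IP restriction plus the `X-Azure-FDID` check) guards against traffic that bypasses Front Door and against requests relayed through another customer's Front Door profile.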
api-management Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/powershell-samples.md
- Title: Azure PowerShell samples-
-description: "Learn about the Azure PowerShell sample scripts available for Azure API Management, such as 'Add a user' and 'Import API'."
------- Previously updated : 10/09/2017----
-# Azure PowerShell samples for API Management
-
-The following table contains sample scripts for working with the API Management service from PowerShell.
-
-| Provision and manage | Description |
-| -- | -- |
-|[Add a user](./scripts/powershell-add-user-and-get-subscription-key.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates a user in API Management and gets a subscription key.|
-|[Create an APIM service](./scripts/powershell-create-apim-service.md?toc=%2fpowershell%2fmodule%2ftoc.json)|Creates a Developer SKU API Management Service.|
-|[Restore service](./scripts/powershell-backup-restore-apim-service.md?toc=%2fpowershell%2fmodule%2ftoc.json)|Backups and restores an APIM service.|
-|[Scale an APIM service](./scripts/powershell-scale-and-addregion-apim-service.md?toc=%2fpowershell%2fmodule%2ftoc.json)|Scales and adds region to the APIM service.|
-|[Set up custom domain](./scripts/powershell-setup-custom-domain.md?toc=%2fpowershell%2fmodule%2ftoc.json)|Sets up custom domain on proxy and portal endpoint of the API Management service.|
-|**Define API**| **Description** |
-|[Import API](./scripts/powershell-import-api-and-add-to-product.md?toc=%2fpowershell%2fmodule%2ftoc.json)|Imports an API and adds to an APIM product.|
-|**Secure**| **Description** |
-|[Secure backend](./scripts/powershell-secure-backend-with-mutual-certificate-authentication.md?toc=%2fpowershell%2fmodule%2ftoc.json)|Secures backend with mutual certificate authentication.|
-|**Protect**| **Description** |
-|[Set up rate limit policy](./scripts/powershell-setup-rate-limit-policy.md?toc=%2fpowershell%2fmodule%2ftoc.json)|Applies rate limit to policy at the product Level . |
api-management Powershell Add User And Get Subscription Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/scripts/powershell-add-user-and-get-subscription-key.md
- Title: Azure PowerShell Script Sample - Add a user | Microsoft Docs
-description: Learn how to add a user in API Management and get a subscription key. See a sample script and view additional available resources.
------- Previously updated : 11/16/2017----
-# Add a user
-
-This sample script creates a user in API Management and gets a subscription key.
---
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-
-## Sample script
-
-[!code-powershell[main](../../../powershell_scripts/api-management/add-user-and-get-subscription-key/add_a_user_and_get_a_subscriptionKey.ps1 "Add a user")]
-
-## Clean up resources
-
-When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group and all related resources.
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myResourceGroup
-```
-
-## Next steps
-
-For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
-
-Additional Azure PowerShell samples for Azure API Management can be found in the [PowerShell samples](../powershell-samples.md).
api-management Powershell Backup Restore Apim Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/scripts/powershell-backup-restore-apim-service.md
- Title: Azure PowerShell Script Sample - Backup and restore service | Microsoft Docs
-description: Learn how to backup and restore the API management service instance. See a sample script and view additional available resources.
------- Previously updated : 11/16/2017----
-# Backup and restore service
-
-The sample script in this article shows how to backup and restore the API Management service instance.
---
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-
-## Sample script
-
-[!code-powershell[main](../../../powershell_scripts/api-management/backup-restore-apim-service/backup_restore_apim_service.ps1 "Backup and restore the APIM service instance")]
-
-## Clean up resources
-
-When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group and all related resources.
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myResourceGroup
-```
-
-## Next steps
-
-For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
-
-Additional Azure PowerShell samples for Azure API Management can be found in the [PowerShell samples](../powershell-samples.md).
api-management Powershell Create Apim Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/scripts/powershell-create-apim-service.md
- Title: Azure PowerShell Script Sample - Create an APIM service | Microsoft Docs
-description: Learn how to create an API Management (APIM) service. See a sample script and view additional available resources.
------- Previously updated : 11/16/2017----
-# Create an API Management service
-
-This sample script creates a Developer SKU API Management Service.
---
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-
-## Sample script
-
-[!code-powershell[main](../../../powershell_scripts/api-management/create-apim-service/create_apim_service.ps1 "Create a service")]
-
-## Clean up resources
-
-When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group and all related resources.
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myResourceGroup
-```
-
-## Next steps
-
-For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
-
-Additional Azure PowerShell samples for Azure API Management can be found in the [PowerShell samples](../powershell-samples.md).
api-management Powershell Import Api And Add To Product https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/scripts/powershell-import-api-and-add-to-product.md
- Title: Azure PowerShell Script Sample - Import an API | Microsoft Docs
-description: Learn how to import an API and add it to an API Management product. See a sample script and view additional available resources.
------- Previously updated : 11/16/2017----
-# Import an API
-
-This sample script imports an API and adds it to an API Management product.
---
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-
-## Sample script
-
-[!code-powershell[main](../../../powershell_scripts/api-management/import-api-and-add-to-product/import_an_api_and_add_to_product.ps1 "Import an API")]
-
-## Clean up resources
-
-When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group and all related resources.
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myResourceGroup
-```
-
-## Next steps
-
-For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
-
-Additional Azure PowerShell samples for Azure API Management can be found in the [PowerShell samples](../powershell-samples.md).
api-management Powershell Scale And Addregion Apim Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/scripts/powershell-scale-and-addregion-apim-service.md
- Title: Azure PowerShell Script Sample - Scale the service instance | Microsoft Docs
-description: Learn how to scale and add regions to the API Management service instance. See a sample script and view additional available resources.
------- Previously updated : 11/16/2017----
-# Scale the service instance
-
-This sample script scales and adds region to the API Management service instance.
---
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-
-## Sample script
-
-[!code-powershell[main](../../../powershell_scripts/api-management/scale-and-addregion-apim-service/scale_and_addregion_apim_service.ps1 "Scale the APIM service instance")]
-
-## Clean up resources
-
-When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group and all related resources.
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myResourceGroup
-```
-
-## Next steps
-
-For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
-
-Additional Azure PowerShell samples for Azure API Management can be found in the [PowerShell samples](../powershell-samples.md).
api-management Powershell Secure Backend With Mutual Certificate Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/scripts/powershell-secure-backend-with-mutual-certificate-authentication.md
- Title: Azure PowerShell Script Sample - Secure back end | Microsoft Docs
-description: Learn how to use an Azure PowerShell script sample to secure backend with mutual certificate authentication.
------- Previously updated : 11/16/2017----
-# Secure back end
-
-This sample script secures backend with mutual certificate authentication.
---
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-
-## Sample script
-
-[!code-powershell[main](../../../powershell_scripts/api-management/secure-backend-with-mutual-certificate-authentication/secure_backend_with_mutual_certificate_authentication.ps1 "Secures backend")]
-
-## Clean up resources
-
-When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group and all related resources.
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myResourceGroup
-```
-
-## Next steps
-
-For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
-
-Additional Azure PowerShell samples for Azure API Management can be found in the [PowerShell samples](../powershell-samples.md).
api-management Powershell Setup Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/scripts/powershell-setup-custom-domain.md
- Title: Azure PowerShell Script Sample - Set up custom domain | Microsoft Docs
-description: Learn how to set up a custom domain on proxy or portal endpoints of the API management service. See sample scripts and view additional available resources.
------- Previously updated : 12/14/2017----
-# Set up custom domain
-
-This sample script sets up custom domain on proxy and portal endpoint of the API Management service.
---
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-
-## Sample script
-
-[!code-powershell[main](../../../powershell_scripts/api-management/setup-custom-domain/setup_custom_domain.ps1 "Set up custom domain")]
-
-## Clean up resources
-
-When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group and all related resources.
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myResourceGroup
-```
--
-## Next steps
-
-For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
-
-Additional Azure PowerShell samples for Azure API Management can be found in the [PowerShell samples](../powershell-samples.md).
api-management Powershell Setup Rate Limit Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/scripts/powershell-setup-rate-limit-policy.md
- Title: Azure PowerShell Script Sample - Set up rate limit policy | Microsoft Docs
-description: Learn how to set up rate limit policy with Azure PowerShell. See a sample script and view additional available resources.
------- Previously updated : 11/16/2017----
-# Set up rate limit policy
-
-This sample script sets up rate limit policy.
---
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-
-## Sample script
-
-[!code-powershell[main](../../../powershell_scripts/api-management/setup-rate-limit-policy/setup_rate_limit_policy.ps1 "Set up rate limit policy")]
-
-## Clean up resources
-
-When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group and all related resources.
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myResourceGroup
-```
-## Next steps
-
-For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
-
-Additional Azure PowerShell samples for Azure API Management can be found in the [PowerShell samples](../powershell-samples.md).
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md
# Update Management overview > [!Important]
> - Automation Update management relies on [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) (aka MMA agent), which is on a deprecation path and won't be supported after **August 31, 2024**. [Azure Update Manager (preview)](../../update-center/overview.md) (AUM) is the v2 version of Automation Update management and the future of Update management in Azure. AUM is a native service in Azure and does not rely on [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) or [Azure Monitor agent](../../azure-monitor/agents/agents-overview.md).
-> - Guidance for migrating from Automation Update management to Azure Update Manager (preview) will be provided to customers once the latter is Generally Available. For customers using Automation Update management, we recommend continuing to use the Log Analytics agent and **NOT** migrate to Azure Monitoring agent until migration guidance is provided for Azure Update manager or else Automation Update management will not work. Also, the Log Analytics agent would not be deprecated before moving all Automation Update management customers to Azure Update Manager (preview).
+> - Automation Update management relies on [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) (aka MMA agent), which is on a deprecation path and won't be supported after **August 31, 2024**.
+> - [Azure Update Manager](../../update-center/overview.md) (AUM) is the v2 version of Automation Update management and the future of Update management in Azure. AUM is a native service in Azure and does not rely on [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) or [Azure Monitor agent](../../azure-monitor/agents/agents-overview.md).
+> - Follow [guidance](../../update-center/guidance-migration-automation-update-management-azure-update-manager.md) to migrate machines and schedules from Automation Update Management to Azure Update Manager.
+> - If you are using Automation Update Management, we recommend that you continue to use the Log Analytics agent and *not* migrate to the Azure Monitor agent until machines and schedules are migrated to Azure Update Manager.
+> - The Log Analytics agent wouldn't be deprecated before moving all Automation Update Management customers to Update Manager.
You can use Update Management in Azure Automation to manage operating system updates for your Windows and Linux virtual machines in Azure, physical or VMs in on-premises environments, and in other cloud environments. You can quickly assess the status of available updates and manage the process of installing required updates for your machines reporting to Update Management.
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
Set up disaster recovery for your Automation accounts to handle a region-wide or
### Availability zones support for Azure Automation
-Azure Automation now supports [Azure availability zones](../reliability/availability-zones-overview.md#availability-zones) to provide improved resiliency and reliability by providing high availability to the service, runbooks, and other Automation assets. [Learn more](automation-availability-zones.md).
+Azure Automation now supports [Azure availability zones](../reliability/availability-zones-overview.md#zonal-and-zone-redundant-services) to provide improved resiliency and reliability by providing high availability to the service, runbooks, and other Automation assets. [Learn more](automation-availability-zones.md).
## July 2022
Users can now restore an Automation account deleted within 30 days. Read [here](
**Type:** New feature
-New scripts are added to the Azure Automation [GitHub organisation](https://github.com/azureautomation) to address one of Azure Automation's key scenarios of VM management based on Azure Monitor alert. For more information, see [Trigger runbook from Azure alert](./automation-create-alert-triggered-runbook.md#common-azure-vm-management-operations).
+New scripts are added to the Azure Automation [GitHub organization](https://github.com/azureautomation) to address one of Azure Automation's key scenarios of VM management based on Azure Monitor alert. For more information, see [Trigger runbook from Azure alert](./automation-create-alert-triggered-runbook.md#common-azure-vm-management-operations).
- [Stop-Azure-VM-On-Alert](https://github.com/azureautomation/Stop-Azure-VM-On-Alert) - [Restart-Azure-VM-On-Alert](https://github.com/azureautomation/Restart-Azure-VM-On-Alert)
azure-app-configuration Enable Dynamic Configuration Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-python.md
+
+ Title: Use dynamic configuration in Python (preview)
+
+description: Learn how to dynamically update configuration data for Python
+++
+ms.devlang: python
+ Last updated : 09/13/2023++
+#Customer intent: As a Python developer, I want to dynamically update my app to use the latest configuration data in App Configuration.
+
+# Tutorial: Use dynamic configuration in Python (preview)
+
+This tutorial shows how you can enable dynamic configuration updates in Python. It builds a script that uses the App Configuration provider library's built-in configuration caching and refreshing capabilities.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Set up your app to update its configuration in response to changes in an App Configuration store.
+
+> [!NOTE]
+> Requires [azure-appconfiguration-provider](https://pypi.org/project/azure-appconfiguration-provider/1.1.0b1/) package version 1.1.0b1 or later.
+
+## Prerequisites
+
+- An Azure subscription - [create one for free](https://azure.microsoft.com/free)
+- We assume you already have an App Configuration store. To create one, [create an App Configuration store](quickstart-aspnet-core-app.md).
+
+## Sentinel key
+
+A *sentinel key* is a key that you update after you complete the change of all other keys. Your app monitors the sentinel key. When a change is detected, your app refreshes all configuration values. This approach helps to ensure the consistency of configuration in your app and reduces the overall number of requests made to your App Configuration store, compared to monitoring all keys for changes.
+
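The sentinel-key pattern described above can be sketched independently of the SDK; here a plain dictionary stands in for an App Configuration store, and `RefreshableConfig` is a hypothetical helper, not part of the provider library:

```python
class RefreshableConfig:
    def __init__(self, store: dict, sentinel_key: str):
        self.store = store
        self.sentinel_key = sentinel_key
        self._sentinel = store[sentinel_key]
        self.values = dict(store)  # snapshot of all settings

    def refresh(self) -> bool:
        # Reload everything only when the sentinel key changed; monitoring a
        # single key keeps configuration consistent and request volume low.
        current = self.store[self.sentinel_key]
        if current == self._sentinel:
            return False
        self._sentinel = current
        self.values = dict(self.store)
        return True

store = {"message": "Hello World!", "Sentinel": "1"}
config = RefreshableConfig(store, "Sentinel")
store["message"] = "Hello World Updated!"   # change the settings first...
store["Sentinel"] = "2"                     # ...then bump the sentinel last
config.refresh()
print(config.values["message"])
# → Hello World Updated!
```

Updating the sentinel only after all other keys are written is what makes the snapshot consistent: a refresh either sees none of the changes or all of them.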
+## Reload data from App Configuration
+
+1. Create a new Python file named *app.py* and add the following code:
+
+ ```python
+ from azure.appconfiguration.provider import load, SentinelKey
+ from azure.appconfiguration import (
+ AzureAppConfigurationClient,
+ ConfigurationSetting,
+ )
+ import os
+ import time
+
+ connection_string = os.environ.get("APPCONFIGURATION_CONNECTION_STRING")
+
+ # Setting up a configuration setting with a known value
+ client = AzureAppConfigurationClient.from_connection_string(connection_string)
+
+ # Creating a configuration setting to be refreshed
+ configuration_setting = ConfigurationSetting(key="message", value="Hello World!")
+
+ # Creating a Sentinel key to monitor
+ sentinel_setting = ConfigurationSetting(key="Sentinel", value="1")
+
+ # Setting the configuration setting in Azure App Configuration
+ client.set_configuration_setting(configuration_setting=configuration_setting)
+ client.set_configuration_setting(configuration_setting=sentinel_setting)
+
+ # Connecting to Azure App Configuration using connection string, and refreshing when the configuration setting message changes
+ config = load(
+ connection_string=connection_string,
+ refresh_on=[SentinelKey("Sentinel")],
+ refresh_interval=1, # Default value is 30 seconds, shortened for this sample
+ )
+
+ # Printing the initial value
+ print(config["message"])
+ print(config["Sentinel"])
+
+ # Updating the configuration setting to a new value
+ configuration_setting.value = "Hello World Updated!"
+
+ # Updating the sentinel key to a new value, only after this is changed can a refresh happen
+ sentinel_setting.value = "2"
+
+ # Setting the updated configuration setting in Azure App Configuration
+ client.set_configuration_setting(configuration_setting=configuration_setting)
+ client.set_configuration_setting(configuration_setting=sentinel_setting) # Should always be done last to make sure all other keys are included in the refresh
+
+ # Waiting for the refresh interval to pass
+ time.sleep(2)
+
+ # Refreshing the configuration setting
+ config.refresh()
+
+ # Printing the updated value
+ print(config["message"])
+ print(config["Sentinel"])
+ ```
+
+1. Run your script:
+
+ ```cli
+ python app.py
+ ```
+
+1. Verify Output:
+++
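+
+   Given the `print` statements in the sample, the output should be:
+
+   ```output
+   Hello World!
+   1
+   Hello World Updated!
+   2
+   ```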
+## Next steps
+
+In this tutorial, you enabled your Python app to dynamically refresh configuration settings from App Configuration. To learn how to use an Azure managed identity to streamline the access to App Configuration, continue to the next tutorial.
+
+> [!div class="nextstepaction"]
+> [Managed identity integration](./howto-integrate-azure-managed-service-identity.md)
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version|
|--|--|--|--|--|
-|HPE Superdome Flex 280 | 1.23.5 | 1.22.0_2023-08-08 | 16.0.5100.7242 |Not validated|
+|HPE Superdome Flex 280 | 1.25.12 | 1.22.0_2023-08-08 | 16.0.5100.7242 |Not validated|
|HPE Apollo 4200 Gen10 Plus | 1.22.6 | 1.11.0_2022-09-13 |16.0.312.4243|12.3 (Ubuntu 12.3-1)|

### Kublr
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version|
|--|--|--|--|--|
-| TKGm 2.2 | 1.25.7 | 1.19.0_2023-05-09 | 16.0.937.6223 | 14.5 (Ubuntu 20.04)
-| TKGm 2.1.0 | 1.24.9 | 1.15.0_2023-01-10 | 16.0.816.19223 | 14.5 (Ubuntu 20.04)
-| TKGm 1.6.0 | 1.23.8 | 1.11.0_2022-09-13 | 16.0.312.4243 | 12.3 (Ubuntu 12.3-1)
-| TKGm 1.5.3 | 1.22.8 | 1.9.0_2022-07-12 | 16.0.312.4243 | 12.3 (Ubuntu 12.3-1)|
+|TKGm 2.3|1.26.5|1.23.0_2023-09-12|16.0.5100.7246|14.5 (Ubuntu 20.04)|
+|TKGm 2.2|1.25.7|1.19.0_2023-05-09|16.0.937.6223|14.5 (Ubuntu 20.04)|
+|TKGm 2.1.0|1.24.9|1.15.0_2023-01-10|16.0.816.19223|14.5 (Ubuntu 20.04)|
+|TKGm 1.6.0|1.23.8|1.11.0_2022-09-13|16.0.312.4243|12.3 (Ubuntu 12.3-1)|
+
### Wind River
More tests will be added in future releases of Azure Arc-enabled data services.
- To create a directly connected data controller, start with [Prerequisites to deploy the data controller in direct connectivity mode](create-data-controller-direct-prerequisites.md). ++
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
The following providers and their corresponding Kubernetes distributions have su
| Provider name | Distribution name | Version |
| ------------- | ----------------- | ------- |
| RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) | [4.9.43](https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html), [4.10.23](https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html), 4.11.0-rc.6, [4.13.4](https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html) |
-| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) |TKGm 2.2; upstream K8s v1.25.7+vmware.2 <br>TKGm 2.1.0; upstream K8s v1.24.9+vmware.1 <br>TKGm 1.6.0; upstream K8s v1.23.8+vmware.2 <br>TKGm 1.5.3; upstream K8s v1.22.8+vmware.1 <br>TKGm 1.4.0; upstream K8s v1.21.2+vmware.1 <br>TKGm 1.3.1; upstream K8s v1.20.5+vmware.2 <br>TKGm 1.2.1; upstream K8s v1.19.3+vmware.1 |
+| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) |TKGm 2.3; upstream K8s v1.26.5+vmware.2<br>TKGm 2.2; upstream K8s v1.25.7+vmware.2 <br>TKGm 2.1.0; upstream K8s v1.24.9+vmware.1 <br>TKGm 1.6.0; upstream K8s v1.23.8+vmware.2|
| Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes) | [1.24](https://ubuntu.com/kubernetes/docs/1.24/components), [1.28](https://ubuntu.com/kubernetes/docs/1.28/components) |
| SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.3.13](https://github.com/rancher/rke/releases/tag/v1.3.13); Kubernetes versions: 1.24.2, 1.23.8 |
| Nutanix | [Nutanix Kubernetes Engine](https://www.nutanix.com/products/kubernetes-engine) | Version [2.5](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_5:Nutanix-Kubernetes-Engine-v2_5); upstream K8s v1.23.11 |
The conformance tests run as part of the Azure Arc-enabled Kubernetes validation
* Learn about the [Azure Arc agents](conceptual-agent-overview.md) deployed on Kubernetes clusters when connecting them to Azure Arc. ++
azure-cache-for-redis Cache Tutorial Active Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-active-replication.md
+
+ Title: 'Tutorial: Get started using Azure Cache for Redis Enterprise active replication with an AKS-hosted application'
+description: In this tutorial, you learn how to connect your AKS hosted application to a cache that uses active geo-replication.
+++++ Last updated : 09/18/2023
+#CustomerIntent: As a developer, I want to see how to use an Enterprise cache that uses active geo-replication to capture data from two apps running against different caches in separate geo-locations.
+++
+# Get started using Azure Cache for Redis Enterprise active replication with an AKS-hosted application
+
+In this tutorial, you host an inventory application on Azure Kubernetes Service (AKS) and learn how you can use active geo-replication to replicate data in your Azure Cache for Redis Enterprise instances across Azure regions.
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- One Azure Kubernetes Service (AKS) cluster. For more information on creating a cluster, see [Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal](/azure/aks/learn/quick-kubernetes-deploy-portal). Alternatively, you can host the two instances of the demo application on two different AKS clusters. In a production environment, you would use two different clusters, located in the same regions as your caches, to deploy two versions of the application. For this tutorial, you deploy both instances of the application on the same AKS cluster.
+
+> [!IMPORTANT]
+> This tutorial assumes that you are familiar with basic Kubernetes concepts like containers, pods, and services.
+
+## Overview
+
+This tutorial uses a sample inventory page that shows three different T-shirt options. The user can "purchase" each T-shirt and see the inventory drop. The unique thing about this demo is that we run the inventory app in two different regions. Typically, you would have to run the database storing inventory data in a single region so that there are no consistency issues. With other database backends and synchronization, customers might have an unpleasant experience due to higher latency for calls across different Azure regions. When you use Azure Cache for Redis Enterprise as the backend, you can link two caches together with active geo-replication so that the inventory remains consistent across both regions while enjoying low-latency performance from Redis Enterprise in the same region.
+
+## Set up two Azure Cache for Redis instances
+
+1. Create a new Azure Cache for Redis Enterprise instance in **West US 2** region by using the Azure portal or your preferred CLI tool. Alternately, you can use any region of your choice. Use the [quickstart guide](quickstart-create-redis-enterprise.md) to get started.
+
+1. On the **Advanced** tab:
+
+ 1. Enable **Non-TLS access only**.
+ 1. Set **Clustering Policy** to **Enterprise**.
+ 1. Configure a new active geo-replication group using [this guide](cache-how-to-active-geo-replication.md). Eventually, you add both caches to the same replication group. Create the group name with the first cache, and add the second cache to the same group.
+
+ > [!IMPORTANT]
+ > This tutorial uses a non-TLS port for demonstration, but we highly recommend that you use a TLS port for anything in production.
+
+1. Set up another Azure Cache for Redis Enterprise in **East US** region with the same configuration as the first cache. Alternatively, you can use any region of your choice. Ensure that you choose the same replication group as the first cache.
+
+## Prepare Kubernetes deployment files
+
+Create two .yml files using the following procedure, one for each cache you created in the two regions.
+
+To demonstrate data replication across regions, we run two instances of the same application in different regions: one in Seattle (the west namespace) and the second in New York (the east namespace).
+
+### West namespace
+
+Update the following fields in the YAML file, and save it as _app_west.yaml_.
+
+1. Update the variable `REDIS_HOST` with the **Endpoint value**, after removing the `:10000` port suffix.
+1. Update `REDIS_PASSWORD` with the **Access key** of your _West US 2_ cache.
+1. Update `APP_LOCATION` to display the region where this application instance is running. For this cache, configure the `APP_LOCATION` to `Seattle` to indicate this application instance is running in Seattle.
+1. Verify that the variable `namespace` value is `west` in both places in the file.
+
+It should look like the following code:
+
+```YAML
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: shoppingcart-app
+ namespace: west
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: shoppingcart
+ template:
+ metadata:
+ labels:
+ app: shoppingcart
+ spec:
+ containers:
+ - name: demoapp
+ image: mcr.microsoft.com/azure-redis-cache/redisactivereplicationdemo:latest
+ resources:
+ limits:
+ cpu: "0.5"
+ memory: "250Mi"
+ requests:
+ cpu: "0.5"
+ memory: "128Mi"
+ env:
+ - name: REDIS_HOST
+ value: "DemoWest.westus2.redisenterprise.cache.azure.net"
+ - name: REDIS_PASSWORD
+ value: "myaccesskey"
+ - name: REDIS_PORT
+ value: "10000" # redis enterprise port
+ - name: HTTP_PORT
+ value: "8080"
+ - name: APP_LOCATION
+ value: "Seattle, WA"
+
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: shoppingcart-svc
+ namespace: west
+spec:
+ type: LoadBalancer
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 8080
+ selector:
+ app: shoppingcart
+```
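If you script the creation of these files, the `REDIS_HOST` value can be derived from the portal's **Endpoint value** by stripping the `:10000` port suffix. A minimal sketch (the helper name is illustrative, not part of the tutorial):

```python
def redis_host_from_endpoint(endpoint: str) -> str:
    # The portal shows "hostname:10000"; REDIS_HOST needs only the hostname.
    host, _, _port = endpoint.partition(":")
    return host

print(redis_host_from_endpoint("DemoWest.westus2.redisenterprise.cache.azure.net:10000"))
```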
+
+### East namespace
+
+Save another copy of the same YAML file as _app_east.yaml_. This time, use the values that correspond with your second cache.
+
+ 1. Update the variable `REDIS_HOST` with the **Endpoint value**, after removing the `:10000` port suffix.
+ 1. Update `REDIS_PASSWORD` with the **Access key** of your _East US_ cache.
+ 1. Update `APP_LOCATION` to display the region where this application instance is running. For this cache, configure the `APP_LOCATION` to _New York_ to indicate this application instance is running in New York.
+ 1. Verify that the variable `namespace` value is `east` in both places in the file.
+
+It should look like the following code:
+
+```YAML
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: shoppingcart-app
+ namespace: east
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: shoppingcart
+ template:
+ metadata:
+ labels:
+ app: shoppingcart
+ spec:
+ containers:
+ - name: demoapp
+ image: mcr.microsoft.com/azure-redis-cache/redisactivereplicationdemo:latest
+ resources:
+ limits:
+ cpu: "0.5"
+ memory: "250Mi"
+ requests:
+ cpu: "0.5"
+ memory: "128Mi"
+ env:
+ - name: REDIS_HOST
+ value: "DemoEast.eastus.redisenterprise.cache.azure.net"
+ - name: REDIS_PASSWORD
+ value: "myaccesskey"
+ - name: REDIS_PORT
+ value: "10000" # redis enterprise port
+ - name: HTTP_PORT
+ value: "8080"
+ - name: APP_LOCATION
+ value: "New York, NY"
+
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: shoppingcart-svc
+ namespace: east
+spec:
+ type: LoadBalancer
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 8080
+ selector:
+ app: shoppingcart
+```
+
+## Install and connect to your AKS cluster
+
+In this section, you first install the Kubernetes CLI and then connect to an AKS cluster.
+
+> [!NOTE]
+> An Azure Kubernetes Service Cluster is required for this tutorial. You deploy both instances of the application on the same AKS cluster.
+
+### Install the Kubernetes CLI
+
+Use the Kubernetes CLI, _kubectl_, to connect to the Kubernetes cluster from your local computer. If you're running locally, you can use the following command to install kubectl.
+
+```bash
+az aks install-cli
+```
+
+If you use Azure Cloud Shell, _kubectl_ is already installed, and you can skip this step.
+
+### Connect to your AKS cluster
+
+Use the portal to copy the resource group and cluster name for your AKS cluster in the West US 2 region. To configure _kubectl_ to connect to your AKS cluster, use the following command with your resource group and cluster name:
+
+```bash
+az aks get-credentials --resource-group myResourceGroup --name myClusterName
+```
+
+Verify that you're able to connect to your cluster by running the following command:
+
+```bash
+kubectl get nodes
+```
+
+You should see output similar to the following example, showing the list of your cluster nodes.
+
+```output
+NAME STATUS ROLES AGE VERSION
+aks-agentpool-21274953-vmss000001 Ready agent 1d v1.24.15
+aks-agentpool-21274953-vmss000003 Ready agent 1d v1.24.15
+aks-agentpool-21274953-vmss000006 Ready agent 1d v1.24.15
+```
+
+## Deploy and test your application
+
+You need two namespaces for your applications to run on your AKS cluster. Create the _west_ namespace and then deploy the application.
+
+Run the following command to deploy the application instance to your AKS cluster in the _west_ namespace:
+
+```bash
+kubectl create namespace west
+
+kubectl apply -f app_west.yaml
+```
+
+You get a response indicating your deployment and service were created:
+
+```output
+deployment.apps/shoppingcart-app created
+service/shoppingcart-svc created
+```
+
+To test the application, run the following command to check if the pod is running:
+
+```bash
+kubectl get pods -n west
+```
+
+You see your pod running successfully, as in this example:
+
+```output
+NAME READY STATUS RESTARTS AGE
+shoppingcart-app-5fffdcb5cd-48bl5 1/1 Running 0 68s
+```
+
+Run the following command to get the endpoint for your application:
+
+```bash
+kubectl get service -n west
+```
+
+You might see that the EXTERNAL-IP has status `<pending>` for a few minutes. Keep retrying until the status is replaced by an IP address.
+
+```output
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+shoppingcart-svc LoadBalancer 10.0.166.147 20.69.136.105 80:30390/TCP 90s
+```
+
+Once the External-IP is available, open a web browser to the External-IP address of your service and you see the application running as follows:
+
+<!-- screenshot for Seattle -->
+
+Run the same deployment steps to deploy an instance of the demo application in the East US region.
+
+```bash
+kubectl create namespace east
+
+kubectl apply -f app_east.yaml
+
+kubectl get pods -n east
+
+kubectl get service -n east
+```
+
+With two services opened in your browser, you should see that changing the inventory in one region is almost instantly reflected in the other region. The inventory data is stored in the Redis Enterprise instances that are replicating data across regions.
+
+You did it! Select the buttons and explore the demo. To reset the count, add `/reset` after the URL:
+
+ `<IP address>/reset`
+
+## Clean up your deployment
+
+To clean up your cluster, run the following commands:
+
+```bash
+kubectl delete deployment shoppingcart-app -n west
+kubectl delete service shoppingcart-svc -n west
+
+kubectl delete deployment shoppingcart-app -n east
+kubectl delete service shoppingcart-svc -n east
+```
++
+## Related content
+
+- [Tutorial: Connect to Azure Cache for Redis from your application hosted on Azure Kubernetes Service](cache-tutorial-aks-get-started.md)
+- [Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal](/azure/aks/learn/quick-kubernetes-deploy-portal)
+- [AKS sample voting application](https://github.com/Azure-Samples/azure-voting-app-redis/tree/master)
azure-functions Functions Target Based Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-target-based-scaling.md
This table summarizes the `host.json` values that are used for the _target execu
| Service Bus (Functions v2.x+, Single Dispatch) | extensions.serviceBus.messageHandlerOptions.maxConcurrentCalls | 16 |
| Service Bus (Functions v2.x+, Single Dispatch Sessions Based) | extensions.serviceBus.sessionHandlerOptions.maxConcurrentSessions | 2000 |
| Service Bus (Functions v2.x+, Batch Processing) | extensions.serviceBus.batchOptions.maxMessageCount | 1000 |
-| Event Hubs (Extension v5.x+) | extensions.eventHubs.maxEventBatchSize | 10 |
+| Event Hubs (Extension v5.x+) | extensions.eventHubs.maxEventBatchSize | 100<sup>1</sup> |
| Event Hubs (Extension v3.x+) | extensions.eventHubs.eventProcessorOptions.maxBatchSize | 10 |
| Event Hubs (if defined) | extensions.eventHubs.targetUnprocessedEventThreshold | n/a |
| Storage Queue | extensions.queues.batchSize | 16 |
+<sup>1</sup> The default `maxEventBatchSize` changed in [v6.0.0](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventHubs/6.0.0) of the `Microsoft.Azure.WebJobs.Extensions.EventHubs` package. In earlier versions, this was 10.
+
For Azure Cosmos DB, _target executions per instance_ is set in the function attribute:

| Extension | Function trigger setting | Default Value |
Modify the `host.json` setting `maxEventBatchSize` to set _target executions per
"version": "2.0",
"extensions": {
  "eventHubs": {
- "maxEventBatchSize" : 10
+ "maxEventBatchSize" : 100
}
}
}
When defined in `host.json`, `targetUnprocessedEventThreshold` is used as _targe
"version": "2.0",
"extensions": {
  "eventHubs": {
- "targetUnprocessedEventThreshold": 23
+ "targetUnprocessedEventThreshold": 153
}
}
}
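Target-based scaling derives the desired instance count by dividing the event source length by the _target executions per instance_ value. A rough sketch of that calculation (the function name and sample numbers are illustrative):

```python
import math

def desired_instances(event_source_length: int, target_executions_per_instance: int) -> int:
    # desired instances = ceil(event source length / target executions per instance)
    return math.ceil(event_source_length / target_executions_per_instance)

# With targetUnprocessedEventThreshold = 153 and 1,000 unprocessed events:
print(desired_instances(1000, 153))  # 7
```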
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
For more information, see the [Geolocation service] documentation.
### Render service
-[Render V2 service] introduces a new version of the [Get Map Tile V2 API] that supports using Azure Maps tiles not only in the Azure Maps SDKs but other map controls as well. It includes raster and vector tile formats, 256x256 or 512x512 tile sizes (where applicable) and numerous map types such as road, weather, contour, or map tiles. For a complete list, see [TilesetID] in the REST API documentation. It's recommended that you use Render V2 service instead of Render service V1. You're required to display the appropriate copyright attribution on the map anytime you use the Azure Maps Render V2 service, either as basemaps or layers, in any third-party map control. For more information, see [How to use the Get Map Attribution API].
+[Render service] introduces a new version of the [Get Map Tile] API that supports using Azure Maps tiles not only in the Azure Maps SDKs but also in other map controls. It includes raster and vector tile formats, 256x256 or 512x512 tile sizes (where applicable), and numerous map types such as road, weather, contour, or map tiles. For a complete list, see [TilesetID] in the REST API documentation. You're required to display the appropriate copyright attribution on the map anytime you use the Azure Maps Render service, either as basemaps or layers, in any third-party map control. For more information, see [How to use the Get Map Attribution API].
+
+> [!NOTE]
+>
+> **Azure Maps Render v1 service retirement**
+>
+> The Azure Maps [Render v1] service is now deprecated and will be retired on 9/17/26. To avoid service disruptions, all calls to Render v1 API will need to be updated to use [Render v2] API by 9/17/26.
### Route service
The Weather service offers API to retrieve weather information for a particular
Developers can use the [Get Weather along route API] to retrieve weather information along a particular route. Also, the service supports the generation of weather notifications for waypoints affected by weather hazards, such as flooding or heavy rain.
-The [Get Map Tile V2 API] allows you to request past, current, and future radar and satellite tiles.
+The [Get Map Tile] API allows you to request past, current, and future radar and satellite tiles.
![Example of map with real-time weather radar tiles](media/about-azure-maps/intro_weather.png)
Stay up to date on Azure Maps:
<! REST API Links > [Data service]: /rest/api/maps/data-v2 [Geolocation service]: /rest/api/maps/geolocation
-[Get Map Tile V2 API]: /rest/api/maps/render-v2/get-map-tile
+[Get Map Tile]: /rest/api/maps/render-v2/get-map-tile
[Get Weather along route API]: /rest/api/maps/weather/getweatheralongroute
-[Render V2 service]: /rest/api/maps/render-v2
+[Render service]: /rest/api/maps/render-v2
[REST APIs]: /rest/api/maps/ [Route service]: /rest/api/maps/route [Search service]: /rest/api/maps/search
azure-maps Create Data Source Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-android-sdk.md
Azure Maps adheres to the [Mapbox Vector Tile Specification], an open standard.
- [Road tiles] - [Traffic incidents] - [Traffic flow]-- Azure Maps Creator also allows custom vector tiles to be created and accessed through the [Render V2-Get Map Tile API]
+- Azure Maps Creator also allows custom vector tiles to be created and accessed through the [Render - Get Map Tile] API
> [!TIP] > When using vector or raster image tiles from the Azure Maps render service with the web SDK, you can replace `atlas.microsoft.com` with the placeholder `azmapsdomain.invalid`. This placeholder will be replaced with the same domain used by the map and will automatically append the same authentication details as well. This greatly simplifies authentication with the render service when using Azure Active Directory authentication.
See the following articles for more code samples to add to your maps:
[Road tiles]: /rest/api/maps/render-v2/get-map-tile [Traffic incidents]: /rest/api/maps/traffic/gettrafficincidenttile [Traffic flow]: /rest/api/maps/traffic/gettrafficflowtile
-[Render V2-Get Map Tile API]: /rest/api/maps/render-v2/get-map-tile
+[Render - Get Map Tile]: /rest/api/maps/render-v2/get-map-tile
<! External Links > [Mapbox Vector Tile Specification]: https://github.com/mapbox/vector-tile-spec
azure-maps Create Data Source Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-ios-sdk.md
Azure Maps adheres to the [Mapbox Vector Tile Specification], an open standard.
- [Road tiles] - [Traffic incidents] - [Traffic flow]-- Azure Maps Creator also allows custom vector tiles to be created and accessed through the [Render V2-Get Map Tile API]
+- Azure Maps Creator also allows custom vector tiles to be created and accessed through the [Render - Get Map Tile] API
> [!TIP] > When using vector or raster image tiles from the Azure Maps render service with the iOS SDK, you can replace `atlas.microsoft.com` with the `AzureMap`'s property' `domainPlaceholder`. This placeholder will be replaced with the same domain used by the map and will automatically append the same authentication details as well. This greatly simplifies authentication with the render service when using Azure Active Directory authentication.
See the following articles for more code samples to add to your maps:
[Road tiles]: /rest/api/maps/render-v2/get-map-tile [Traffic incidents]: /rest/api/maps/traffic/gettrafficincidenttile [Traffic flow]: /rest/api/maps/traffic/gettrafficflowtile
-[Render V2-Get Map Tile API]: /rest/api/maps/render-v2/get-map-tile
+[Render - Get Map Tile]: /rest/api/maps/render-v2/get-map-tile
<! External Links > [Mapbox Vector Tile Specification]: https://github.com/mapbox/vector-tile-spec
azure-maps Create Data Source Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-web-sdk.md
Azure Maps adheres to the [Mapbox Vector Tile Specification], an open standard.
* [Road tiles] * [Traffic incidents] * [Traffic flow]
-* Azure Maps Creator also allows custom vector tiles to be created and accessed through the [Render V2-Get Map Tile API]
+* Azure Maps Creator also allows custom vector tiles to be created and accessed through the [Render - Get Map Tile] API
> [!TIP] > When using vector or raster image tiles from the Azure Maps render service with the web SDK, you can replace `atlas.microsoft.com` with the placeholder `{azMapsDomain}`. This placeholder will be replaced with the same domain used by the map and will automatically append the same authentication details as well. This greatly simplifies authentication with the render service when using Azure Active Directory authentication.
See the following articles for more code samples to add to your maps:
[Line layer]: map-add-line-layer.md [Mapbox Vector Tile Specification]: https://github.com/mapbox/vector-tile-spec [Polygon layer]: map-add-shape.md
-[Render V2-Get Map Tile API]: /rest/api/maps/render-v2/get-map-tile
+[Render - Get Map Tile]: /rest/api/maps/render-v2/get-map-tile
[Road tiles]: /rest/api/maps/render-v2/get-map-tile [SourceManager]: /javascript/api/azure-maps-control/atlas.sourcemanager [Symbol layer]: map-add-pin.md
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
For more information, see the [Indoor maps wayfinding service] how-to article.
## Using indoor maps
-### Render V2-Get Map Tile API
+### Render - Get Map Tile API
-The Azure Maps [Render V2-Get Map Tile API] has been extended to support Creator tilesets.
+The Azure Maps [Render - Get Map Tile] API has been extended to support Creator tilesets.
-Applications can use the Render V2-Get Map Tile API to request tilesets. The tilesets can then be integrated into a map control or SDK. For an example of a map control that uses the Render V2 service, see [Indoor Maps Module].
+Applications can use the Render - Get Map Tile API to request tilesets. The tilesets can then be integrated into a map control or SDK. For an example of a map control that uses the Render service, see [Indoor Maps Module].
### Web Feature service API
As you begin to develop solutions for indoor maps, you can discover ways to inte
You can use the Azure Maps Creator List, Update, and Delete API to list, update, and delete your datasets, tilesets, and feature statesets. >[!NOTE]
->When you review a list of items to determine whether to delete them, consider the impact of that deletion on all dependent API or applications. For example, if you delete a tileset that's being used by an application by means of the [Render V2-Get Map Tile API], the application fails to render that tileset.
+>When you review a list of items to determine whether to delete them, consider the impact of that deletion on all dependent API or applications. For example, if you delete a tileset that's being used by an application by means of the [Render - Get Map Tile] API, the application fails to render that tileset.
### Example: Updating a dataset
The following example shows how to update a dataset, create a new tileset, and d
[Data maintenance]: #data-maintenance [feature statesets]: #feature-statesets [Indoor Maps module]: #indoor-maps-module
-[Render service]: #render-v2-get-map-tile-api
+[Render service]: #renderget-map-tile-api
[tilesets]: #tilesets [Upload a drawing package]: #upload-a-drawing-package
The following example shows how to update a dataset, create a new tileset, and d
[Feature State service]: /rest/api/maps/v2/feature-state [Feature State Update API]: /rest/api/maps/v2/feature-state/update-states [Geofence service]: /rest/api/maps/spatial/postgeofence
-[Render V2-Get Map Tile API]: /rest/api/maps/render-v2/get-map-tile
+[Render - Get Map Tile]: /rest/api/maps/render-v2/get-map-tile
[routeset]: /rest/api/maps/2023-03-01-preview/routeset [Style - Create]: /rest/api/maps/2023-03-01-preview/style/create [style]: /rest/api/maps/2023-03-01-preview/style
azure-maps How To Render Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-render-custom-data.md
To render a polygon with color and opacity:
5. Enter the following URL to the [Render service] (replace {`Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): ```HTTP
- https://atlas.microsoft.com/map/static/png?api-version=1.0&style=main&layer=basic&sku=S1&zoom=14&height=500&Width=500&center=-74.040701, 40.698666&path=lc0000FF|fc0000FF|lw3|la0.80|fa0.50||-74.03995513916016 40.70090237454063|-74.04082417488098 40.70028420372218|-74.04113531112671 40.70049568385827|-74.04298067092896 40.69899904076542|-74.04271245002747 40.69879568992435|-74.04367804527283 40.6980961582905|-74.04364585876465 40.698055487620714|-74.04368877410889 40.698022951066996|-74.04168248176573 40.696444909137|-74.03901100158691 40.69837271818651|-74.03824925422668 40.69837271818651|-74.03809905052185 40.69903971085914|-74.03771281242369 40.699340668780984|-74.03940796852112 40.70058515602143|-74.03948307037354 40.70052821920425|-74.03995513916016 40.70090237454063
+ https://atlas.microsoft.com/map/static/png?api-version=2022-08-01&style=main&layer=basic&sku=S1&zoom=14&height=500&Width=500&center=-74.040701, 40.698666&path=lc0000FF|fc0000FF|lw3|la0.80|fa0.50||-74.03995513916016 40.70090237454063|-74.04082417488098 40.70028420372218|-74.04113531112671 40.70049568385827|-74.04298067092896 40.69899904076542|-74.04271245002747 40.69879568992435|-74.04367804527283 40.6980961582905|-74.04364585876465 40.698055487620714|-74.04368877410889 40.698022951066996|-74.04168248176573 40.696444909137|-74.03901100158691 40.69837271818651|-74.03824925422668 40.69837271818651|-74.03809905052185 40.69903971085914|-74.03771281242369 40.699340668780984|-74.03940796852112 40.70058515602143|-74.03948307037354 40.70052821920425|-74.03995513916016 40.70090237454063
&subscription-key={Your-Azure-Maps-Subscription-key} ```
To render a circle and pushpins with custom labels:
5. Enter the following URL to the [Render service] (replace {`Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): ```HTTP
- https://atlas.microsoft.com/map/static/png?api-version=1.0&style=main&layer=basic&zoom=14&height=700&Width=700&center=-122.13230609893799,47.64599069048016&path=lcFF0000|lw2|la0.60|ra1000||-122.13230609893799 47.64599069048016&pins=default|la15+50|al0.66|lc003C62|co002D62||'Microsoft Corporate Headquarters'-122.14131832122801 47.64690503939462|'Microsoft Visitor Center'-122.136828 47.642224|'Microsoft Conference Center'-122.12552547454833 47.642940335653996|'Microsoft The Commons'-122.13687658309935 47.64452336193245&subscription-key={Your-Azure-Maps-Subscription-key}
+ https://atlas.microsoft.com/map/static/png?api-version=2022-08-01&style=main&layer=basic&zoom=14&height=700&Width=700&center=-122.13230609893799,47.64599069048016&path=lcFF0000|lw2|la0.60|ra1000||-122.13230609893799 47.64599069048016&pins=default|la15+50|al0.66|lc003C62|co002D62||'Microsoft Corporate Headquarters'-122.14131832122801 47.64690503939462|'Microsoft Visitor Center'-122.136828 47.642224|'Microsoft Conference Center'-122.12552547454833 47.642940335653996|'Microsoft The Commons'-122.13687658309935 47.64452336193245&subscription-key={Your-Azure-Maps-Subscription-key}
``` 6. Select **Send**.
To render a circle and pushpins with custom labels:
8. Next, change the color of the pushpins by modifying the `co` style modifier. If you look at the value of the `pins` parameter (`pins=default|la15+50|al0.66|lc003C62|co002D62|`), notice that the current color is `#002D62`. To change the color to `#41d42a`, replace `#002D62` with `#41d42a`. Now the `pins` parameter is `pins=default|la15+50|al0.66|lc003C62|co41D42A|`. The request looks like the following URL: ```HTTP
- https://atlas.microsoft.com/map/static/png?api-version=1.0&style=main&layer=basic&zoom=14&height=700&Width=700&center=-122.13230609893799,47.64599069048016&path=lcFF0000|lw2|la0.60|ra1000||-122.13230609893799 47.64599069048016&pins=default|la15+50|al0.66|lc003C62|co41D42A||'Microsoft Corporate Headquarters'-122.14131832122801 47.64690503939462|'Microsoft Visitor Center'-122.136828 47.642224|'Microsoft Conference Center'-122.12552547454833 47.642940335653996|'Microsoft The Commons'-122.13687658309935 47.64452336193245&subscription-key={Your-Azure-Maps-Subscription-key}
+ https://atlas.microsoft.com/map/static/png?api-version=2022-08-01&style=main&layer=basic&zoom=14&height=700&Width=700&center=-122.13230609893799,47.64599069048016&path=lcFF0000|lw2|la0.60|ra1000||-122.13230609893799 47.64599069048016&pins=default|la15+50|al0.66|lc003C62|co41D42A||'Microsoft Corporate Headquarters'-122.14131832122801 47.64690503939462|'Microsoft Visitor Center'-122.136828 47.642224|'Microsoft Conference Center'-122.12552547454833 47.642940335653996|'Microsoft The Commons'-122.13687658309935 47.64452336193245&subscription-key={Your-Azure-Maps-Subscription-key}
```
9. Select **Send**.
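The steps above compose the static-image URL by hand; the same request can be assembled programmatically. This is a minimal sketch (the `build_static_map_url` helper and the single-pin `pins` value are illustrative, not part of the tutorial), showing how the `co` style modifier carries the pin color:

```python
from urllib.parse import urlencode

def build_static_map_url(subscription_key, pin_color="002D62"):
    """Compose a Get Map Static Image request URL.

    pin_color is the hex value (no '#') passed to the `co` style modifier.
    """
    base = "https://atlas.microsoft.com/map/static/png"
    pins = (
        "default|la15+50|al0.66|lc003C62|co{}|"
        "|'Microsoft Visitor Center'-122.136828 47.642224"
    ).format(pin_color)
    params = {
        "api-version": "2022-08-01",
        "style": "main",
        "layer": "basic",
        "zoom": 14,
        "center": "-122.13230609893799,47.64599069048016",
        "pins": pins,
        "subscription-key": subscription_key,
    }
    return base + "?" + urlencode(params)

# Swap the default pin color for #41D42A, as in step 8 above.
url = build_static_map_url("{Your-Azure-Maps-Subscription-key}", pin_color="41D42A")
```

Sending a GET request to the resulting URL returns the rendered PNG, exactly as the Postman steps do.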
azure-maps How To Show Attribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-show-attribution.md
Title: Show the correct map copyright attribution information
-description: The map copyright attribution information must be displayed in all applications that use the Render V2 API, including web and mobile applications. This article discusses how to display the correct attribution every time you display or update a tile.
+description: The map copyright attribution information must be displayed in all applications that use the Render API, including web and mobile applications. This article discusses how to display the correct attribution every time you display or update a tile.
Last updated 3/16/2022
# Show the correct copyright attribution
-When using the Azure Maps [Render V2 service], either as a basemap or layer, you're required to display the appropriate data provider copyright attribution on the map. This information should be displayed in the lower right-hand corner of the map.
+When using the Azure Maps [Render service], either as a basemap or layer, you're required to display the appropriate data provider copyright attribution on the map. This information should be displayed in the lower right-hand corner of the map.
-The above image is an example of a map from the Render V2 service, displaying the road style. It shows the copyright attribution in the lower right-hand corner of the map.
+The above image is an example of a map from the Render service, displaying the road style. It shows the copyright attribution in the lower right-hand corner of the map.
-The above image is an example of a map from the Render V2 service, displaying the satellite style. note that there's another data provider listed.
+The above image is an example of a map from the Render service, displaying the satellite style. Notice that there's another data provider listed.
## The Get Map Attribution API
The [Get Map Attribution API] enables you to request map copyright attribution i
### When to use the Get Map Attribution API
-The map copyright attribution information must be displayed on the map in any applications that use the Render V2 API, including web and mobile applications.
+The map copyright attribution information must be displayed on the map in any applications that use the Render API, including web and mobile applications.
The attribution is automatically displayed and updated on the map when using any of the Azure Maps SDKs, including the [Web], [Android], and [iOS] SDKs.
Since the data providers can differ depending on the *region* and *zoom* level,
You need the following information to run the `attribution` command:
-| Parameter | Type | Description |
-| -- | | -- |
-| api-version | string | Version number of Azure Maps API. Current version is 2.1 |
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| api-version | string | Version number of Azure Maps API. |
| bounds | array | A string that represents the rectangular area of a bounding box. The bounds parameter is defined by the four bounding box coordinates. The first two are the WGS84 longitude and latitude defining the southwest corner and the last two are the WGS84 longitude and latitude defining the northeast corner. The string is presented in the following format: [SouthwestCorner_Longitude, SouthwestCorner_Latitude, NortheastCorner_Longitude, NortheastCorner_Latitude]. |
| tilesetId | TilesetID | A tileset is a collection of raster or vector data broken up into a uniform grid of square tiles at preset zoom levels. Every tileset has a tilesetId to use when making requests. The tilesetId for tilesets created using Azure Maps Creator are generated through the [Tileset Create API]. There are ready-to-use tilesets supplied by Azure Maps, such as `microsoft.base.road`, `microsoft.base.hybrid` and `microsoft.weather.radar.main`; a complete list can be found in the [Get Map Attribution] REST API documentation. |
| zoom | integer | Zoom level for the selected tile. The valid range depends on the tile; see the [TilesetID] table for valid values for a specific tileset. For more information, see the [Zoom levels and tile grid] article. |
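The parameters in the table can be assembled into a request URL as follows. This is a minimal sketch: the `attribution_request` helper is hypothetical, and the `api-version` value shown is an assumption to verify against the Get Map Attribution reference.

```python
from urllib.parse import urlencode

def attribution_request(tileset_id, zoom, bounds, subscription_key):
    """Build a Get Map Attribution request URL.

    bounds is (sw_lon, sw_lat, ne_lon, ne_lat) in WGS84, serialized as the
    comma-separated string the bounds parameter expects.
    """
    if not (-90 <= bounds[1] <= 90 and -90 <= bounds[3] <= 90):
        raise ValueError("latitudes out of range")
    params = {
        "api-version": "2022-08-01",   # assumed current version; confirm in the docs
        "tilesetId": tileset_id,
        "zoom": zoom,
        "bounds": ",".join(str(v) for v in bounds),
        "subscription-key": subscription_key,
    }
    return "https://atlas.microsoft.com/map/attribution?" + urlencode(params)

# Example: attribution for the road tileset over a Seattle-area bounding box.
url = attribution_request("microsoft.base.road", 6,
                          (-122.414, 47.579, -122.247, 47.668),
                          "{Your-Azure-Maps-Subscription-key}")
```

A GET request to the resulting URL returns the copyright strings to display on the map for that region and zoom level.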
https://atlas.microsoft.com/map/attribution?subscription-key={Your-Azure-Maps-Su
## Additional information
-* For more information, see the [Render V2 service] documentation.
+* For more information, see the [Render service] documentation.
[Android]: how-to-use-android-map-control-library.md [Authentication with Azure Maps]: azure-maps-authentication.md [Get Map Attribution API]: /rest/api/maps/render-v2/get-map-attribution [Get Map Attribution]: /rest/api/maps/render-v2/get-map-attribution#tilesetid [iOS]: how-to-use-ios-map-control-library.md
-[Render V2 service]: /rest/api/maps/render-v2
+[Render service]: /rest/api/maps/render-v2
[Tileset Create API]: /rest/api/maps/v2/tileset/create [TilesetID]: /rest/api/maps/render-v2/get-map-attribution#tilesetid [Web]: how-to-use-map-control.md
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
Learn more about migrating from Bing Maps to Azure Maps.
[Popups on Shapes]: https://samples.azuremaps.com/?sample=popups-on-shapes [Pushpin clustering]: #pushpin-clustering [Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins
-[road tiles]: /rest/api/maps/render/getmaptile
+[road tiles]: /rest/api/maps/render-v2/get-map-tile
[satellite tiles]: /rest/api/maps/render/getmapimagerytile [Setting the map view]: #setting-the-map-view [Shared Key authentication]: azure-maps-authentication.md#shared-key-authentication
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
Learn more about the Azure Maps REST services.
[Manage the pricing tier of your Azure Maps account]: how-to-manage-pricing-tier.md [Map image render]: /rest/api/maps/render/getmapimagerytile [Map imagery tile]: /rest/api/maps/render/getmapimagerytile
-[Map Tiles]: /rest/api/maps/render/getmaptile
+[Map Tiles]: /rest/api/maps/render-v2/get-map-tile
[nearby search]: /rest/api/maps/search/getsearchnearby [NetTopologySuite]: https://github.com/NetTopologySuite/NetTopologySuite [POI category search]: /rest/api/maps/search/get-search-poi-category
Learn more about the Azure Maps REST services.
[POST Route directions]: /rest/api/maps/route/postroutedirections [quadtree tile pyramid math]: zoom-levels-and-tile-grid.md [Render custom data on a raster map]: how-to-render-custom-data.md
-[Render]: /rest/api/maps/render/getmapimage
+[Render]: /rest/api/maps/render-v2/get-map-static-image
[Route directions]: /rest/api/maps/route/getroutedirections [Route Matrix]: /rest/api/maps/route/postroutematrixpreview [Route Range]: /rest/api/maps/route/getrouterange
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
Learn more about migrating to Azure Maps:
[Popup with Media Content]: https://samples.azuremaps.com/?sample=popup-with-media-content [Popups on Shapes]: https://samples.azuremaps.com/?sample=popups-on-shapes [Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins
-[road tiles]: /rest/api/maps/render/getmaptile
+[road tiles]: /rest/api/maps/render-v2/get-map-tile
[satellite tiles]: /rest/api/maps/render/getmapimagerytile [Search Autosuggest with JQuery UI]: https://samples.azuremaps.com/?sample=search-autosuggest-and-jquery-ui [Search for points of interest]: map-search-location.md
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md
Learn more about Azure Maps REST
[manage authentication in Azure Maps]: how-to-manage-authentication.md [Map image render]: /rest/api/maps/render/getmapimagerytile [Map imagery tile]: /rest/api/maps/render/getmapimagerytile
-[Map tile]: /rest/api/maps/render/getmaptile
+[Map tile]: /rest/api/maps/render-v2/get-map-tile
[Nearby search]: /rest/api/maps/search/getsearchnearby [npm package]: https://www.npmjs.com/package/azure-maps-rest [NuGet package]: https://www.nuget.org/packages/AzureMapsRestToolkit [POI category search]: /rest/api/maps/search/getsearchpoicategory [POI search]: /rest/api/maps/search/getsearchpoi [Render custom data on a raster map]: how-to-render-custom-data.md
-[Render]: /rest/api/maps/render/getmapimage
+[Render]: /rest/api/maps/render-v2/get-map-static-image
[Reverse geocode a coordinate]: #reverse-geocode-a-coordinate [Route Matrix]: /rest/api/maps/route/postroutematrixpreview [Route]: /rest/api/maps/route
azure-maps Render Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/render-coverage.md
description: Render coverage tables list the countries/regions that support Azure Maps road tiles. Previously updated : 03/23/2022 Last updated : 09/21/2023
# Azure Maps render coverage
-The render coverage tables below list the countries/regions that support Azure Maps road tiles. Both raster and vector tiles are supported. At the lowest resolution, the entire world fits in a single tile. At the highest resolution, a single tile represents 38 square meters. You'll see more details about continents, regions, cities, and individual streets as you zoom in the map. For more information about tiles, see [Zoom levels and tile grid](zoom-levels-and-tile-grid.md).
+The render coverage tables below list the countries/regions that support Azure Maps road tiles. Both raster and vector tiles are supported. At the lowest resolution, the entire world fits in a single tile. At the highest resolution, a single tile represents 38 square meters. You'll see more details about continents, regions, cities, and individual streets as you zoom in the map. For more information about tiles, see [Zoom levels and tile grid].
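The tile-grid arithmetic behind these zoom levels can be sketched as follows; a minimal illustration assuming the standard Web Mercator grid, where zoom 0 is one world-spanning tile and each zoom level doubles the tile count per axis:

```python
EARTH_CIRCUMFERENCE_M = 40_075_016.686  # WGS84 equatorial circumference, meters

def tile_width_meters(zoom):
    """Ground width covered by one tile at the equator for a given zoom.

    At zoom 0 a single tile spans the whole world; each additional zoom
    level halves the tile's ground width (so quarters its area).
    """
    return EARTH_CIRCUMFERENCE_M / (2 ** zoom)

world = tile_width_meters(0)    # whole world in one tile
street = tile_width_meters(19)  # street-level detail, tens of meters wide
```

The exact ground size at the highest zoom depends on the tileset's maximum zoom and latitude; see [Zoom levels and tile grid] for the authoritative table.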
+
+> [!NOTE]
+>
+> **Azure Maps Render v1 service retirement**
+>
+> The Azure Maps [Render v1] service is now deprecated and will be retired on 9/17/26. To avoid service disruptions, update all calls to the Render v1 API to use the [Render v2] API by that date.
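For many requests, the migration starts with moving to the Render v2 api-version. This sketch is illustrative only; it assumes the v1 call used `api-version=1.0` and only swaps the version string, so consult the Render v2 reference for endpoint-path and parameter differences before relying on it:

```python
def migrate_render_url(v1_url):
    """Rewrite a Render v1 request URL toward the Render v2 form.

    Illustrative sketch: only swaps the api-version query value; endpoint
    paths and parameters may also differ between v1 and v2.
    """
    return v1_url.replace("api-version=1.0", "api-version=2022-08-01")

old = "https://atlas.microsoft.com/map/static/png?api-version=1.0&zoom=14"
new = migrate_render_url(old)
```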
### Legend
-| Symbol | Meaning |
-|--|-|
-| ✓ | Country/region is provided with detailed data. |
-| ◑ | Country/region is provided with simplified data. |
-| Country/region is missing | Country/region data isn't provided. |
+| Symbol | Meaning |
+|:------:|---------|
+| ✓ | Country/region is provided with detailed data. |
+| ◑ | Country/region is provided with simplified data. |
+| v2 | Country/region is only supported in the Render v2 service. |
+| Country/region is missing | Country/region data isn't provided. |
## Americas
The render coverage tables below list the countries/regions that support Azure M
| Brunei | ✓ |
| Cambodia | ✓ |
| Guam | ✓ |
+| China | v2 |
| Hong Kong Special Administrative Region | ✓ |
| India | ✓ |
| Indonesia | ✓ |
+| Japan | v2 |
| Laos | ✓ |
| Macao Special Administrative Region | ✓ |
| Malaysia | ✓ |
The render coverage tables below list the countries/regions that support Azure M
> [Zoom levels and tile grid](zoom-levels-and-tile-grid.md) > [!div class="nextstepaction"]
-> [Get map tiles](/rest/api/maps/render/getmaptile)
+> [Get map tiles](/rest/api/maps/render-v2/get-map-tile)
> [!div class="nextstepaction"]
> [Azure Maps routing coverage](routing-coverage.md)
+[Zoom levels and tile grid]: zoom-levels-and-tile-grid.md
+[Render v1]: /rest/api/maps/render
+[Render v2]: /rest/api/maps/render-v2
azure-maps Supported Map Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-map-styles.md
Learn about how to set a map style in Azure Maps:
> [!div class="nextstepaction"] > [Choose a map style]
-[Map image]: /rest/api/maps/render/getmapimage
-[Map tile]: /rest/api/maps/render/getmaptile
+[Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md
+[Map image]: /rest/api/maps/render-v2/get-map-static-image
+[Map tile]: /rest/api/maps/render-v2/get-map-tile
[Satellite tile]: /rest/api/maps/render/getmapimagerytilepreview [Choose a map style]: choose-map-style.md
azure-maps Tutorial Ev Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-ev-routing.md
In this tutorial, you will:
* An [Azure Maps account] * A [subscription key]
+* An [Azure storage account]
> [!NOTE]
> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
for loc in range(len(searchPolyResponse["results"])):
reachableLocations.append(location)
```
-## Upload the reachable range and charging points to Azure Maps Data service
+## Upload the reachable range and charging points
-It's helpful to visualize the charging stations and the boundary for the maximum reachable range of the electric vehicle on a map. To do so, upload the boundary data and charging stations data as geojson objects to Azure Maps Data service. Use the [Data Upload API].
+It's helpful to visualize the charging stations and the boundary for the maximum reachable range of the electric vehicle on a map. Follow the steps outlined in the [How to create data registry] article to upload the boundary data and charging stations data as geojson objects to your [Azure storage account], then register them in your Azure Maps account. Make a note of the unique identifier (`udid`) value; you'll need it. The `udid` is how you reference the geojson objects you uploaded into your Azure storage account from your source code.
To upload the boundary and charging point data, run the following two cells:
poiUdid = getPoiUdid["udid"]
## Render the charging stations and reachable range on a map
-After you've uploaded the data to the data service, call the Azure Maps [Get Map Image service]. This service is used to render the charging points and maximum reachable boundary on the static map image by running the following script:
+After you've uploaded the data to the Azure storage account, call the Azure Maps [Get Map Image service]. This service is used to render the charging points and maximum reachable boundary on the static map image by running the following script:
```python
# Get boundaries for the bounding box.
pins = "custom|an15 53||udid-{}||https://raw.githubusercontent.com/Azure-Samples
encodedPins = urllib.parse.quote(pins, safe='') # Render the range and electric vehicle charging points on the map.
-staticMapResponse = await session.get("https://atlas.microsoft.com/map/static/png?api-version=1.0&subscription-key={}&pins={}&path={}&bbox={}&zoom=12".format(subscriptionKey,encodedPins,path,str(minLon)+", "+str(minLat)+", "+str(maxLon)+", "+str(maxLat)))
+staticMapResponse = await session.get("https://atlas.microsoft.com/map/static/png?api-version=2022-08-01&subscription-key={}&pins={}&path={}&bbox={}&zoom=12".format(subscriptionKey,encodedPins,path,str(minLon)+", "+str(minLat)+", "+str(maxLon)+", "+str(maxLat)))
poiRangeMap = await staticMapResponse.content.read()
routeData = {
## Visualize the route
-To help visualize the route, you first upload the route data as a geojson object to Azure Maps Data service. To do so, use the Azure Maps [Data Upload API]. Then, call the rendering service, [Get Map Image API]), to render the route on the map, and visualize it.
+To help visualize the route, follow the steps outlined in the [How to create data registry] article to upload the route data as a geojson object to your [Azure storage account], then register it in your Azure Maps account. Make a note of the unique identifier (`udid`) value; you'll need it. The `udid` is how you reference the geojson object you uploaded into your Azure storage account from your source code. Then call the rendering service, [Get Map Image API], to render the route on the map and visualize it.
To get an image for the rendered route on the map, run the following script:
minLat -= latBuffer
maxLat += latBuffer # Render the route on the map.
-staticMapResponse = await session.get("https://atlas.microsoft.com/map/static/png?api-version=1.0&subscription-key={}&&path={}&pins={}&bbox={}&zoom=16".format(subscriptionKey,path,pins,str(minLon)+", "+str(minLat)+", "+str(maxLon)+", "+str(maxLat)))
+staticMapResponse = await session.get("https://atlas.microsoft.com/map/static/png?api-version=2022-08-01&subscription-key={}&&path={}&pins={}&bbox={}&zoom=16".format(subscriptionKey,path,pins,str(minLon)+", "+str(minLat)+", "+str(maxLon)+", "+str(maxLat)))
staticMapImage = await staticMapResponse.content.read()
To explore the Azure Maps APIs that are used in this tutorial, see:
* [Get Route Range] * [Post Search Inside Geometry]
-* [Data Upload]
* [Render - Get Map Image] * [Post Route Matrix] * [Get Route Directions]
To learn more about Azure Notebooks, see
[Azure Notebooks]: https://notebooks.azure.com [Data Upload API]: /rest/api/maps/data-v2/upload [Data Upload]: /rest/api/maps/data-v2/upload
-[Get Map Image API]: /rest/api/maps/render/getmapimage
-[Get Map Image service]: /rest/api/maps/render/getmapimage
+[Get Map Image API]: /rest/api/maps/render-v2/get-map-static-image
+[Get Map Image service]: /rest/api/maps/render-v2/get-map-static-image
[Get Route Directions API]: /rest/api/maps/route/getroutedirections [Get Route Directions]: /rest/api/maps/route/getroutedirections [Get Route Range API]: /rest/api/maps/route/getrouterange
To learn more about Azure Notebooks, see
[Post Search Inside Geometry API]: /rest/api/maps/search/postsearchinsidegeometry [Post Search Inside Geometry]: /rest/api/maps/search/postsearchinsidegeometry [Quickstart: Sign in and set a user ID]: https://notebooks.azure.com
-[Render - Get Map Image]: /rest/api/maps/render/getmapimage
+[Render - Get Map Image]: /rest/api/maps/render-v2/get-map-static-image
[*requirements.txt*]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/blob/master/AzureMapsJupyterSamples/Tutorials/EV%20Routing%20and%20Reachable%20Range/requirements.txt [routing APIs]: /rest/api/maps/route [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-maps Tutorial Geofence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-geofence.md
Title: 'Tutorial: Create a geofence and track devices on a Microsoft Azure Map'
description: Tutorial on how to set up a geofence. See how to track devices relative to the geofence by using the Azure Maps Spatial service Previously updated : 02/28/2021 Last updated : 09/14/2023
Azure Maps provides services to support the tracking of equipment entering and e
> [!div class="checklist"]
>
> * Create an Azure Maps account with a global region.
-> * Upload [Geofencing GeoJSON data] that defines the construction site areas you want to monitor. You'll use the [Data Upload API] to upload geofences as polygon coordinates to your Azure Maps account.
+> * Upload [Geofencing GeoJSON data] that defines the construction site areas you want to monitor. You'll upload geofences as polygon coordinates to your Azure storage account, then use the [data registry] service to register that data with your Azure Maps account.
> * Set up two [logic apps] that, when triggered, send email notifications to the construction site operations manager when equipment enters and exits the geofence area.
> * Use [Azure Event Grid] to subscribe to enter and exit events for your Azure Maps geofence. You set up two webhook event subscriptions that call the HTTP endpoints defined in your two logic apps. The logic apps then send the appropriate email notifications of equipment moving beyond or entering the geofence.
> * Use [Search Geofence Get API] to receive notifications when a piece of equipment exits and enters the geofence areas.
The Azure CLI command [az maps account create] doesnΓÇÖt have a location propert
This tutorial demonstrates how to upload geofencing GeoJSON data that contains a `FeatureCollection`. The `FeatureCollection` contains two geofences that define polygonal areas within the construction site. The first geofence has no time expiration or restrictions. The second can only be queried against during business hours (9:00 AM-5:00 PM in the Pacific Time zone), and will no longer be valid after January 1, 2022. For more information on the GeoJSON format, see [Geofencing GeoJSON data].
->[!TIP]
->You can update your geofencing data at any time. For more information, see [Data Upload API].
-
-To upload the geofencing GeoJSON data:
-
-1. In the Postman app, select **New**.
-
-2. In the **Create New** window, select **HTTP Request**.
-
-3. Enter a **Request name** for the request, such as *POST GeoJSON Data Upload*.
-
-4. Select the **POST** HTTP method.
-
-5. Enter the following URL. The request should look like the following URL:
-
- ```HTTP
- https://us.atlas.microsoft.com/mapData?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0&dataFormat=geojson
- ```
-
- The `geojson` parameter in the URL path represents the data format of the data being uploaded.
-
-6. Select the **Body** tab.
-
-7. In the dropdown lists, select **raw** and **JSON**.
-
-8. Copy the following GeoJSON data, and then paste it in the **Body** window:
+Create the geofence JSON file using the following geofence data. You'll upload this file into your Azure storage account next.
```JSON
{
To upload the geofencing GeoJSON data:
}
```
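Before uploading, it can help to sanity-check the file's structure. This is a minimal sketch (the `validate_geofence` helper and the sample polygon are illustrative) that checks the properties geofencing relies on: a `FeatureCollection` of `Polygon` features whose rings are closed.

```python
import json

def validate_geofence(feature_collection_text):
    """Minimal structural check for a geofencing FeatureCollection.

    Verifies the GeoJSON type, that each feature has a Polygon geometry,
    and that every polygon ring is closed (first point equals last point).
    Returns the number of geofence features found.
    """
    fc = json.loads(feature_collection_text)
    assert fc["type"] == "FeatureCollection"
    for feature in fc["features"]:
        geom = feature["geometry"]
        assert geom["type"] == "Polygon"
        for ring in geom["coordinates"]:
            assert ring[0] == ring[-1], "polygon ring must be closed"
    return len(fc["features"])

# Illustrative single-polygon geofence near the construction site.
sample = json.dumps({
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "properties": {"geometryId": "1"},
        "geometry": {
            "type": "Polygon",
            "coordinates": [[[-122.14, 47.64], [-122.13, 47.64],
                             [-122.13, 47.65], [-122.14, 47.64]]]
        }
    }]
})
count = validate_geofence(sample)
```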
-9. Select **Send**.
-
-10. In the response window, select the **Headers** tab.
-
-11. Copy the value of the **Operation-Location** key, which is the `status URL`. The `status URL` is used to check the status of the GeoJSON data upload.
-
- ```http
- https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0
- ```
-
-### Check the GeoJSON data upload status
-
-To check the status of the GeoJSON data and retrieve its unique ID (`udid`):
-
-1. Select **New**.
-
-2. In the **Create New** window, select **HTTP Request**.
-
-3. Enter a **Request name** for the request, such as *GET Data Upload Status*.
-
-4. Select the **GET** HTTP method.
-
-5. Enter the `status URL` you copied in [Upload Geofencing GeoJSON data]. The request should look like the following URL:
-
- ```HTTP
- https://us.atlas.microsoft.com/mapData/{operationId}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
- ```
-
-6. Select **Send**.
-
-7. In the response window, select the **Headers** tab.
-
-8. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the uploaded data. Save the `udid` to query the Get Geofence API in the last section of this tutorial.
-
- :::image type="content" source="./media/tutorial-geofence/resource-location-url.png" alt-text="Copy the resource location URL.":::
-
-### (Optional) Retrieve GeoJSON data metadata
+Follow the steps outlined in the [How to create data registry] article to upload the geofence JSON file into your Azure storage account, then register it in your Azure Maps account.
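The registration step issues a request against the data registry endpoint. This sketch only builds the request URL; the `register_data_url` helper is hypothetical, and the default `api-version` and `geography` prefix are assumptions to confirm against the data registry reference.

```python
import uuid
from urllib.parse import urlencode

def register_data_url(geography, udid, subscription_key,
                      api_version="2023-06-01"):
    """Build the data registry register-request URL for an uploaded blob.

    geography is the Azure Maps account region prefix (e.g. 'us' or 'eu');
    the api-version default here is an assumption -- check the data registry
    reference for the current value.
    """
    query = urlencode({"api-version": api_version,
                       "subscription-key": subscription_key})
    return "https://{}.atlas.microsoft.com/dataRegistries/{}?{}".format(
        geography, udid, query)

udid = str(uuid.uuid4())  # you supply the udid when registering the data
url = register_data_url("us", udid, "{Your-Azure-Maps-Subscription-key}")
```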
-You can retrieve metadata from the uploaded data. The metadata contains information like the resource location URL, creation date, updated date, size, and upload status.
-
-To retrieve content metadata:
-
-1. Select **New**.
-
-2. In the **Create New** window, select **HTTP Request**.
-
-3. Enter a **Request name** for the request, such as *GET Data Upload Metadata*.
-
-4. Select the **GET** HTTP method.
-
-5. Enter the `resource Location URL` you copied in [Check the GeoJSON data upload status]. The request should look like the following URL:
-
- ```http
- https://us.atlas.microsoft.com/mapData/metadata/{udid}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
- ```
-
-6. In the response window, select the **Body** tab. The metadata should like the following JSON fragment:
-
- ```json
- {
- "udid": "{udid}",
- "location": "https://us.atlas.microsoft.com/mapData/6ebf1ae1-2a66-760b-e28c-b9381fcff335?api-version=2.0",
- "created": "5/18/2021 8:10:32 PM +00:00",
- "updated": "5/18/2021 8:10:37 PM +00:00",
- "sizeInBytes": 946901,
- "uploadStatus": "Completed"
- }
- ```
+> [!IMPORTANT]
+> Make a note of the unique identifier (`udid`) value; you'll need it. The `udid` is how you reference the geofence you uploaded into your Azure storage account from your source code and HTTP requests.
## Create workflows in Azure Logic Apps
There are no resources that require cleanup.
> [!div class="nextstepaction"] > [Handle content types in Azure Logic Apps]
-[Geofencing GeoJSON data]: geofence-geojson.md
-[Data Upload API]: /rest/api/maps/data-v2/upload
-[logic apps]: ../event-grid/handler-webhooks.md#logic-apps
+[az maps account create]: /cli/azure/maps/account?view=azure-cli-latest&preserve-view=true#az-maps-account-create
[Azure Event Grid]: ../event-grid/overview.md
-[Search Geofence Get API]: /rest/api/maps/spatial/getgeofence
-[Postman]: https://www.postman.com
+[Azure portal]: https://portal.azure.com
[Create your Azure Maps account using an ARM template]: how-to-create-template.md
-[az maps account create]: /cli/azure/maps/account?view=azure-cli-latest&preserve-view=true#az-maps-account-create
-[Upload Geofencing GeoJSON data]: #upload-geofencing-geojson-data
-[Check the GeoJSON data upload status]: #check-the-geojson-data-upload-status
+[data registry]: /rest/api/maps/data-registry
+[Geofencing GeoJSON data]: geofence-geojson.md
+[Handle content types in Azure Logic Apps]: ../logic-apps/logic-apps-content-type.md
+[How to create data registry]: how-to-create-data-registries.md
[logic app]: ../event-grid/handler-webhooks.md#logic-apps
-[Azure portal]: https://portal.azure.com
-[Tutorial: Send email notifications about Azure IoT Hub events using Event Grid and Logic Apps]: ../event-grid/publish-iot-hub-events-to-logic-apps.md
-[three event types]: ../event-grid/event-schema-azure-maps.md
-[Spatial Geofence Get API]: /rest/api/maps/spatial/getgeofence
-[Upload Geofencing GeoJSON data section]: #upload-geofencing-geojson-data
+[logic apps]: ../event-grid/handler-webhooks.md#logic-apps
+[Postman]: https://www.postman.com
+[Search Geofence Get API]: /rest/api/maps/spatial/getgeofence
[Send email notifications using Event Grid and Logic Apps]: ../event-grid/publish-iot-hub-events-to-logic-apps.md
+[Spatial Geofence Get API]: /rest/api/maps/spatial/getgeofence
[Supported Events Handlers in Event Grid]: ../event-grid/event-handlers.md
-[Handle content types in Azure Logic Apps]: ../logic-apps/logic-apps-content-type.md
+[three event types]: ../event-grid/event-schema-azure-maps.md
+[Tutorial: Send email notifications about Azure IoT Hub events using Event Grid and Logic Apps]: ../event-grid/publish-iot-hub-events-to-logic-apps.md
+[Upload Geofencing GeoJSON data section]: #upload-geofencing-geojson-data
azure-maps Tutorial Iot Hub Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-iot-hub-maps.md
description: Tutorial on how to Integrate IoT Hub with Microsoft Azure Maps service APIs Previously updated : 10/28/2021 Last updated : 09/14/2023
-#Customer intent: As a customer, I want to build an IoT system so that I can use Azure Maps APIs for spatial analytics on the device data.
# Tutorial: Implement IoT spatial analytics by using Azure Maps
In this tutorial you will:
> [!div class="checklist"]
>
> * Create an Azure storage account to log car tracking data.
-> * Upload a geofence to the Azure Maps Data service using the Data Upload API.
+> * Upload a geofence to an Azure storage account.
> * Create a hub in Azure IoT Hub, and register a device.
> * Create a function in Azure Functions, implementing business logic based on Azure Maps spatial analytics.
> * Subscribe to IoT device telemetry events from the Azure function via Azure Event Grid.
When you successfully create your storage account, you then need to create a con
:::image type="content" source="./media/tutorial-iot-hub-maps/access-keys.png" alt-text="Screenshot of copy storage account name and key.":::
-## Upload a geofence
-
-Next, use the [Postman] app to [upload the geofence] to Azure Maps. The geofence defines the authorized geographical area for our rental vehicle. Use the geofence in your Azure function to determine whether a car has moved outside the geofence area.
-
-Follow these steps to upload the geofence by using the Azure Maps Data Upload API:
-
-1. Open the Postman app, select **New** again. In the **Create New** window, select **HTTP Request**, and enter a request name for the request.
-
-2. Select the **POST** HTTP method in the builder tab, and enter the following URL to upload the geofence to the Data Upload API.
-
- ```HTTP
- https://us.atlas.microsoft.com/mapData?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0&dataFormat=geojson
- ```
-
- In the URL path, the `geojson` value against the `dataFormat` parameter represents the format of the data being uploaded.
-
-3. Select **Body** > **raw** for the input format, and choose **JSON** from the drop-down list. [Open the JSON data file], and copy the JSON into the body section.
-
-4. Select **Send** and wait for the request to process. After the request completes, go to the **Headers** tab of the response. Copy the value of the **Operation-Location** key, which is the `status URL`.
-
- ```http
- https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0
- ```
-
-5. To check the status of the API call, create a **GET** HTTP request on the `status URL`. Add your subscription key to the URL for authentication. The **GET** request should like the following URL:
-
- ```HTTP
- https://us.atlas.microsoft.com/mapData/{operationId}/status?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
- ```
+## Upload a geofence into your Azure storage account
-6. When the request completes successfully, select the **Headers** tab in the response window. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the uploaded data. Copy the `udid` for later use in this tutorial.
+The geofence defines the authorized geographical area for our rental vehicle. Use the geofence in your Azure function to determine whether a car has moved outside the geofence area.
- :::image type="content" source="./media/tutorial-iot-hub-maps/resource-location-url.png" alt-text="Copy the resource location URL.":::
+Follow the steps outlined in the [How to create data registry] article to upload the [geofence JSON data file] into your Azure storage account, then register it in your Azure Maps account. Make a note of the unique identifier (`udid`) value; you'll need it. The `udid` is how you reference the geofence you uploaded into your Azure storage account from your source code. For more information on geofence data files, see [Geofencing GeoJSON data].
## Create an IoT hub
Now, set up your Azure function.
1. In the C# code, replace the following parameters: * Replace **SUBSCRIPTION_KEY** with your Azure Maps account subscription key.
- * Replace **UDID** with the `udid` of the geofence you uploaded in [Upload a geofence].
+ * Replace **UDID** with the `udid` of the geofence you uploaded in [Upload a geofence into your Azure storage account].
   * The `CreateBlobAsync` function in the script creates a blob per event in the data storage account. Replace the **ACCESS_KEY**, **ACCOUNT_NAME**, and **STORAGE_CONTAINER_NAME** with your storage account's access key, account name, and data storage container. These values were generated when you created your storage account in [Create an Azure storage account].
1. In the left menu, select the **Integration** pane. Select **Event Grid Trigger** in the diagram. Type in a name for the trigger, *eventGridEvent*, and select **Create Event Grid subscription**.
To learn more about how to send device-to-cloud telemetry, and the other way aro
[general-purpose v2 storage account]: ../storage/common/storage-account-overview.md
[Get Geofence]: /rest/api/maps/spatial/getgeofence
[Get Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse
+[How to create data registry]: how-to-create-data-registries.md
[IoT Hub message routing]: ../iot-hub/iot-hub-devguide-routing-query-syntax.md
[IoT Plug and Play]: ../iot-develop/index.yml
-[Open the JSON data file]: https://raw.githubusercontent.com/Azure-Samples/iothub-to-azure-maps-geofencing/master/src/Data/geofence.json?token=AKD25BYJYKDJBJ55PT62N4C5LRNN4
+[geofence JSON data file]: https://raw.githubusercontent.com/Azure-Samples/iothub-to-azure-maps-geofencing/master/src/Data/geofence.json?token=AKD25BYJYKDJBJ55PT62N4C5LRNN4
[Plug and Play schema for geospatial data]: https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v1-preview/schemas/geospatial.md
[Postman]: https://www.postman.com/
[register a new device in the IoT hub]: ../iot-hub/iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub
To learn more about how to send device-to-cloud telemetry, and the other way aro
[Send telemetry from a device]: ../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp
[Spatial Geofence Get API]: /rest/api/maps/spatial/getgeofence
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[Upload a geofence]: #upload-a-geofence
-[upload the geofence]: ./geofence-geojson.md
+[Upload a geofence into your Azure storage account]: #upload-a-geofence-into-your-azure-storage-account
+[Geofencing GeoJSON data]: ./geofence-geojson.md
[Use IoT Hub message routing]: ../iot-hub/iot-hub-devguide-messages-d2c.md
azure-maps Understanding Azure Maps Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md
The following table summarizes the Azure Maps services that generate transaction
|--|-|-|-|
| [Data v1]<br>[Data v2]<br>[Data registry] | Yes, except for `MapDataStorageService.GetDataStatus` and `MapDataStorageService.GetUserData`, which are nonbillable| One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>|
| [Geolocation]| Yes| One request = 1 transaction| <ul><li>Location Insights Geolocation (Gen2 pricing)</li><li>Standard S1 Geolocation Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li></ul>|
-| [Render v1]<br>[Render v2] | Yes, except for Terra maps (`MapTile.GetTerraTile` and `layer=terra`) which are nonbillable.|<ul><li>15 tiles = 1 transaction</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator related usage, see the [Creator table]. |<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>|
+| [Render] | Yes, except for Terra maps (`MapTile.GetTerraTile` and `layer=terra`) which are nonbillable.|<ul><li>15 tiles = 1 transaction</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator related usage, see the [Creator table]. |<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>|
| [Route] | Yes | One request = 1 transaction<br><ul><li>If using the Route Matrix, each cell in the Route Matrix request generates a billable Route transaction.</li><li>If using Batch Directions, each origin/destination coordinate pair in the Batch request call generates a billable Route transaction. Note, the billable Route transaction usage results generated by the batch request has **-Batch** appended to the API name of your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Routing (Gen2 pricing)</li><li>Standard S1 Routing Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> |
| [Search v1]<br>[Search v2] | Yes | One request = 1 transaction.<br><ul><li>If using Batch Search, each location in the Batch request generates a billable Search transaction. Note, the billable Search transaction usage results generated by the batch request has **-Batch** appended to the API name of your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Search</li><li>Standard S1 Search Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> |
| [Spatial] | Yes, except for `Spatial.GetBoundingBox`, `Spatial.PostBoundingBox` and `Spatial.PostPointInPolygonBatch`, which are nonbillable.| One request = 1 transaction.<br><ul><li>If using Geofence, five requests = 1 transaction</li></ul> | <ul><li>Location Insights Spatial Calculations (Gen2 pricing)</li><li>Standard S1 Spatial Transactions (Gen1 S1 pricing)</li></ul> |
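To make the metering rules in the table above concrete, here's a small arithmetic sketch, not official billing code. It assumes that a partial group of Render tiles (15 tiles = 1 transaction) still bills a full transaction, and applies the stated Route Matrix rule of one transaction per matrix cell.

```python
import math

def render_tile_transactions(tile_count):
    # 15 tiles = 1 transaction; assumption: a partial group of tiles
    # still rounds up to one full transaction
    return math.ceil(tile_count / 15)

def route_matrix_transactions(origins, destinations):
    # Each cell in the Route Matrix request generates a billable Route transaction
    return origins * destinations

print(render_tile_transactions(30))      # 30 tiles -> 2 transactions
print(render_tile_transactions(16))      # 16 tiles -> 2 transactions (rounds up, by assumption)
print(route_matrix_transactions(5, 10))  # 5x10 matrix -> 50 transactions
```

For exact billing behavior, rely on the Azure portal metrics report and the [Pricing calculator] rather than this sketch.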
The following table summarizes the Azure Maps services that generate transaction
| [Conversion] | Part of a provisioned Creator resource and not transactions based.| Not transaction-based | Map Provisioning (Gen2 pricing) |
| [Dataset] | Part of a provisioned Creator resource and not transactions based.| Not transaction-based | Map Provisioning (Gen2 pricing)|
| [Feature State] | Yes, except for `FeatureState.CreateStateset`, `FeatureState.DeleteStateset`, `FeatureState.GetStateset`, `FeatureState.ListStatesets`, `FeatureState.UpdateStatesets` | One request = 1 transaction | Azure Maps Creator Feature State (Gen2 pricing) |
-| [Render v2] | Yes, only with `GetMapTile` with Creator Tileset ID and `GetStaticTile`.<br>For everything else for Render v2, see Render v2 section in the above table.| One request = 1 transaction<br>One tile = 1 transaction | Azure Maps Creator Map Render (Gen2 pricing) |
+| [Render] | Yes, only with `GetMapTile` with Creator Tileset ID and `GetStaticTile`.<br>For everything else for Render, see Render section in the above table.| One request = 1 transaction<br>One tile = 1 transaction | Azure Maps Creator Map Render (Gen2 pricing) |
| [Tileset] | Part of a provisioned Creator resource and not transactions based.| Not transaction-based | Map Provisioning (Gen2 pricing) |
| [WFS] | Yes| One request = 1 transaction | Azure Maps Creator Web Feature (WFS) (Gen2 pricing) |
The following table summarizes the Azure Maps services that generate transaction
[Geolocation]: /rest/api/maps/geolocation
[Manage the pricing tier of your Azure Maps account]: how-to-manage-pricing-tier.md
[Pricing calculator]: https://azure.microsoft.com/pricing/calculator/
-[Render v1]: /rest/api/maps/render
-[Render v2]: /rest/api/maps/render-v2
+[Render]: /rest/api/maps/render-v2
[Route]: /rest/api/maps/route
[Search v1]: /rest/api/maps/search
[Search v2]: /rest/api/maps/search-v2
azure-maps Weather Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-service-tutorial.md
To learn more about Azure Notebooks, see
[Daily Forecast]: /rest/api/maps/weather/getdailyforecast
[EV routing using Azure Notebooks]: tutorial-ev-routing.md
[free account]: https://azure.microsoft.com/free/
-[Get Map Image service]: /rest/api/maps/render/getmapimage
+[Get Map Image service]: /rest/api/maps/render-v2/get-map-static-image
[manage authentication in Azure Maps]: how-to-manage-authentication.md
-[Render - Get Map Image]: /rest/api/maps/render/getmapimage
+[Render - Get Map Image]: /rest/api/maps/render-v2/get-map-static-image
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[Weather Maps Jupyter Notebook repository]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/tree/master/AzureMapsJupyterSamples/Tutorials/Analyze%20Weather%20Data
[weather_dataset_demo.csv]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/tree/master/AzureMapsJupyterSamples/Tutorials/Analyze%20Weather%20Data/data
azure-maps Zoom Levels And Tile Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/zoom-levels-and-tile-grid.md
Learn more about geospatial concepts:
[EPSG:3857]: https://epsg.io/3857
[Web SDK: Map pixel and position calculations]: /javascript/api/azure-maps-control/atlas.map#pixelstopositions-pixel
[Add a tile layer]: map-add-tile-layer.md
-[Get map tiles]: /rest/api/maps/render/getmaptile
+[Get map tiles]: /rest/api/maps/render-v2/get-map-tile
[Get traffic flow tiles]: /rest/api/maps/traffic/gettrafficflowtile
[Get traffic incident tiles]: /rest/api/maps/traffic/gettrafficincidenttile
[Azure Maps glossary]: glossary.md
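Since this article covers zoom levels and the [EPSG:3857] tile grid, a short sketch of the underlying math may help: at zoom level `z` the world is divided into 2^z × 2^z tiles, and a WGS84 coordinate maps to a tile index with the widely used Web Mercator formula. This is shown for illustration and is not taken from the article's own code.

```python
import math

def lat_lon_to_tile(lat, lon, zoom):
    """Map WGS84 coordinates to Web Mercator (EPSG:3857) tile x/y at a zoom level."""
    n = 2 ** zoom                       # tiles per axis at this zoom level
    x = int((lon + 180.0) / 360.0 * n)  # longitude scales linearly across the grid
    lat_rad = math.radians(lat)
    # latitude uses the Mercator projection before scaling to the grid
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

print(lat_lon_to_tile(0.0, 0.0, 1))  # equator/prime meridian at zoom 1 -> (1, 1)
```

At zoom 0 the whole world is a single tile (0, 0); each additional zoom level quadruples the tile count, which is why tile requests are metered per tile.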
azure-monitor Opencensus Python Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-dependency.md
# Track dependencies with OpenCensus Python
+> [!NOTE]
+> [OpenCensus Python SDK is deprecated](https://opentelemetry.io/blog/2023/sunsetting-opencensus/), but Microsoft supports it until retirement on September 30, 2024. We now recommend the [OpenTelemetry-based Python offering](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-enable?tabs=python) and provide [migration guidance](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-python-opencensus-migrate?tabs=aspnetcore).
+ A dependency is an external component that is called by your application. Dependency data is collected using OpenCensus Python and its various integrations. The data is then sent to Application Insights under Azure Monitor as `dependencies` telemetry. First, instrument your Python application with the latest [OpenCensus Python SDK](./opencensus-python.md).
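To illustrate what dependency telemetry captures (a name, a duration, and a success flag), here's a generic standard-library sketch of the concept. This is not the OpenCensus SDK itself, whose integrations record this automatically; the helper and record shape below are assumptions for illustration.

```python
import time
from contextlib import contextmanager

dependency_records = []  # stand-in for the telemetry channel, for illustration

@contextmanager
def track_dependency(name):
    """Record the duration and outcome of a call to an external component."""
    start = time.perf_counter()
    success = True
    try:
        yield
    except Exception:
        success = False
        raise
    finally:
        dependency_records.append({
            "name": name,
            "duration_ms": (time.perf_counter() - start) * 1000.0,
            "success": success,
        })

# Simulated outgoing call to an external component
with track_dependency("sql:orders-db"):
    time.sleep(0.01)
```

The real SDK wires this pattern into outbound HTTP and database clients for you, so application code rarely needs to track dependencies by hand.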
azure-monitor Opencensus Python Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-request.md
# Track incoming requests with OpenCensus Python
+> [!NOTE]
+> [OpenCensus Python SDK is deprecated](https://opentelemetry.io/blog/2023/sunsetting-opencensus/), but Microsoft supports it until retirement on September 30, 2024. We now recommend the [OpenTelemetry-based Python offering](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-enable?tabs=python) and provide [migration guidance](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-python-opencensus-migrate?tabs=aspnetcore).
+ OpenCensus Python and its integrations collect incoming request data. You can track incoming request data sent to your web applications built on top of the popular web frameworks Django, Flask, and Pyramid. Application Insights receives the data as `requests` telemetry. First, instrument your Python application with the latest [OpenCensus Python SDK](./opencensus-python.md).
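As a conceptual illustration of what `requests` telemetry records per incoming request (path, status, duration), here's a generic standard-library WSGI middleware sketch. It is not the OpenCensus integrations, which hook into Django, Flask, and Pyramid for you; the class and record shape are assumptions for illustration.

```python
import time

request_records = []  # stand-in for `requests` telemetry, for illustration

class RequestTelemetryMiddleware:
    """Minimal WSGI middleware that times each incoming request."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        start = time.perf_counter()
        status_holder = {}

        def capturing_start_response(status, headers, exc_info=None):
            status_holder["status"] = status  # capture the response status
            return start_response(status, headers, exc_info)

        result = self.app(environ, capturing_start_response)
        request_records.append({
            "path": environ.get("PATH_INFO", "/"),
            "status": status_holder.get("status"),
            "duration_ms": (time.perf_counter() - start) * 1000.0,
        })
        return result

def hello_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

app = RequestTelemetryMiddleware(hello_app)
body = app({"PATH_INFO": "/greet"}, lambda status, headers, exc_info=None: None)
```

The framework integrations apply the same wrap-and-time pattern at the appropriate hook point for each web framework.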
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
# Set up Azure Monitor for your Python application > [!NOTE]
-> OpenTelemetry announced the [sunsetting of OpenCensus](https://opentelemetry.io/blog/2023/sunsetting-opencensus/). Azure continues to support the Python OpenCensus SDK and will not drop support for it without at least one year of advance notification. The [OpenTelemetry-based Python offering](opentelemetry-enable.md?tabs=python) is our current reccomended solution for Python applications.
+> [OpenCensus Python SDK is deprecated](https://opentelemetry.io/blog/2023/sunsetting-opencensus/), but Microsoft supports it until retirement on September 30, 2024. We now recommend the [OpenTelemetry-based Python offering](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-enable?tabs=python) and provide [migration guidance](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-python-opencensus-migrate?tabs=aspnetcore).
Azure Monitor supports distributed tracing, metric collection, and logging of Python applications.
azure-monitor Opentelemetry Python Opencensus Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-python-opencensus-migrate.md
# Migrating from OpenCensus Python SDK and Azure Monitor OpenCensus exporter for Python to Azure Monitor OpenTelemetry Python Distro
-[OpenCensus is deprecated](https://opentelemetry.io/blog/2023/sunsetting-opencensus) and the repositories are archived.
+> [!NOTE]
+> [OpenCensus Python SDK is deprecated](https://opentelemetry.io/blog/2023/sunsetting-opencensus/), but Microsoft supports it until retirement on September 30, 2024. We now recommend the [OpenTelemetry-based Python offering](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-enable?tabs=python) and provide [migration guidance](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-python-opencensus-migrate?tabs=aspnetcore).
Follow these steps to migrate Python applications to the [Azure Monitor](../overview.md) [Application Insights](./app-insights-overview.md) [OpenTelemetry Distro](./opentelemetry-enable.md?tabs=python).
azure-monitor Monitor Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/monitor-kubernetes.md
Previously updated : 08/17/2023 Last updated : 09/14/2023
# Monitor Kubernetes clusters using Azure services and cloud native tools
The following table lists the services that are commonly used by the network eng
| Service | Description | |:|:| | [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Suite of tools in Azure to monitor the virtual networks used by your Kubernetes clusters and diagnose detected issues. |
+| [Traffic analytics](../../network-watcher/traffic-analytics.md) | Feature of Network Watcher that analyzes flow logs to provide insights into traffic flow. |
| [Network insights](../../network-watcher/network-insights-overview.md) | Feature of Azure Monitor that includes a visual representation of the performance and health of different network components and provides access to the network monitoring tools that are part of Network Watcher. | [Network insights](../../network-watcher/network-insights-overview.md) is enabled by default and requires no configuration. Network Watcher is also typically [enabled by default in each Azure region](../../network-watcher/network-watcher-create.md).
Following are common scenarios for monitoring the network.
- Create [flow logs](../../network-watcher/network-watcher-nsg-flow-logging-overview.md) to log information about the IP traffic flowing through network security groups used by your cluster and then use [traffic analytics](../../network-watcher/traffic-analytics.md) to analyze and provide insights on this data. You'll most likely use the same Log Analytics workspace for traffic analytics that you use for Container insights and your control plane logs. - Using [traffic analytics](../../network-watcher/traffic-analytics.md), you can determine if any traffic is flowing either to or from any unexpected ports used by the cluster and also if any traffic is flowing over public IPs that shouldn't be exposed. Use this information to determine whether your network rules need modification.
+- For AKS clusters, use the [Network Observability add-on for AKS (preview)](https://aka.ms/NetObsAddonDoc) to monitor and observe access between services in the cluster (east-west traffic).
## Platform engineer
The *platform engineer*, also known as the cluster administrator, is responsible
:::image type="content" source="media/monitor-kubernetes/layers-platform-engineer.png" alt-text="Diagram of layers of Kubernetes environment for platform engineer." lightbox="media/monitor-kubernetes/layers-platform-engineer.png" border="false":::
-Large organizations may also have a *fleet architect*, which is similar to the platform engineer but is responsible for multiple clusters. They need visibility across the entire environment and must perform administrative tasks at scale. At scale recommendations for the fleet architect are included in the guidance below.
+Large organizations may also have a *fleet architect*, a role similar to the platform engineer but responsible for multiple clusters. Fleet architects need visibility across the entire environment and must perform administrative tasks at scale. At-scale recommendations are included in the guidance below. See [What is Azure Kubernetes Fleet Manager (preview)?](../../kubernetes-fleet/overview.md) for details on creating a Fleet resource for multi-cluster and at-scale scenarios.
### Azure services for platform engineer
The following table lists the Azure services for the platform engineer to monito
| Service | Description | |:|:| | [Container Insights](container-insights-overview.md) | Azure service for AKS and Azure Arc-enabled Kubernetes clusters that use a containerized version of the [Azure Monitor agent](../agents/agents-overview.md) to collect stdout/stderr logs, performance metrics, and Kubernetes events from each node in your cluster. It also collects metrics from the Kubernetes control plane and stores them in the workspace. You can view the data in the Azure portal or query it using [Log Analytics](../logs/log-analytics-overview.md). |
-| [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) | [Prometheus](https://prometheus.io) is a cloud-native metrics solution from the Cloud Native Compute Foundation and the most common tool used for collecting and analyzing metric data from Kubernetes clusters. Azure Monitor managed service for Prometheus is a fully-managed solution that's compatible with the Prometheus query language (PromQL) and Prometheus alerts and integrates with Azure Managed Grafana for visualization. This service supports your investment in open source tools without the complexity of managing your own Prometheus environment. |
+| [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) | [Prometheus](https://prometheus.io) is a cloud-native metrics solution from the Cloud Native Compute Foundation and the most common tool used for collecting and analyzing metric data from Kubernetes clusters. Azure Monitor managed service for Prometheus is a fully managed solution that's compatible with the Prometheus query language (PromQL) and Prometheus alerts and integrates with Azure Managed Grafana for visualization. This service supports your investment in open source tools without the complexity of managing your own Prometheus environment. |
| [Azure Arc-enabled Kubernetes](container-insights-enable-arc-enabled-clusters.md) | Allows you to attach to Kubernetes clusters running in other clouds so that you can manage and configure them in Azure. With the Arc agent installed, you can monitor AKS and hybrid clusters together using the same methods and tools, including Container insights and Prometheus. | | [Azure Managed Grafana](../../managed-grafan) | Fully managed implementation of [Grafana](https://grafana.com/), which is an open-source data visualization platform commonly used to present Prometheus and other data. Multiple predefined Grafana dashboards are available for monitoring Kubernetes and full-stack troubleshooting. |
See [Default Prometheus metrics configuration in Azure Monitor](../essentials/pr
#### Enable Grafana for analysis of Prometheus data
-[Create an instance of Managed Grafana](../../managed-grafan)
+[Create an instance of Managed Grafana](../../managed-grafan#use-out-of-the-box-dashboards). Prebuilt dashboards are available for monitoring Kubernetes clusters, including several that present similar information as Container insights views.
If you have an existing Grafana environment, then you can continue to use it and add Azure Monitor managed service for [Prometheus as a data source](https://grafana.com/docs/grafana/latest/datasources/prometheus/). You can also [add the Azure Monitor data source to Grafana](https://grafana.com/docs/grafana/latest/datasources/azure-monitor/) to use data collected by Container insights in custom Grafana dashboards. Perform this configuration if you want to focus on Grafana dashboards rather than using the Container insights views and reports.
-A variety of prebuilt dashboards are available for monitoring Kubernetes clusters including several that present similar information as Container insights views. [Search the available Grafana dashboards templates](https://grafana.com/grafan).
#### Enable Container Insights for collection of logs
-When you enable Container Insights for your Kubernetes cluster, it deploys a containerized version of the [Azure Monitor agent](../agents/..//agents/log-analytics-agent.md) that sends data to a Log Analytics workspace in Azure Monitor. Container insights collects container stdout/stderr, infrastructure logs, and performance data. All log data is stored in a Log Analytics workspace where they can be analyzed using [Kusto Query Language (KQL)](../logs/log-query-overview.md).
+When you enable Container Insights for your Kubernetes cluster, it deploys a containerized version of the [Azure Monitor agent](../agents/log-analytics-agent.md) that sends data to a Log Analytics workspace in Azure Monitor. Container insights collects container stdout/stderr, infrastructure logs, and performance data. All log data is stored in a Log Analytics workspace where they can be analyzed using [Kusto Query Language (KQL)](../logs/log-query-overview.md).
See [Enable Container insights](../containers/container-insights-onboard.md) for prerequisites and configuration options for onboarding your Kubernetes clusters. [Onboard using Azure Policy](container-insights-enable-aks-policy.md) to ensure that all clusters retain a consistent configuration. Once Container insights is enabled for a cluster, perform the following actions to optimize your installation.
+- Container insights collects many of the same metric values as [Prometheus](#enable-scraping-of-prometheus-metrics). You can disable collection of these metrics by configuring Container insights to only collect **Logs and events** as described in [Enable cost optimization settings in Container insights](../containers/container-insights-cost-config.md#custom-data-collection). This configuration disables the Container insights experience in the Azure portal, but you can use Grafana to visualize Prometheus metrics and Log Analytics to analyze log data collected by Container insights.
+- Reduce your cost for Container insights data ingestion by reducing the amount of data that's collected.
- To improve your query experience with data collected by Container insights and to reduce collection costs, [enable the ContainerLogV2 schema](container-insights-logging-v2.md) for each cluster. If you only use logs for occasional troubleshooting, then consider configuring this table as [basic logs](../logs/basic-logs-configure.md).
-- Reduce your cost for Container insights data ingestion by reducing the amount of data that's collected. See [Enable cost optimization settings in Container insights (preview)](../containers/container-insights-cost-config.md) for details.
If you have an existing solution for collection of logs, then follow the guidance for that tool or enable Container insights and use the [data export feature of Log Analytics workspace](../logs/logs-data-export.md) to send data to [Azure Event Hubs](../../event-hubs/event-hubs-about.md) to forward to an alternate system.
Following are common scenarios for monitoring the cluster level components.
- Under **Reports**, use the **Node Monitoring** workbooks to analyze disk capacity, disk IO, and GPU usage. For more information about these workbooks, see [Node Monitoring workbooks](container-insights-reports.md#node-monitoring-workbooks). - Under **Monitoring**, select **Workbooks**, then **Subnet IP Usage** to see the IP allocation and assignment on each node for a selected time-range.
-**Network observability (east-west traffic)**
-- For AKS clusters, use the [Network Observability add-on for AKS (preview)](https://aka.ms/NetObsAddonDoc) to monitor and observe access between services in the cluster (east-west traffic).
**Grafana dashboards**<br>
-- Multiple [Kubernetes dashboards](https://grafana.com/grafana/dashboards/?search=kubernetes) are available that visualize the performance and health of your nodes based on data stored in Prometheus.
+- Use the [prebuilt dashboard](../visualize/grafana-plugin.md#use-out-of-the-box-dashboards) in Managed Grafana for **Kubelet** to see the health and performance of each kubelet.
- Use Grafana dashboards with [Prometheus metric values](../essentials/prometheus-metrics-scrape-default.md) related to disk such as `node_disk_io_time_seconds_total` and `windows_logical_disk_free_bytes` to monitor attached storage.
+- Multiple [Kubernetes dashboards](https://grafana.com/grafana/dashboards/?search=kubernetes) are available that visualize the performance and health of your nodes based on data stored in Prometheus.
**Log Analytics** - Select the [Containers category](../logs/queries.md?tabs=groupby#find-and-filter-queries) in the [queries dialog](../logs/queries.md#queries-dialog) for your Log Analytics workspace to access prebuilt log queries for your cluster, including the **Image inventory** log query that retrieves data from the [ContainerImageInventory](/azure/azure-monitor/reference/tables/containerimageinventory) table populated by Container insights.
Following are common scenarios for monitoring your managed Kubernetes components
- Under **Reports**, use the **Kubelet** workbook to see the health and performance of each kubelet. For more information about these workbooks, see [Resource Monitoring workbooks](container-insights-reports.md#resource-monitoring-workbooks). **Grafana**<br>
+- Use the [prebuilt dashboard](../visualize/grafana-plugin.md#use-out-of-the-box-dashboards) in Managed Grafana for **Kubelet** to see the health and performance of each kubelet.
- Use a dashboard such as [Kubernetes apiserver](https://grafana.com/grafana/dashboards/12006) for a complete view of the API server performance. This includes such values as request latency and workqueue processing time. **Log Analytics**<br>
Following are common scenarios for monitoring your Kubernetes objects and worklo
-- **Container insights**<br> - Use the **Nodes** and **Controllers** views to see the health and performance of the pods running on them and drill down to the health and performance of their containers. - Use the **Containers** view to see the health and performance for the containers. For more information on analyzing container health and performance, see [Monitor your Kubernetes cluster performance with Container Insights](container-insights-analyze.md#analyze-nodes-controllers-and-container-health). - Under **Reports**, use the **Deployments** workbook to see deployment metrics. For more information, see [Deployment & HPA metrics with Container Insights](container-insights-deployment-hpa-metrics.md). **Grafana dashboards**<br>
+- Use the [prebuilt dashboards](../visualize/grafana-plugin.md#use-out-of-the-box-dashboards) in Managed Grafana for **Nodes** and **Pods** to view their health and performance.
- Multiple [Kubernetes dashboards](https://grafana.com/grafana/dashboards/?search=kubernetes) are available that visualize the performance and health of your nodes based on data stored in Prometheus.
+ **Live data** - In troubleshooting scenarios, Container Insights provides access to live AKS container logs (stdout/stderr), events, and pod metrics. For more information about this feature, see [How to view Kubernetes logs, events, and pod metrics in real-time](container-insights-livedata-overview.md).
azure-netapp-files Backup Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-requirements-considerations.md
Azure NetApp Files backup in a region can only protect an Azure NetApp Files vol
* [Disabling backups](backup-disable.md) for a volume will delete all the backups stored in the Azure storage for that volume. If you delete a volume, the backups will remain. If you no longer need the backups, you should [manually delete the backups](backup-delete.md).
-* If you need to delete a parent resource group or subscription that contains backups, you should delete any backups first. Deleting the resource group or subscription won't delete the backups. You can remove backups by [disabling backups](backup-disable.md) or [manually deleting the backups](backup-disable.md).
+* If you need to delete a parent resource group or subscription that contains backups, you should delete any backups first. Deleting the resource group or subscription won't delete the backups. You can remove backups by [disabling backups](backup-disable.md) or [manually deleting the backups](backup-disable.md). If you delete the resource group without disabling backups, backups will continue to impact your billing.
## Next steps
azure-portal Get Subscription Tenant Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/get-subscription-tenant-id.md
Title: Get subscription and tenant IDs in the Azure portal
-description: To get them
Previously updated : 04/11/2023
+description: Learn how to locate and copy the IDs of Azure tenants and subscriptions.
Last updated : 09/22/2023
Follow these steps to retrieve the ID for an Azure AD tenant in the Azure portal
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Confirm that you are signed into the tenant for which you want to retrieve the ID. If not, [switch directories](set-preferences.md#switch-and-manage-directories) so that you're working in the right tenant.
-1. Under the Azure services heading, select **Azure Active Directory**. If you don't see **Azure Active Directory** here, use the search box to find it.
+1. Under the Azure services heading, select **Microsoft Entra ID**. If you don't see **Microsoft Entra ID** here, use the search box to find it.
1. Find the **Tenant ID** in the **Basic information** section of the **Overview** screen. 1. Copy the **Tenant ID** by selecting the **Copy to clipboard** icon shown next to it. You can paste this value into a text document or other location.
Follow these steps to retrieve the ID for an Azure AD tenant in the Azure portal
## Next steps -- Learn more about [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md).
+- Learn more about [Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md).
- Learn how to manage Azure subscriptions [with Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli) or [with Azure PowerShell](/powershell/azure/manage-subscriptions-azureps). - Learn how to [manage Azure portal settings and preferences](set-preferences.md).
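The portal steps above also have a command-line equivalent. The following is a sketch using the Azure CLI (shown as dry-run echoes so the commands are visible without executing them; it assumes `az login` has already been run):

```shell
# Sketch: Azure CLI equivalents for retrieving tenant and subscription IDs.
# Shown as dry-run echoes; remove the echo/variables to execute for real.
TENANT_CMD='az account show --query tenantId -o tsv'
SUB_CMD='az account show --query id -o tsv'
echo "$TENANT_CMD"   # prints the command that returns the current tenant ID
echo "$SUB_CMD"      # prints the command that returns the current subscription ID
```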
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Azure Backup provides several ways to restore a VM.
**Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk. The snapshot is copied to the vault and retained in accordance with the retention policy. <br/><br/> When you choose a Vault-Standard recovery point, a VHD file with the content of the chosen recovery point is also created in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs, including VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's unsupported for classic VMs, unmanaged VMs, and [generalized VMs](../virtual-machines/windows/upload-generalized-managed.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) or [Key Vault](../key-vault/general/overview.md). **Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> During the backup, snapshots aren't replicated to the secondary region. 
Only the data stored in the vault is replicated. So secondary region restores are only [vault tier](about-azure-vm-restore.md#concepts) restores. The restore time for the secondary region will be almost the same as the vault tier restore time for the primary region. <br><br> This feature is available for the options below:<br><br> - [Create a VM](#create-a-vm) <br> - [Restore Disks](#restore-disks) <br><br> We don't currently support the [Replace existing disks](#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins. **Cross Subscription Restore** | Allows you to restore Azure Virtual Machines or disks to a different subscription within the same tenant as the source subscription (as per the Azure RBAC capabilities) from restore points. <br><br> Allowed only if the [Cross Subscription Restore property](backup-azure-arm-restore-vms.md#cross-subscription-restore-preview) is enabled for your Recovery Services vault. <br><br> Works with [Cross Region Restore](backup-azure-arm-restore-vms.md#cross-region-restore) and [Cross Zonal Restore](backup-azure-arm-restore-vms.md#create-a-vm). <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported for [snapshots tier](backup-azure-vms-introduction.md#snapshot-creation) recovery points. <br><br> It's unsupported for [unmanaged VMs](#restoring-unmanaged-vms-and-disks-as-managed) and [ADE encrypted VMs](backup-azure-vms-encryption.md#encryption-support-using-ade).
-**Cross Zonal Restore** | Allows you to restore Azure Virtual Machines or disks pinned to any zone to different available zones (as per the Azure RBAC capabilities) from restore points. Note that when you select a zone to restore, it selects the [logical zone](../reliability/availability-zones-overview.md#availability-zones) (and not the physical zone) as per the Azure subscription you will use to restore to. <br><br> You can trigger Cross Zonal Restore for managed virtual machines only. <br><br> Cross Zonal Restore is supported for [Restore with Managed System Identities (MSI)](#restore-vms-with-managed-identities). <br><br> Cross Zonal Restore supports restore of an Azure zone pinned/non-zone pinned VM from a vault with Zonal-redundant storage (ZRS) enabled. Learn [how to set Storage Redundancy](backup-create-rs-vault.md#set-storage-redundancy). <br><br> It's supported to restore an Azure zone pinned VM only from a [vault with Cross Region Restore (CRR)](backup-create-rs-vault.md#set-storage-redundancy) (if the secondary region supports zones) or Zone Redundant Storage (ZRS) enabled. <br><br> Cross Zonal Restore is supported from [secondary regions](#restore-in-secondary-region). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore point. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups).
+**Cross Zonal Restore** | Allows you to restore Azure Virtual Machines or disks pinned to any zone to different available zones (as per the Azure RBAC capabilities) from restore points. Note that when you select a zone to restore, it selects the [logical zone](../reliability/availability-zones-overview.md#zonal-and-zone-redundant-services) (and not the physical zone) as per the Azure subscription you will use to restore to. <br><br> You can trigger Cross Zonal Restore for managed virtual machines only. <br><br> Cross Zonal Restore is supported for [Restore with Managed System Identities (MSI)](#restore-vms-with-managed-identities). <br><br> Cross Zonal Restore supports restore of an Azure zone pinned/non-zone pinned VM from a vault with Zonal-redundant storage (ZRS) enabled. Learn [how to set Storage Redundancy](backup-create-rs-vault.md#set-storage-redundancy). <br><br> It's supported to restore an Azure zone pinned VM only from a [vault with Cross Region Restore (CRR)](backup-create-rs-vault.md#set-storage-redundancy) (if the secondary region supports zones) or Zone Redundant Storage (ZRS) enabled. <br><br> Cross Zonal Restore is supported from [secondary regions](#restore-in-secondary-region). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore point. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups).
>[!Tip] >To receive alerts/notifications when a restore operation fails, use [Azure Monitor alerts for Azure Backup](backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup). This helps you to monitor such failures and take necessary actions to remediate the issues.
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Recovery points on DPM or MABS disk | 64 for file servers, and 448 for app serve
**Replace existing** | You can restore a disk and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it has been deleted, you can't use this option.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and it stores the snapshot in the staging location that you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>This option is supported for unencrypted managed VMs and for VMs [created from custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's not supported for unmanaged disks and VMs, classic VMs, and [generalized VMs](../virtual-machines/windows/capture-image-resource.md).<br/><br/> If the restore point has more or fewer disks than the current VM, the number of disks in the restore point will only reflect the VM configuration.<br><br> This option is also supported for VMs with linked resources, like [user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md) and [Azure Key Vault](../key-vault/general/overview.md). 
**Cross Region (secondary region)** | You can use cross-region restore to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> We don't currently support the [Replace existing disks](./backup-azure-arm-restore-vms.md#replace-existing-disks) option.<br><br> Backup admins and app admins have permissions to perform the restore operation on a secondary region. **Cross Subscription** | Allowed only if the [Cross Subscription Restore property](backup-azure-arm-restore-vms.md#cross-subscription-restore-preview) is enabled for your Recovery Services vault. <br><br> You can restore Azure Virtual Machines or disks to a different subscription within the same tenant as the source subscription (as per the Azure RBAC capabilities) from restore points. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross Subscription Restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) tier recovery points. It's also unsupported for [unmanaged VMs](backup-azure-arm-restore-vms.md#restoring-unmanaged-vms-and-disks-as-managed) and [VMs with disks having Azure Encryptions (ADE)](backup-azure-vms-encryption.md#encryption-support-using-ade).
-**Cross Zonal Restore** | You can use cross-zonal restore to restore Azure zone-pinned VMs in available zones. You can restore Azure VMs or disks to different zones (one of the Azure RBAC capabilities) from restore points. Note that when you select a zone to restore, it selects the [logical zone](../reliability/availability-zones-overview.md#availability-zones) (and not the physical zone) as per the Azure subscription you will use to restore to. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross-zonal restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) of restore points. It's also unsupported for [encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups).
+**Cross Zonal Restore** | You can use cross-zonal restore to restore Azure zone-pinned VMs in available zones. You can restore Azure VMs or disks to different zones (one of the Azure RBAC capabilities) from restore points. Note that when you select a zone to restore, it selects the [logical zone](../reliability/availability-zones-overview.md#zonal-and-zone-redundant-services) (and not the physical zone) as per the Azure subscription you will use to restore to. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross-zonal restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) of restore points. It's also unsupported for [encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups).
## Support for file-level restore
bastion Connect Vm Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-vm-native-client-windows.md
description: Learn how to connect to a VM from a Windows computer by using Basti
Previously updated : 08/08/2023 Last updated : 09/21/2023
The steps in the following sections help you connect to a VM from a Windows nati
Optionally, you can also specify the authentication method as part of the command.
-* **Azure AD authentication:** `--auth-type "AAD"` For more information, see [Azure Windows VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md).
-
-* **User name and password:** `--auth-type "password" --username "<Username>"`
+* **Azure AD authentication:** For Windows 10 version 20H2+, Windows 11 21H2+, and Windows Server 2022, use `--enable-mfa`. For more information, see [az network bastion rdp - optional parameters](/cli/azure/network/bastion?#az-network-bastion-rdp(bastion)-optional-parameters).
#### Specify a custom port
batch Batch Docker Container Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-docker-container-workloads.md
For Linux container workloads, Batch currently supports the following Linux imag
- Offer: `centos-container-rdma` - Offer: `ubuntu-server-container-rdma`
+- Publisher: `microsoft-dsvm`
+ - Offer: `ubuntu-hpc`
+
+#### Notes
+ The Docker data root of the above images lies in different places:
+ - For the Batch images under `microsoft-azure-batch` (Offer: `centos-container-rdma`, etc.), the Docker data root is mapped to `/mnt/batch/docker`, which is usually located on the temporary disk.
+ - For the HPC images under `microsoft-dsvm` (Offer: `ubuntu-hpc`, etc.), the Docker data root is unchanged from the Docker default, which is `/var/lib/docker` on Linux and `C:\ProgramData\Docker` on Windows. These folders are usually located on the OS disk.
+
+ When using non-Batch images, the OS disk risks filling up quickly as container images are downloaded.
+#### Potential solution
+
+Change the Docker data root in a start task when creating a pool in BatchExplorer. Here's an example of the start task commands:
+```bash
+1) sudo systemctl stop docker
+2) sudo vi /lib/systemd/system/docker.service
+ +++
+ FROM:
+ ExecStart=/usr/bin/docker daemon -H fd://
+ TO:
+ ExecStart=/usr/bin/docker daemon -g /new/path/docker -H fd://
+ +++
+3) sudo systemctl daemon-reload
+4) sudo systemctl start docker
+```
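Alternatively, recent Docker versions let you set the data root through the daemon configuration file instead of patching the systemd unit. A minimal sketch (the target path `/mnt/batch/docker` follows the Batch images' convention; a real start task would write to `/etc/docker/daemon.json` with `sudo` and then restart Docker):

```shell
# Sketch: relocate Docker's data root via daemon.json rather than editing
# the systemd unit. The file is written locally here for illustration.
cat > daemon.json <<'EOF'
{
  "data-root": "/mnt/batch/docker"
}
EOF
cat daemon.json
# In a start task: sudo cp daemon.json /etc/docker/daemon.json && sudo systemctl restart docker
```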
+ These images are only supported for use in Azure Batch pools and are geared for Docker container execution. They feature: - A pre-installed Docker-compatible [Moby container runtime](https://github.com/moby/moby).
communication-services Azure Communication Services Azure Cognitive Services Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/azure-communication-services-azure-cognitive-services-integration.md
You can configure and bind your Communication Services and Azure AI services thr
2.3. Enable system assigned identity. This action begins the creation of the identity; A pop-up notification appears notifying you that the request is being processed. [![Screen shot of enable managed identiy.](./media/enable-system-identity.png)](./media/enable-system-identity.png#lightbox)
- 2.4. Once the identity is enabled you should see something similar.
+ 2.4. Once the identity is enabled, you should see something similar.
[![Screenshot of enabled identity.](./media/identity-saved.png)](./media/identity-saved.png#lightbox) 3. When managed identity is enabled, the Cognitive Service tab should show a button 'Connect cognitive service' to connect the two services.
You can configure and bind your Communication Services and Azure AI services thr
6. Now in the Cognitive Service tab you should see your connected services showing up. [![Screenshot of connected cognitive service on main page.](./media/new-entry-created.png)](./media/new-entry-created.png#lightbox)
+### Manually adding Managed Identity to Azure Communication Services resource
+Alternatively, if you would like to go through the manual process of connecting your resources, you can follow these steps.
+
+#### Enable system assigned identity
+1. Navigate to your Azure Communication Services resource in the Azure portal.
+2. Select the Identity tab.
+3. Enable system assigned identity. This action begins the creation of the identity. A pop-up notification appears notifying you that the request is being processed.
+[![Screenshot of enable system identity.](./media/enable-system-identity.png)](./media/enable-system-identity.png#lightbox)
+
+#### Option 1: Add role from Azure Cognitive Services in the Azure portal
+1. Navigate to your Azure Cognitive Services resource.
+2. Select the "Access control (IAM)" tab.
+3. Click the "+ Add" button.
+4. Select "Add role assignments" from the menu.
+[![Screenshot of adding a role assignment.](./media/add-role.png)](./media/add-role.png#lightbox)
+5. Choose the "Cognitive Services User" role to assign, then click "Next."
+[![Screenshot of Cognitive Services User.](./media/cognitive-service-user.png)](media/cognitive-service-user.png#lightbox)
+6. For the field "Assign access to" choose the "User, group or service principal."
+7. Press "+ Select members" and a side tab opens.
+8. Search for your Azure Communication Services resource name in the text box and click it when it shows up, then click "Select."
+[![Screenshot of Azure Communication Services resource side panel.](./media/select-acs-resource.png)](./media/select-acs-resource.png#lightbox)
+9. Click "Review + assign" to assign the role to the managed identity.
+
+#### Option 2: Add role through Azure Communication Services Identity tab
+1. Navigate to your Azure Communication Services resource in the Azure portal.
+2. Select the Identity tab.
+3. Click on "Azure role assignments."
+[![Screenshot of the role assignment screen.](./media/add-role-acs.png)](./media/add-role-acs.png#lightbox)
+4. Click the "Add role assignment (Preview)" button, which opens the "Add role assignment (Preview)" tab.
+5. Select the "Resource group" for "Scope."
+6. Select the "Subscription."
+7. Select the "Resource Group" containing the Cognitive Service.
+8. Select the Role "Cognitive Services User."
+[![Screenshot of filled in role assignment tab.](./media/acs-roles-cognitive-services.png)](./media/acs-roles-cognitive-services.png#lightbox)
+9. Click Save.
+
+Your Azure Communication Service has now been linked to your Azure Cognitive Service resource.
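The role assignment in Option 1 can also be scripted. The following sketch uses the Azure CLI (resource names and IDs are placeholders; the commands are shown as dry-run echoes so they are visible without executing them, and the `az communication show` call assumes the communication extension is installed):

```shell
# Sketch: grant "Cognitive Services User" to the Communication Services
# resource's system-assigned identity. Dry run; placeholders throughout.
PRINCIPAL_ID_CMD='az communication show -n MyAcsResource -g MyResourceGroup --query identity.principalId -o tsv'
ASSIGN_CMD='az role assignment create --assignee <principal-id> --role "Cognitive Services User" --scope <cognitive-services-resource-id>'
echo "$PRINCIPAL_ID_CMD"   # prints the command that looks up the managed identity
echo "$ASSIGN_CMD"         # prints the role-assignment command
```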
+ ## Azure AI services regions supported This integration between Azure Communication Services and Azure AI services is only supported in the following regions:
This integration between Azure Communication Services and Azure AI services is o
- westeu - uksouth
-## Next Steps
+## Next steps
- Learn about [playing audio](../../concepts/call-automation/play-action.md) to callers using Text-to-Speech. - Learn about [gathering user input](../../concepts/call-automation/recognize-action.md) with Speech-to-Text.
container-apps Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/environment.md
Use more than one environment when you want two or more applications to:
| Type | Description | Plan | Billing considerations | |--|--|--|--|
-| Workload profile | Run serverless apps with support for scale-to-zero and pay only for resources your apps use with the consumption profile. You can also run apps with customized hardware and increased cost predictability using dedicated workload profiles. | Consumption and Dedicated | You can choose to run apps under either or both plans using seperate workload profiles. The Dedicated plan has a fixed cost for the entire environment regardless of how many workload profiles you're using. |
+| Workload profile | Run serverless apps with support for scale-to-zero and pay only for resources your apps use with the consumption profile. You can also run apps with customized hardware and increased cost predictability using dedicated workload profiles. | Consumption and Dedicated | You can choose to run apps under either or both plans using separate workload profiles. The Dedicated plan has a fixed cost for the entire environment regardless of how many workload profiles you're using. |
| Consumption only | Run serverless apps with support for scale-to-zero and pay only for resources your apps use. | Consumption only | Billed only for individual container apps and their resource usage. There's no cost associated with the Container Apps environment. | ## Logs
container-registry Tutorial Artifact Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-artifact-cache.md
Artifact Cache currently supports the following upstream registries:
| Upstream registries | Support | Availability | | | | -- |
-| Docker | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal |
+| Docker Hub | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal |
| Microsoft Artifact Registry | Supports unauthenticated pulls only. | Azure CLI, Azure portal | | ECR Public | Supports unauthenticated pulls only. | Azure CLI, Azure portal | | GitHub Container Registry | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal |
-| Nivida | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |
+| Nvidia | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |
| Quay | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal | | registry.k8s.io | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |
container-registry Tutorial Enable Artifact Cache Auth Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-enable-artifact-cache-auth-cli.md
This article is part five of a six-part tutorial series. [Part one](tutorial-artifact-cache.md) provides an overview of Artifact Cache, its features, benefits, and limitations. In [part two](tutorial-enable-artifact-cache.md), you learn how to enable Artifact Cache feature by using the Azure portal. In [part three](tutorial-enable-artifact-cache-cli.md), you learn how to enable Artifact Cache feature by using the Azure CLI. In [part four](tutorial-enable-artifact-cache-auth.md), you learn how to enable Artifact Cache feature with authentication by using Azure portal.
-This article walks you through the steps of enabling Artifact Cache with authentication by using the Azure CLI. You have to use the Credential set to make an authenticated pull or to access a private repository.
+This article walks you through the steps of enabling Artifact Cache with authentication by using the Azure CLI. You have to use the Credentials to make an authenticated pull or to access a private repository.
## Prerequisites * You can use the [Azure Cloud Shell][Azure Cloud Shell] or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.46.0 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][Install Azure CLI].
-* You have an existing Key Vault to store credentials. Learn more about [creating and storing credentials in a Key Vault.][create-and-store-keyvault-credentials]
+* You have an existing Key Vault to store the credentials. Learn more about [creating and storing credentials in a Key Vault.][create-and-store-keyvault-credentials]
* You can set and retrieve secrets from your Key Vault. Learn more about [set and retrieve a secret from Key Vault.][set-and-retrieve-a-secret] ## Configure Artifact Cache with authentication - Azure CLI
-### Create a Credential Set - Azure CLI
+### Create Credentials - Azure CLI
-Before configuring a Credential Set, you have to create and store secrets in the Azure KeyVault and retrieve the secrets from the Key Vault. Learn more about [creating and storing credentials in a Key Vault.][create-and-store-keyvault-credentials] and to [set and retrieve a secret from Key Vault.][set-and-retrieve-a-secret].
+Before configuring the Credentials, you have to create and store secrets in the Azure KeyVault and retrieve the secrets from the Key Vault. Learn more about [creating and storing credentials in a Key Vault.][create-and-store-keyvault-credentials] and how to [set and retrieve a secret from Key Vault.][set-and-retrieve-a-secret].
-1. Run [az acr credential set create][az-acr-credential-set-create] command to create a credential set.
+1. Run [az acr credential set create][az-acr-credential-set-create] command to create the credentials.
- - For example, To create a credential set for a given `MyRegistry` Azure Container Registry.
+ - For example, To create the credentials for a given `MyRegistry` Azure Container Registry.
```azurecli-interactive az acr credential-set create
Before configuring a Credential Set, you have to create and store secrets in the
2. Run [az acr credential set update][az-acr-credential-set-update] to update the username or password KV secret ID on a credential set.
- - For example, to update the username or password KV secret ID on a credential set a given `MyRegistry` Azure Container Registry.
+ - For example, to update the username or password KV secret ID on the credentials for a given `MyRegistry` Azure Container Registry.
```azurecli-interactive az acr credential-set update -r MyRegistry -n MyRule -p https://MyKeyvault.vault.azure.net/secrets/newsecretname ```
-3. Run [az-acr-credential-set-show][az-acr-credential-set-show] to show a credential set.
+3. Run [az-acr-credential-set-show][az-acr-credential-set-show] to show the credentials.
- - For example, to show a credential set for a given `MyRegistry` Azure Container Registry.
+ - For example, to show the credentials for a given `MyRegistry` Azure Container Registry.
```azurecli-interactive az acr credential-set show -r MyRegistry -n MyCredSet ```
-### Create a cache rule with a Credential Set - Azure CLI
+### Create a cache rule with the Credentials - Azure CLI
1. Run [az acr cache create][az-acr-cache-create] command to create a cache rule.
- - For example, to create a cache rule with a credential set for a given `MyRegistry` Azure Container Registry.
+ - For example, to create a cache rule with the credentials for a given `MyRegistry` Azure Container Registry.
```azurecli-interactive az acr cache create -r MyRegistry -n MyRule -s docker.io/library/ubuntu -t ubuntu -c MyCredSet ```
-2. Run [az acr cache update][az-acr-cache-update] command to update the credential set on a cache rule.
+2. Run [az acr cache update][az-acr-cache-update] command to update the credentials on a cache rule.
- - For example, to update the credential set on a cache rule for a given `MyRegistry` Azure Container Registry.
+ - For example, to update the credentials on a cache rule for a given `MyRegistry` Azure Container Registry.
```azurecli-interactive az acr cache update -r MyRegistry -n MyRule -c NewCredSet ```
- - For example, to remove a credential set from an existing cache rule for a given `MyRegistry` Azure Container Registry.
+ - For example, to remove the credentials from an existing cache rule for a given `MyRegistry` Azure Container Registry.
```azurecli-interactive az acr cache update -r MyRegistry -n MyRule --remove-cred-set
Before configuring a Credential Set, you have to create and store secrets in the
2. Run the [az keyvault set-policy][az-keyvault-set-policy] command to assign access to the Key Vault, before pulling the image.
- - For example, to assign permissions for the credential set access the KeyVault secret
+ - For example, to assign permissions so that the credentials can access the KeyVault secret:
```azurecli-interactive az keyvault set-policy --name MyKeyVault \
Before configuring a Credential Set, you have to create and store secrets in the
az acr cache delete -r MyRegistry -n MyRule ```
-3. Run[az acr credential set list][az-acr-credential-set-list] to list the credential sets in an Azure Container Registry.
+3. Run [az acr credential set list][az-acr-credential-set-list] to list the credentials in an Azure Container Registry.
- - For example, to list the credential sets for a given `MyRegistry` Azure Container Registry.
+ - For example, to list the credentials for a given `MyRegistry` Azure Container Registry.
```azurecli-interactive az acr credential-set list -r MyRegistry ```
-4. Run [az-acr-credential-set-delete][az-acr-credential-set-delete] to delete a credential set.
+4. Run [az-acr-credential-set-delete][az-acr-credential-set-delete] to delete the credentials.
- - For example, to delete a credential set for a given `MyRegistry` Azure Container Registry.
+ - For example, to delete the credentials for a given `MyRegistry` Azure Container Registry.
```azurecli-interactive az acr credential-set delete -r MyRegistry -n MyCredSet
container-registry Tutorial Enable Artifact Cache Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-enable-artifact-cache-auth.md
Follow the steps to create cache rule in the [Azure portal](https://portal.azure
docker pull myregistry.azurecr.io/hello-world:latest ```
-### Create new credentials
+### Create new Credentials
-Before configuring a Credential Set, you require to create and store secrets in the Azure KeyVault and retrieve the secrets from the Key Vault. Learn more about [creating and storing credentials in a Key Vault.][create-and-store-keyvault-credentials] and to [set and retrieve a secret from Key Vault.][set-and-retrieve-a-secret].
+Before configuring the Credentials, you need to create and store secrets in the Azure KeyVault and retrieve the secrets from the Key Vault. Learn more about [creating and storing credentials in a Key Vault.][create-and-store-keyvault-credentials] and how to [set and retrieve a secret from Key Vault.][set-and-retrieve-a-secret].
-1. Navigate to **Credentials** > **Add credential set** > **Create new credentials**.
+1. Navigate to **Credentials** > **Create credentials**.
- :::image type="content" source="./media/container-registry-artifact-cache/add-credential-set-05.png" alt-text="Screenshot for adding credential set.":::
+ :::image type="content" source="./media/container-registry-artifact-cache/add-credential-set-05.png" alt-text="Screenshot for adding credentials.":::
- :::image type="content" source="./media/container-registry-artifact-cache/create-credential-set-06.png" alt-text="Screenshot for create new credential set.":::
+ :::image type="content" source="./media/container-registry-artifact-cache/create-credential-set-06.png" alt-text="Screenshot for create new credentials.":::
1. Enter **Name** for the new credentials for your source registry.
container-registry Tutorial Enable Artifact Cache Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-enable-artifact-cache-cli.md
This article is part three of a six-part tutorial series. [Part one](tutorial-ar
## Configure Artifact Cache - Azure CLI
-Follow the steps to create a Cache rule without using a Credential set.
+Follow the steps to create a Cache rule without using the Credentials.
### Create a Cache rule 1. Run the [az acr Cache create][az-acr-cache-create] command to create a Cache rule.
- - For example, to create a Cache rule without a credential set for a given `MyRegistry` Azure Container Registry.
+ - For example, to create a Cache rule without the credentials for a given `MyRegistry` Azure Container Registry.
```azurecli-interactive az acr Cache create -r MyRegistry -n MyRule -s docker.io/library/ubuntu -t ubuntu-
container-registry Tutorial Troubleshoot Artifact Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-troubleshoot-artifact-cache.md
May include one or more of the following issues:
- Cached images don't appear in a real repository - [Cached images don't appear in a live repository](tutorial-troubleshoot-artifact-cache.md#cached-images-dont-appear-in-a-live-repository) -- Credential set has an unhealthy status
- - [Unhealthy Credential Set](tutorial-troubleshoot-artifact-cache.md#unhealthy-credential-set)
+- Credentials have an unhealthy status
+ - [Unhealthy Credentials](tutorial-troubleshoot-artifact-cache.md#unhealthy-credentials)
- Unable to create a cache rule - [Cache rule Limit](tutorial-troubleshoot-artifact-cache.md#cache-rule-limit)
If you're having an issue with cached images not showing up in your repository i
The Azure portal autofills these fields for you. However, many Docker repositories begin with `library/` in their path. For example, in order to cache the `hello-world` repository, the correct Repository Path is `docker.io/library/hello-world`.
-## Unhealthy Credential Set
+## Unhealthy Credentials
-Credential sets are a set of Key Vault secrets that operate as a Username and Password for private repositories. Unhealthy Credential sets are often a result of these secrets no longer being valid. In the Azure portal, you can select the credential set, to edit and apply changes.
+Credentials are a set of Key Vault secrets that operate as a Username and Password for private repositories. Unhealthy Credentials are often a result of these secrets no longer being valid. In the Azure portal, you can select the credentials to edit and apply changes.
- Verify the secrets in Azure Key Vault haven't expired. - Verify the secrets in Azure Key Vault are valid.
Artifact Cache currently supports the following upstream registries:
| Upstream registries | Support | Availability | | | | -- |
-| Docker | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal |
+| Docker Hub | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal |
| Microsoft Artifact Registry | Supports unauthenticated pulls only. | Azure CLI, Azure portal | | ECR Public | Supports unauthenticated pulls only. | Azure CLI, Azure portal | | GitHub Container Registry | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal |
-| Nivida | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |
+| Nvidia | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |
| Quay | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal | | registry.k8s.io | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |
cosmos-db Audit Control Plane Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/audit-control-plane-logs.md
Use the following steps to enable logging on control plane operations:
1. Select **ControlPlaneRequests** for log type and select the **Send to Log Analytics** option.
+1. Optionally, send the diagnostic logs to Azure Storage, Azure Event Hubs, Azure Monitor, or a third party.
+ You can also store the logs in a storage account or stream them to an event hub. This article shows how to send logs to Log Analytics and then query them. After you enable logging, it takes a few minutes for the diagnostic logs to take effect. All the control plane operations performed after that point can be tracked. The following screenshot shows how to enable control plane logs: :::image type="content" source="./media/audit-control-plane-logs/enable-control-plane-requests-logs.png" alt-text="Enable control plane requests logging":::
cosmos-db Consistency Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/consistency-levels.md
The exact RTT latency is a function of speed-of-light distance and the Azure net
> [!NOTE] > The RU/s cost of reads for Local Minority reads are twice that of weaker consistency levels because reads are made from two replicas to provide consistency guarantees for Strong and Bounded Staleness.
+> [!NOTE]
+> Reads at the strong and bounded staleness consistency levels consume approximately two times more RUs than reads at the other, more relaxed consistency levels.
+ ## <a id="rto"></a>Consistency levels and data durability Within a globally distributed database environment, there's a direct relationship between the consistency level and data durability in the presence of a region-wide outage. As you develop your business continuity plan, you need to understand the maximum period of recent data updates the application can tolerate losing when recovering after a disruptive event. The time period of updates that you might afford to lose is known as **recovery point objective** (**RPO**).
cosmos-db How To Configure Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-private-endpoints.md
You can set up Private Link by creating a private endpoint in a virtual network
Use the following code to create a Resource Manager template named *PrivateEndpoint_template.json*. This template creates a private endpoint for an existing Azure Cosmos DB vAPI for NoSQL account in an existing virtual network.
+### [Bicep](#tab/arm-bicep)
+
+```bicep
+@description('Location for all resources.')
+param location string = resourceGroup().location
+param privateEndpointName string
+param resourceId string
+param groupId string
+param subnetId string
+
+resource privateEndpoint 'Microsoft.Network/privateEndpoints@2019-04-01' = {
+ name: privateEndpointName
+ location: location
+ properties: {
+ subnet: {
+ id: subnetId
+ }
+ privateLinkServiceConnections: [
+ {
+ name: 'MyConnection'
+ properties: {
+ privateLinkServiceId: resourceId
+ groupIds: [
+ groupId
+ ]
+ requestMessage: ''
+ }
+ }
+ ]
+ }
+}
+
+output privateEndpointNetworkInterface string = privateEndpoint.properties.networkInterfaces[0].id
+```
+
+### [JSON](#tab/arm-json)
+ ```json {
- "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "location": {
Use the following code to create a Resource Manager template named *PrivateEndpo
} ``` ++ **Define the parameters file for the template** Create a parameters file for the template, and name it *PrivateEndpoint_parameters.json*. Add the following code to the parameters file:
+### [Bicep / JSON](#tab/arm-bicep+arm-json)
+ ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "privateEndpointName": {
Create a parameters file for the template, and name it *PrivateEndpoint_paramete
} ``` ++ **Deploy the template by using a PowerShell script** Create a PowerShell script by using the following code. Before you run the script, replace the subscription ID, resource group name, and other variable values with the details for your environment.
cosmos-db How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md
Not available
## Restore a continuous account that is configured with managed identity
-A user-assigned identity is required in the restore request because the source account managed identity (User-assigned and System-assigned identities) cannot be carried over automatically to the target database account.
+A user-assigned identity is required in the restore request because the source account managed identity (User-assigned and System-assigned identities) can't be carried over automatically to the target database account.
### [Azure CLI](#tab/azure-cli)
Use the Azure CLI to restore a continuous account that is already configured usi
--default-identity "UserAssignedIdentity=$identityId" \ ```
-1. Once the restore has completed, the target (restored) account will have the user-assigned identity. If desired, user can update the account to use System-Assigned managed identity.
+1. Once the restore has completed, the target (restored) account has the user-assigned identity. If desired, the user can update the account to use a system-assigned managed identity.
### [PowerShell / Azure Resource Manager template / Azure portal](#tab/azure-powershell+arm-template+azure-portal)
The following conditions are necessary to successfully restore a periodic backup
### How do customer-managed keys affect continuous backups?
-Azure Cosmos DB gives you the option to configure [continuous backups](./continuous-backup-restore-introduction.md) on your account. With continuous backups, you can restore your data to any point in time within the past 30 days. To use continuous backups on an account where customer-managed keys are enabled, you must use a system-assigned or user-assigned managed identity in the Key Vault access policy. Azure Cosmos DB first-party identities is not currently supported on accounts using continuous backups.
+Azure Cosmos DB gives you the option to configure [continuous backups](./continuous-backup-restore-introduction.md) on your account. With continuous backups, you can restore your data to any point in time within the past 30 days. To use continuous backups on an account where customer-managed keys are enabled, you must use a system-assigned or user-assigned managed identity in the Key Vault access policy. Azure Cosmos DB first-party identities are not currently supported on accounts using continuous backups.
+
+To update the user-assigned identity on an account that has customer-managed keys enabled, complete these prerequisite steps:
+
+- Add a user-assigned identity to the Azure Cosmos DB account, and grant it permissions in the Key Vault access policy.
+- Set the user-assigned identity as the default identity via the Azure CLI or an ARM template.
+
+```azurecli
+az cosmosdb update --resource-group MyResourceGroup --name MyAccountName --default-identity UserAssignedIdentity=/subscriptions/MySubscriptionId/resourcegroups/MyResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/MyUserAssignedIdentity
+```
The following conditions are necessary to successfully perform a point-in-time restore:
cosmos-db Troubleshoot Common Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/troubleshoot-common-issues.md
For Socket/Network-related exceptions, potential network connectivity issues mig
To check connectivity, follow these steps: ```
-nc -v <accountName>.documents.azure.com 10250
+nc -v <accountName>.mongocluster.cosmos.azure.com 10260
```
-If TCP connect to port 10250/10255 fails, an environment firewall may be blocking the Azure Cosmos DB connection. Kindly scroll down to the page's bottom to submit a support ticket.
+If the TCP connection to port 10260 fails, an environment firewall might be blocking the Azure Cosmos DB connection. Scroll to the bottom of the page to submit a support ticket.
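As an alternative to `nc`, the same reachability test can be scripted. The following is an illustrative sketch using only Python's standard library; the hostname below is a placeholder for your own cluster's FQDN:

```python
import socket

def check_tcp(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder hostname; substitute your cluster's FQDN:
# check_tcp("<accountName>.mongocluster.cosmos.azure.com", 10260)
```

A `False` result suggests the same firewall condition the `nc` probe would report.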
cosmos-db How To Manage Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-consistency.md
documentClient = new DocumentClient(new Uri(endpoint), authKey, connectionPolicy
// Override consistency at the request level via request options RequestOptions requestOptions = new RequestOptions { ConsistencyLevel = ConsistencyLevel.Eventual };
-var response = await client.CreateDocumentAsync(collectionUri, document, requestOptions);
+var response = await client.ReadDocumentAsync(collectionUri, document, requestOptions);
``` # [.NET SDK V3](#tab/dotnetv3) ```csharp // Override consistency at the request level via request options
-ItemRequestOptions requestOptions = new ItemRequestOptions { ConsistencyLevel = ConsistencyLevel.Strong };
+ItemRequestOptions requestOptions = new ItemRequestOptions { ConsistencyLevel = ConsistencyLevel.Eventual };
var response = await client.GetContainer(databaseName, containerName)
- .CreateItemAsync(
+ .ReadItemAsync(
item, new PartitionKey(itemPartitionKey), requestOptions);
cosmos-db How To Manage Indexing Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-indexing-policy.md
It's optional to specify the order. If not specified, the order is ascending.
"compositeIndexes":[ [ {
- "path":"/name",
+ "path":"/name"
}, {
- "path":"/age",
+ "path":"/age"
} ] ]
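For context, a composite index (note that each `path` entry takes no trailing comma) sits inside a full indexing policy. A minimal sketch, with the optional `order` shown explicitly, might look like this:

```json
{
  "indexingMode": "consistent",
  "includedPaths": [ { "path": "/*" } ],
  "excludedPaths": [ { "path": "/\"_etag\"/?" } ],
  "compositeIndexes": [
    [
      { "path": "/name", "order": "ascending" },
      { "path": "/age", "order": "descending" }
    ]
  ]
}
```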
cosmos-db How To Use Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-use-stored-procedures-triggers-udfs.md
new_item = {
"description":"Pick up strawberries", "isComplete":False }
-result = container.scripts.execute_stored_procedure(sproc=created_sproc,params=[[new_item]], partition_key=new_id)
+result = container.scripts.execute_stored_procedure(sproc=created_sproc,params=[new_item], partition_key=new_id)
```
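The difference between `params=[new_item]` and `params=[[new_item]]` is easy to trip over: the list maps positionally onto the stored procedure's declared parameters. A plain-Python sketch of the same mistake (`run_sproc` is a hypothetical stand-in, not SDK code):

```python
def run_sproc(params):
    """Hypothetical stand-in for execute_stored_procedure: the service
    binds each element of `params` to one sproc parameter, in order."""
    (item,) = params              # the sproc declares a single parameter
    return item["description"]

new_item = {"description": "Pick up strawberries", "isComplete": False}

ok = run_sproc([new_item])       # the dict itself becomes the parameter
# run_sproc([[new_item]]) would bind a one-element *list* to the
# parameter and fail with a TypeError inside the sproc body.
```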
The following code shows how to call a pretrigger using the Python SDK:
```python item = {'category': 'Personal', 'name': 'Groceries', 'description': 'Pick up strawberries', 'isComplete': False}
-container.create_item(item, {'pre_trigger_include': 'trgPreValidateToDoItemTimestamp'})
+
+result = container.create_item(item, pre_trigger_include='trgPreValidateToDoItemTimestamp')
```
cosmos-db Transactional Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/transactional-batch.md
if (response.IsSuccessStatusCode)
``` > [!IMPORTANT]
-> If there's a failure, the failed operation will have a status code of its corresponding error. All the other operations will have a 424 status code (failed dependency). In the example below, the operation fails because it tries to create an item that already exists (409 HttpStatusCode.Conflict). The status code enables one to identify the cause of transaction failure.
+> If there's a failure, the failed operation will have a status code of its corresponding error. All the other operations will have a 424 status code (failed dependency). If the operation fails because it tries to create an item that already exists, a status code of 409 (conflict) is returned. The status code enables one to identify the cause of transaction failure.
### [Java](#tab/java)
if (response.isSuccessStatusCode())
``` > [!IMPORTANT]
-> If there's a failure, the failed operation will have a status code of its corresponding error. All the other operations will have a 424 status code (failed dependency). In the example below, the operation fails because it tries to create an item that already exists (409 HttpStatusCode.Conflict). The status code enables one to identify the cause of transaction failure.
+> If there's a failure, the failed operation will have a status code of its corresponding error. All the other operations will have a 424 status code (failed dependency). If the operation fails because it tries to create an item that already exists, a status code of 409 (conflict) is returned. The status code enables one to identify the cause of transaction failure.
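The status-code semantics in the note above can be sketched outside any SDK: in a failed batch, exactly one operation reports its real error code and every other operation reports 424. An illustrative helper (assumed names, not SDK API):

```python
def find_batch_failure(status_codes):
    """Return (index, status) of the operation that caused a transactional
    batch to fail, or None if every operation succeeded. Illustrative only."""
    for i, status in enumerate(status_codes):
        if status >= 400 and status != 424:   # 424 = failed dependency
            return i, status
    return None

# Second operation hit a conflict (409); the others were rolled back as 424.
find_batch_failure([424, 409, 424])   # -> (1, 409)
find_batch_failure([200, 201])        # -> None
```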
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partitioning-overview.md
If your container could grow to more than a few physical partitions, then you sh
## Use item ID as the partition key
-If your container has a property that has a wide range of possible values, it's likely a great partition key choice. One possible example of such a property is the *item ID*. For small read-heavy containers or write-heavy containers of any size, the *item ID* is naturally a great choice for the partition key.
+> [!NOTE]
+> This section primarily applies to the API for NoSQL. Other APIs, such as the API for Gremlin, do not support the unique identifier as the partition key.
+
+If your container has a property that has a wide range of possible values, it's likely a great partition key choice. One possible example of such a property is the *item ID*. For small read-heavy containers or write-heavy containers of any size, the *item ID* (`/id`) is naturally a great choice for the partition key.
The system property *item ID* exists in every item in your container. You may have other properties that represent a logical ID of your item. In many cases, these IDs are also great partition key choices for the same reasons as the *item ID*.
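One way to see why a unique *item ID* makes a good partition key: hashing unique values spreads logical partitions evenly. A rough simulation follows; Cosmos DB's actual partition hashing is internal to the service, and MD5 here is only for illustration:

```python
import hashlib
from collections import Counter

def simulated_partition(pk_value: str, partition_count: int) -> int:
    """Map a partition key value onto one of N simulated physical partitions."""
    digest = hashlib.md5(pk_value.encode()).digest()
    return int.from_bytes(digest[:8], "big") % partition_count

# 10,000 unique item IDs across 4 simulated partitions land close to 2,500 each.
counts = Counter(simulated_partition(f"item-{i}", 4) for i in range(10_000))
```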
cosmos-db Restore Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/restore-account-continuous-backup.md
az cosmosdb restore \
--restore-timestamp 2020-07-13T16:03:41+0000 \ --resource-group <MyResourceGroup> \ --location "West US" \
- --enable-public-network False
+ --public-network-access Disabled
```
-If `--enable-public-network` is not set, restored account is accessible from public network. Please ensure to pass `False` to the `--enable-public-network` option to prevent public network access for restored account.
+If `--public-network-access` isn't set, the restored account is accessible from the public network. Be sure to pass `Disabled` to the `--public-network-access` option to prevent public network access to the restored account.
> [!NOTE] > For restoring with public network access disabled, you'll need to install the cosmosdb-preview 0.23.0 of CLI extension by executing `az extension update --name cosmosdb-preview `. You would also require version 2.17.1 of the CLI.
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/create.md
This script uses the following commands. Each command in the table links to comm
| [az cosmosdb sql container create](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) | Creates an Azure Cosmos DB for NoSQL container. | | [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. |
+> [!IMPORTANT]
+> Use `az cosmosdb sql database create` to create a NoSQL database. The `az cosmosdb database create` command is deprecated.
+ ## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
cosmos-db Secure Access To Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/secure-access-to-data.md
Azure Cosmos DB provides three ways to control access to your data.
Primary/secondary keys provide access to all the administrative resources for the database account. Each account consists of two keys: a primary key and secondary key. The purpose of dual keys is to let you regenerate, or roll keys, providing continuous access to your account and data. To learn more about primary/secondary keys, see the [Database security](database-security.md#primary-keys) article.
+To see your account keys, navigate to **Keys** from the left menu. Then, select the "view" icon at the right of each key. Select the copy button to copy the selected key. You can hide a key afterwards by selecting the same icon, which updates to a "hide" button.
++ ### <a id="key-rotation"></a> Key rotation and regeneration > [!NOTE]
cost-management-billing Understand Work Scopes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-work-scopes.md
description: This article helps you understand billing and resource management scopes available in Azure and how to use the scopes in Cost Management and APIs. Previously updated : 05/10/2023 Last updated : 09/22/2023
The following tables show how Cost Management features can be utilized by each r
| **Budgets/Reservation utilization alerts** | Create, Read, Update, Delete | Create, Read, Update, Delete | | **Alerts** | Read, Update | Read, Update | | **Exports** | Create, Read, Update, Delete | Create, Read, Update, Delete |
-| **Cost Allocation Rules** | Create, Read, Update, Delete | Create, Read, Update, Delete |
+| **Cost Allocation Rules** | Create, Read, Update, Delete | Read |
#### Department scope
cost-management-billing Avoid Charges Free Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/avoid-charges-free-account.md
Title: Avoid charges with your Azure free account description: Understand why you see charges for your Azure free account. Learn ways to avoid these charges.-++ tags: billing
cost-management-billing Azurestudents Subscription Disabled https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/azurestudents-subscription-disabled.md
Title: Reactivate disabled Azure for Students subscription description: Explains why your Azure for Students subscription is disabled and how to reactivate it.-++ tags: billing
cost-management-billing Check Free Service Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/check-free-service-usage.md
Title: Monitor and track Azure free service usage description: Learn how to check free service usage in the Azure portal. There's no charge for services included in a free account unless you go over the service limits.-++ tags: billing
cost-management-billing Mpa Request Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mpa-request-ownership.md
Title: Transfer Azure product billing ownership to your Microsoft Partner Agreement (MPA) description: Learn how to request billing ownership of Azure billing products from other users for a Microsoft Partner Agreement (MPA).-++ tags: billing Previously updated : 03/29/2023 Last updated : 09/22/2023
The partners should work with the customer to get access to subscriptions. The p
### Power BI connectivity
-The Cost Management connector for Power BI doesn't currently support Microsoft Partner Agreements. The connector only supports Enterprise Agreements and direct Microsoft Customer Agreements. For more information about Cost Management connector support, see [Create visuals and reports with the Cost Management connector in Power BI Desktop](/power-bi/connect-data/desktop-connect-azure-cost-management). After you transfer a subscription from one of the agreements to a Microsoft Partner Agreement, your Power BI reports stop working.
+The Cost Management connector for Power BI supports Enterprise Agreements, direct Microsoft Customer Agreements, and Microsoft Partner Agreements at the Billing Account and Billing Profile scopes. For more information about Cost Management connector support, see [Create visuals and reports with the Cost Management connector in Power BI Desktop](/power-bi/connect-data/desktop-connect-azure-cost-management). After you transfer a subscription from one of the agreements to a Microsoft Partner Agreement, your Power BI reports stop working.
As an alternative, you can always use Exports in Cost Management to save the consumption and usage information and then use it in Power BI. For more information, see [Create and manage exported data](../costs/tutorial-export-acm-data.md).
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
tags: billing
Previously updated : 08/04/2023 Last updated : 09/22/2023
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| EA | MOSP (PAYG) | • Transfer from an EA enrollment to a MOSP subscription requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. | | EA | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers with no currency change are supported. <br><br> • You can't transfer a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency. However, you can [change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope) so that it applies to other subscriptions. | | EA | EA | • Transferring between EA enrollments requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations and savings plans automatically get transferred during EA to EA transfers, except in transfers with a currency change.<br><br> • Transfer within the same enrollment is the same action as changing the account owner. For details, see [Change EA subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). |
-| EA | MCA - Enterprise | • Transferring all enrollment products is completed as part of the MCA transition process from an EA. For more information, see [Complete Enterprise Agreement tasks in your billing account for a Microsoft Customer Agreement](mca-enterprise-operations.md).<br><br> • If you want to transfer specific products, not all of the products in an enrollment, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <br><br>• Self-service reservation and savings plan transfers with no currency change are supported.<br><br> • You can't transfer a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency. You can [change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope) so that it applies to other subscriptions. |
+| EA | MCA - Enterprise | • Transferring all enrollment products is completed as part of the MCA transition process from an EA. For more information, see [Complete Enterprise Agreement tasks in your billing account for a Microsoft Customer Agreement](mca-enterprise-operations.md).<br><br> • If you want to transfer specific products but not all of the products in an enrollment, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <br><br>• Self-service reservation and savings plan transfers with no currency change are supported. When there's a currency change during or after an enrollment transfer, reservations paid for monthly are canceled for the source enrollment. Cancellation happens at the time of the next monthly payment for an individual reservation. The cancellation is intentional and only affects monthly reservation purchases. For more information, see [Transfer Azure Enterprise enrollment accounts and subscriptions](../manage/ea-transfers.md#prerequisites-1).<br><br> • You can't transfer a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency. You can [change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope) so that it applies to other subscriptions. |
| EA | MPA | • Transfer is only allowed for direct EA to MPA. A direct EA is signed between Microsoft and an EA customer.<br><br>• Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Direct Enterprise Agreement (EA). For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> • Transfer from EA Government to MPA isn't supported.<br><br>• There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-or-mca-enterprise-subscriptions-to-a-csp-partner). | | MCA - individual | MOSP (PAYG) | • For details, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. | | MCA - individual | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers are supported. |
data-factory Concepts Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-integration-runtime.md
An Azure integration runtime can:
- Run Data Flows in Azure - Run copy activities between cloud data stores-- Dispatch the following transform activities in a public network: Databricks Notebook/ Jar/ Python activity, HDInsight Hive activity, HDInsight Pig activity, HDInsight MapReduce activity, HDInsight Spark activity, HDInsight Streaming activity, ML Studio (classic) Batch Execution activity, ML Studio (classic) Update Resource activities, Stored Procedure activity, Data Lake Analytics U-SQL activity, .NET custom activity, Web activity, Lookup activity, and Get Metadata activity.
+- Dispatch the following transform activities in a public network:
+ - .NET custom activity
+ - Azure Function activity
+ - Databricks Notebook/ Jar/ Python activity
+ - Data Lake Analytics U-SQL activity
+ - Get Metadata activity
+ - HDInsight Hive activity
+ - HDInsight Pig activity
+ - HDInsight MapReduce activity
+ - HDInsight Spark activity
+ - HDInsight Streaming activity
+ - Lookup activity
+ - Machine Learning Studio (classic) Batch Execution activity
+ - Machine Learning Studio (classic) Update Resource activity
+ - Stored Procedure activity
+ - Validation activity
+ - Web activity
+
### Azure IR network environment
For information about creating and configuring an Azure IR, see [How to create a
A self-hosted IR is capable of: - Running copy activity between a cloud data store and a data store in a private network.-- Dispatching the following transform activities against compute resources in on-premises or Azure Virtual Network: HDInsight Hive activity (BYOC-Bring Your Own Cluster), HDInsight Pig activity (BYOC), HDInsight MapReduce activity (BYOC), HDInsight Spark activity (BYOC), HDInsight Streaming activity (BYOC), ML Studio (classic) Batch Execution activity, ML Studio (classic) Update Resource activities, Stored Procedure activity, Data Lake Analytics U-SQL activity, Custom activity (runs on Azure Batch), Lookup activity, and Get Metadata activity.
+- Dispatching the following transform activities against compute resources in on-premises or Azure Virtual Network:
+ - Azure Function activity
+ - Custom activity (runs on Azure Batch)
+ - Data Lake Analytics U-SQL activity
+ - Get Metadata activity
+ - HDInsight Hive activity (BYOC-Bring Your Own Cluster)
+ - HDInsight Pig activity (BYOC)
+ - HDInsight MapReduce activity (BYOC)
+ - HDInsight Spark activity (BYOC)
+ - HDInsight Streaming activity (BYOC)
+ - Lookup activity
+ - Machine Learning Studio (classic) Batch Execution activity
+ - Machine Learning Studio (classic) Update Resource activity
+ - Machine Learning Execute Pipeline activity
+ - Stored Procedure activity
+ - Validation activity
+ - Web activity
> [!NOTE]
> Use self-hosted integration runtime to support data stores that require a bring-your-own driver, such as SAP HANA, MySQL, etc. For more information, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Continuous Integration Delivery Automate Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-automate-github-actions.md
+
+ Title: Automate continuous integration with GitHub Actions
+description: Learn how to automate continuous integration in Azure Data Factory with GitHub Actions.
++++++ Last updated : 08/29/2023 ++
+# Automate continuous integration and delivery using GitHub Actions
++
+This guide shows how to do continuous integration and delivery in Azure Data Factory with GitHub Actions. The automation uses workflows; a workflow is defined by a YAML file that contains the steps and parameters that make it up.
+
+The workflow uses the [automated publishing capability](continuous-integration-delivery-improvements.md) of Azure Data Factory together with the [Azure Data Factory Deploy Action](https://github.com/marketplace/actions/data-factory-deploy) from the GitHub Marketplace, which runs the [pre- and post-deployment script](continuous-integration-delivery-sample-script.md).
+
+## Requirements
+
+- Azure Subscription - if you don't have one, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+
+- Azure Data Factory - you need two instances: a development instance that is the source of changes, and a second instance to which the workflow propagates those changes. If you don't have an existing Data Factory instance, follow this [tutorial](quickstart-create-data-factory.md) to create one.
+
+- GitHub repository integration set up - if you don't have a GitHub repository connected to your development Data Factory, follow the [tutorial](source-control.md#github-settings) to connect it.
+
+## Create a user-assigned managed identity
+
+You need credentials that authenticate and authorize GitHub Actions to deploy your ARM template to the target Data Factory. This guide uses a user-assigned managed identity (UAMI) with [workload identity federation](../active-directory/workload-identities/workload-identity-federation.md). Workload identity federation lets you access Azure Active Directory (Azure AD) protected resources without having to manage secrets. In this scenario, GitHub Actions can access the Azure resource group and deploy to the target Data Factory instance.
+
+Follow the tutorial to [create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity). Once the UAMI is created, browse to the Overview page and take a note of the Subscription ID and Client ID. We need these values later.
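If you prefer the command line, the identity can also be created with the Azure CLI. This is a minimal sketch under assumed placeholder names (`adf-deploy-uami`, `my-rg`); substitute your own:

```shell
# Create the user-assigned managed identity (names are placeholders)
az identity create \
  --name adf-deploy-uami \
  --resource-group my-rg

# Print the values you need later for the GitHub secrets
az identity show \
  --name adf-deploy-uami \
  --resource-group my-rg \
  --query "{clientId:clientId, tenantId:tenantId}" \
  --output table
```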
+
+## Configure the workload identity federation
+
+1. Follow the tutorial to [configure a federated identity credential on a user-assigned managed identity](../active-directory/workload-identities/workload-identity-federation-create-trust-user-assigned-managed-identity.md#configure-a-federated-identity-credential-on-a-user-assigned-managed-identity).
+
+ Here is an example of a federated identity configuration:
+
+ :::image type="content" source="media/continuous-integration-delivery-github-actions/add-federated-credential.png" lightbox="media/continuous-integration-delivery-github-actions/add-federated-credential.png" alt-text="Screenshot of adding Federated Credential in Azure Portal.":::
+
+2. After creating the credential, navigate to Azure Active Directory Overview page and take a note of the tenant ID. We need this value later.
+
+3. Browse to the Resource Group containing the target Data Factory instance and assign the UAMI the [Data Factory Contributor role](concepts-roles-permissions.md#roles-and-requirements).
+
+> [!IMPORTANT]
+> In order to avoid authorization errors during deployment, be sure to assign the Data Factory Contributor role at the Resource Group level containing the target Data Factory instance.
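The federated credential and role assignment can also be scripted with the Azure CLI. A sketch, assuming the UAMI created earlier and placeholder values in angle brackets:

```shell
# Federate the UAMI with GitHub Actions tokens issued for pushes to the main branch
az identity federated-credential create \
  --name github-actions-deploy \
  --identity-name adf-deploy-uami \
  --resource-group my-rg \
  --issuer "https://token.actions.githubusercontent.com" \
  --subject "repo:<org>/<repo>:ref:refs/heads/main" \
  --audiences "api://AzureADTokenExchange"

# Grant the UAMI the Data Factory Contributor role at the resource group scope
az role assignment create \
  --assignee "<uami-client-id>" \
  --role "Data Factory Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<target-rg>"
```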
+
+## Configure the GitHub secrets
+
+You need to provide your application's Client ID, Tenant ID and Subscription ID to the login action. These values can be stored in GitHub secrets and referenced in your workflow.
+1. Open your GitHub repository and go to Settings.
+
+ :::image type="content" source="media/continuous-integration-delivery-github-actions/github-settings.png" lightbox="media/continuous-integration-delivery-github-actions/github-settings.png" alt-text="Screenshot of navigating to GitHub Settings.":::
+
+2. Select Security -> Secrets and variables -> Actions.
+
+ :::image type="content" source="media/continuous-integration-delivery-github-actions/github-secrets.png" lightbox="media/continuous-integration-delivery-github-actions/github-secrets.png" alt-text="Screenshot of navigating to GitHub Secrets.":::
+
+3. Create secrets for AZURE_CLIENT_ID, AZURE_TENANT_ID, and AZURE_SUBSCRIPTION_ID. Use these values from your Azure Active Directory application for your GitHub secrets:
+
+ | GitHub Secret | Azure Active Directory Application |
+ ||-|
+ | AZURE_CLIENT_ID | Application (client) ID |
+ | AZURE_TENANT_ID | Directory (tenant) ID |
+ | AZURE_SUBSCRIPTION_ID | Subscription ID |
+
+4. Save each secret by selecting Add secret.
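If you use the GitHub CLI, the same three secrets can be set from a terminal instead of the web UI. A sketch, with placeholder values:

```shell
# Store the deployment credentials as GitHub Actions secrets (values are placeholders)
gh secret set AZURE_CLIENT_ID --body "<client-id>" --repo <org>/<repo>
gh secret set AZURE_TENANT_ID --body "<tenant-id>" --repo <org>/<repo>
gh secret set AZURE_SUBSCRIPTION_ID --body "<subscription-id>" --repo <org>/<repo>
```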
+
+## Create the workflow that deploys the Data Factory ARM template
+
+At this point, you must have a Data Factory instance with git integration set up. If not, follow the links in the Requirements section.
+
+The workflow is composed of two jobs:
+
+- **A build job**, which uses the npm package [@microsoft/azure-data-factory-utilities](https://www.npmjs.com/package/@microsoft/azure-data-factory-utilities) to (1) validate all the Data Factory resources in the repository (you get the same validation errors as when "Validate All" is selected in Data Factory Studio), and (2) export the ARM template that is later used to deploy to the QA or staging environment.
+- **A release job**, which takes the exported ARM template artifact and deploys it to the Data Factory instance in the higher environment.
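Before wiring these jobs into a workflow, you can run the same npm commands locally from the build folder to confirm the package works against your repository. A sketch, with placeholder values in angle brackets:

```shell
# From the folder containing package.json (for example ADFroot/build)
npm install

# Validate all Data Factory resources in the repository
npm run build validate <repo-root>/ADFroot /subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.DataFactory/factories/<adf-name>

# Export the ARM template (the third argument is the artifact folder name)
npm run build export <repo-root>/ADFroot /subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.DataFactory/factories/<adf-name> "ExportedArmTemplate"
```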
+
+1. In the repository connected to your Data Factory, create a build folder under the root folder (ADFroot in the following example) and store this package.json file in it:
+
+ ```json
+ {
+ "scripts":{
+ "build":"node node_modules/@microsoft/azure-data-factory-utilities/lib/index"
+ },
+ "dependencies":{
+ "@microsoft/azure-data-factory-utilities":"^1.0.0"
+ }
+ }
+ ```
+
+ The setup should look like:
+
+ :::image type="content" source="media/continuous-integration-delivery-github-actions/saving-package-json-file.png" lightbox="media/continuous-integration-delivery-github-actions/saving-package-json-file.png" alt-text="Screenshot of saving the package.json file in GitHub.":::
+
+2. Navigate to the Actions tab -> New workflow.
+
+ :::image type="content" source="media/continuous-integration-delivery-github-actions/new-workflow.png" lightbox="media/continuous-integration-delivery-github-actions/new-workflow.png" alt-text="Screenshot of creating a new workflow in GitHub.":::
+
+3. Paste the workflow YAML.
+
+```yml
+on:
+ push:
+ branches:
+ - main
+
+permissions:
+ id-token: write
+ contents: read
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+
+ - uses: actions/checkout@v3
+# Installs Node and the npm packages saved in your package.json file in the build
+ - name: Setup Node.js environment
+ uses: actions/setup-node@v3.4.1
+ with:
+ node-version: 14.x
+
+ - name: install ADF Utilities package
+ run: npm install
+ working-directory: ${{github.workspace}}/ADFroot/build # (1) provide the folder location of the package.json file
+
+# Validates all of the Data Factory resources in the repository. You'll get the same validation errors as when "Validate All" is selected.
+ - name: Validate
+ run: npm run build validate ${{github.workspace}}/ADFroot/ /subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/<ADFname> # (2) The validate command needs the root folder location of your repository where all the objects are stored. And the 2nd parameter is the resourceID of the ADF instance
+ working-directory: ${{github.workspace}}/ADFroot/build
+
+
+ - name: Validate and Generate ARM template
+ run: npm run build export ${{github.workspace}}/ADFroot/ /subscriptions/<subID>/resourceGroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/<ADFname> "ExportedArmTemplate" # (3) The build command, as validate, needs the root folder location of your repository where all the objects are stored. And the 2nd parameter is the resourceID of the ADF instance. The 3rd parameter is the exported ARM template artifact name
+ working-directory: ${{github.workspace}}/ADFroot/build
+
+# In order to leverage the artifact in another job, we need to upload it with the upload action
+ - name: upload artifact
+ uses: actions/upload-artifact@v3
+ with:
+ name: ExportedArmTemplate # (4) use the same artifact name you used in the previous export step
+ path: ${{github.workspace}}/ADFroot/build/ExportedArmTemplate
+
+ release:
+ needs: build
+ runs-on: ubuntu-latest
+ steps:
+
+ # we 1st download the previously uploaded artifact so we can leverage it later in the release job
+ - name: Download a Build Artifact
+ uses: actions/download-artifact@v3.0.2
+ with:
+ name: ExportedArmTemplate # (5) Artifact name
+
+ - name: Login via Az module
+ uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+ enable-AzPSSession: true
+
+ - name: data-factory-deploy
+ uses: Azure/data-factory-deploy-action@v1.2.0
+ with:
+ resourceGroupName: # (6) your target ADF resource group name
+ dataFactoryName: # (7) your target ADF name
+ armTemplateFile: # (8) ARM template file name ARMTemplateForFactory.json
+ armTemplateParametersFile: # (9) ARM template parameters file name ARMTemplateParametersForFactory.json
+ additionalParameters: # (10) Parameters which will be replaced in the ARM template. Expected format 'key1=value key2=value keyN=value'. At the minimum here you should provide the target ADF name parameter. Check the ARMTemplateParametersForFactory.json file for all the parameters that are expected in your scenario
+```
+
+Let's walk through the workflow. Its parameters are numbered for your convenience, and comments describe what each one expects.
+
+For the build job, there are four parameters you need to provide. For more detailed information about these, check the npm package [Azure Data Factory utilities](https://www.npmjs.com/package/@microsoft/azure-data-factory-utilities) documentation.
+
+> [!TIP]
+> Use the same artifact name in the Export, Upload and Download actions.
+
+In the release job, there are six more parameters you need to supply. For more details about them, check the [Azure Data Factory Deploy Action GitHub Marketplace listing](https://github.com/marketplace/actions/data-factory-deploy).
+
+## Monitor the workflow execution
+
+Let's test the setup by making some changes in the development Data Factory instance. Create a feature branch, make some changes, and then merge a pull request into the main branch. The resulting push to main triggers the workflow.
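You can also follow the run from a terminal with the GitHub CLI. A sketch, assuming a hypothetical workflow file name of `adf-deploy.yml`:

```shell
# List recent runs of the workflow
gh run list --workflow adf-deploy.yml

# Follow a run live until it completes (pass a run ID from the list above)
gh run watch <run-id>
```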
+
+1. To check the run, browse to the repository, open the Actions tab, and identify your workflow.
+
+ :::image type="content" source="media/continuous-integration-delivery-github-actions/monitoring-workflow.png" lightbox="media/continuous-integration-delivery-github-actions/monitoring-workflow.png" alt-text="Screenshot showing monitoring a workflow in GitHub.":::
+
+2. You can drill down into each run to see the jobs that compose it, their statuses and durations, and the artifact the run created. In this scenario, the artifact is the ARM template exported by the build job.
+
+ :::image type="content" source="media/continuous-integration-delivery-github-actions/monitoring-jobs.png" lightbox="media/continuous-integration-delivery-github-actions/monitoring-jobs.png" alt-text="Screenshot showing monitoring jobs in GitHub.":::
+
+3. You can further drill down by navigating to a job and its steps.
+
+ :::image type="content" source="media/continuous-integration-delivery-github-actions/monitoring-release-job.png" lightbox="media/continuous-integration-delivery-github-actions/monitoring-release-job.png" alt-text="Screenshot showing monitoring the release job in GitHub.":::
+
+4. You can also navigate to the target Data Factory instance to which you deployed and make sure it reflects the latest changes.
+
+## Next steps
+
+- [Continuous integration and delivery overview](continuous-integration-delivery.md)
+- [Manually promote a Resource Manager template to each environment](continuous-integration-delivery-manual-promotion.md)
+- [Use custom parameters with a Resource Manager template](continuous-integration-delivery-resource-manager-custom-parameters.md)
+- [Linked Resource Manager templates](continuous-integration-delivery-linked-templates.md)
+- [Using a hotfix production environment](continuous-integration-delivery-hotfix-environment.md)
+- [Sample pre- and post-deployment script](continuous-integration-delivery-sample-script.md)
databox-gateway Data Box Gateway 1905 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-1905-release-notes.md
Title: Azure Data Box Gateway 1905 release notes| Microsoft Docs description: Describes critical open issues and resolutions for the Azure Data Box Gateway 1905 running general availability release. -+ Last updated 11/11/2020-+ # Azure Data Box Edge and Azure Data Box Gateway 1905 release notes
databox-gateway Data Box Gateway 1906 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-1906-release-notes.md
Title: Azure Data Box Gateway & Azure Data Box Edge 1906 release notes| Microsoft Docs description: Describes critical open issues and resolutions for the Azure Data Box Gateway and Azure Data Box Edge running 1906 release. -+ Last updated 11/11/2020-+ # Azure Data Box Edge and Azure Data Box Gateway 1906 release notes
databox-gateway Data Box Gateway 1911 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-1911-release-notes.md
Title: Azure Stack Edge & Azure Data Box Gateway 1911 release notes| Microsoft Docs description: Describes critical open issues and resolutions for the Azure Stack Edge and Data Box Gateway running 1911 release. -+ Last updated 11/11/2020-+ # Azure Stack Edge and Azure Data Box Gateway 1911 release notes
databox-gateway Data Box Gateway 2007 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-2007-release-notes.md
Title: Azure Stack Edge & Azure Data Box Gateway 2007 release notes| Microsoft Docs description: Describes critical open issues and resolutions for the Azure Stack Edge and Data Box Gateway running 2007 release. -+ Last updated 11/11/2020-+ # Azure Stack Edge and Azure Data Box Gateway 2007 release notes
databox-gateway Data Box Gateway 2101 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-2101-release-notes.md
Title: Azure Data Box Gateway 2101 release notes| Microsoft Docs description: Describes critical open issues and resolutions for the Azure Data Box Gateway running 2101 release. -+ Last updated 01/29/2021-+ # Azure Data Box Gateway 2101 release notes
databox-gateway Data Box Gateway 2105 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-2105-release-notes.md
Title: Azure Data Box Gateway 2105 release notes| Microsoft Docs description: Describes critical open issues and resolutions for the Azure Data Box Gateway running 2105 release. -+ Last updated 01/07/2022-+ # Azure Data Box Gateway 2105 release notes
databox-gateway Data Box Gateway 2301 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-2301-release-notes.md
Title: Azure Data Box Gateway 2301 release notes| Microsoft Docs description: Describes critical open issues and resolutions for the Azure Data Box Gateway running 2301 release. -+ Last updated 02/15/2023-+ # Azure Data Box Gateway 2301 release notes
databox-gateway Data Box Gateway Apply Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-apply-updates.md
Title: Install Update on Azure Data Box Gateway series device | Microsoft Docs description: Describes how to apply updates using the Azure portal and local web UI for Azure Data Box Gateway series device -+ Last updated 10/14/2020-+ # Update your Azure Data Box Gateway
databox-gateway Data Box Gateway Connect Powershell Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-connect-powershell-interface.md
Title: Use Windows PowerShell to connect to and manage Azure Data Box Gateway device description: Describes how to connect to and then manage Data Box Gateway via the Windows PowerShell interface. -+ Last updated 10/20/2020-+ # Manage an Azure Data Box Gateway device via Windows PowerShell
databox-gateway Data Box Gateway Deploy Add Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-deploy-add-shares.md
Title: Transfer data with Azure Data Box Gateway | Microsoft Docs description: Learn how to add and connect to shares on your Azure Data Box Gateway, then your Data Box Gateway device can transfer data to Azure. -+ Last updated 07/06/2021-+ #Customer intent: As an IT admin, I need to understand how to add and connect to shares on Data Box Gateway so I can use it to transfer data to Azure. # Tutorial: Transfer data with Azure Data Box Gateway
databox-gateway Data Box Gateway Deploy Connect Setup Activate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-deploy-connect-setup-activate.md
Title: Connect to, configure, and activate Azure Data Box Gateway in Azure portal description: Third tutorial to deploy Data Box Gateway instructs you to connect, set up, and activate your virtual device. -+ Last updated 03/18/2019-+ #Customer intent: As an IT admin, I need to understand how to connect and activate Data Box Gateway so I can use it to transfer data to Azure. # Tutorial: Connect, set up, activate Azure Data Box Gateway
databox-gateway Data Box Gateway Deploy Prep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-deploy-prep.md
Title: Tutorial on prepare Azure portal to deploy Data Box Gateway | Microsoft Docs description: First tutorial to deploy Azure Data Box Gateway involves preparing the Azure portal. -+ Last updated 03/01/2021-+ #Customer intent: As an IT admin, I need to understand how to prepare the portal to deploy Data Box Gateway so I can use it to transfer data to Azure.
databox-gateway Data Box Gateway Deploy Provision Hyperv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-deploy-provision-hyperv.md
Title: Tutorial on provision Azure Data Box Gateway in Hyper-V | Microsoft Docs description: Second tutorial to deploy Azure Data Box Gateway involves provisioning a virtual device in Hyper-V. -+ Last updated 05/26/2021-+ #Customer intent: As an IT admin, I need to understand how to provision a virtual device for Data Box Gateway in Hyper-V so I can use it to transfer data to Azure. # Tutorial: Provision Azure Data Box Gateway in Hyper-V
databox-gateway Data Box Gateway Deploy Provision Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-deploy-provision-vmware.md
Title: Tutorial on provision Azure Data Box Gateway in VMware | Microsoft Docs description: Second tutorial to deploy Azure Data Box Gateway involves provisioning a virtual device in VMware. -+ Last updated 11/10/2021-+ #Customer intent: As an IT admin, I need to understand how to provision a virtual device for Data Box Gateway in VMware so I can use it to transfer data to Azure. # Tutorial: Provision Azure Data Box Gateway in VMware
databox-gateway Data Box Gateway Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-limits.md
Title: Azure Data Box Gateway limits | Microsoft Docs description: Describes system limits and recommended sizes for the Microsoft Azure Data Box Gateway. -+ Last updated 10/20/2020-+ # Azure Data Box Gateway limits
databox-gateway Data Box Gateway Manage Access Power Connectivity Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-manage-access-power-connectivity-mode.md
Title: Azure Data Box Gateway device access, power, and connectivity mode description: Describes how to manage access, power, and connectivity mode for the Azure Data Box Gateway device that helps transfer data to Azure -+ Last updated 10/14/2020-+
databox-gateway Data Box Gateway Manage Bandwidth Schedules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-manage-bandwidth-schedules.md
Title: Manage bandwidth schedules on Azure Data Box Gateway | Microsoft Docs description: Describes how to use the Azure portal to manage bandwidth schedules on your Azure Data Box Gateway. -+ Last updated 10/14/2020-+ # Use the Azure portal to manage bandwidth schedules on your Azure Data Box Gateway
databox-gateway Data Box Gateway Manage Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-manage-shares.md
Title: Azure Data Box Gateway manage shares | Microsoft Docs description: Describes how to use the Azure portal to manage shares on your Azure Data Box Gateway. -+ Last updated 03/25/2019-+ # Use the Azure portal to manage shares on your Azure Data Box Gateway
databox-gateway Data Box Gateway Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-manage-users.md
Title: Azure Data Box Gateway manage users | Microsoft Docs description: Describes how to use the Azure portal to manage users on your Azure Data Box Gateway. -+ Last updated 03/25/2019-+ # Use the Azure portal to manage users on your Azure Data Box Gateway
databox-gateway Data Box Gateway Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-monitor.md
Title: Monitor your Azure Data Box Gateway device | Microsoft Docs description: Describes how to use the Azure portal and local web UI to monitor your Azure Data Box Gateway. -+ Last updated 10/20/2020-+ # Monitor your Azure Data Box Gateway
databox-gateway Data Box Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-overview.md
Title: Microsoft Azure Data Box Gateway overview | Microsoft Docs description: Describes Azure Data Box Gateway, a virtual appliance storage solution that enables you to transfer data into Azure -+ Last updated 05/26/2021-+ #Customer intent: As an IT admin, I need to understand what Data Box Gateway is and how it works so I can use it to send data to Azure. # What is Azure Data Box Gateway?
databox-gateway Data Box Gateway Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-release-notes.md
Title: Azure Data Box Gateway General Availability release notes| Microsoft Docs description: Describes critical open issues and resolutions for the Azure Data Box Gateway running general availability release. -+ Last updated 11/11/2020-+ # Azure Data Box Edge/Azure Data Box Gateway General Availability release notes
databox-gateway Data Box Gateway Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-security.md
Title: Azure Data Box Gateway security | Microsoft Docs description: Describes the security and privacy features that protect your Azure Data Box Gateway virtual device, service, and data, on-premises and in the cloud. -+ Last updated 10/20/2020-+ # Azure Data Box Gateway security and data protection
databox-gateway Data Box Gateway System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-system-requirements.md
Title: Microsoft Azure Data Box Gateway system requirements| Microsoft Docs description: Learn about the software and networking requirements for your Azure Data Box Gateway -+ Last updated 03/24/2022-+ # Azure Data Box Gateway system requirements
databox-gateway Data Box Gateway Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-troubleshoot.md
Title: Use the Azure portal to troubleshoot Azure Data Box Gateway | Microsoft Docs description: Learn how to troubleshoot issues on your Azure Data Box Gateway. You can run diagnostics, collect information for Support, and use logs to troubleshoot. -+ Last updated 06/09/2021-+ # Troubleshoot your Azure Data Box Gateway issues
databox-gateway Data Box Gateway Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-use-cases.md
Title: Microsoft Azure Data Box Gateway use cases | Microsoft Docs description: Describes the use cases for Azure Data Box Gateway, a virtual appliance storage solution that lets you transfer data to Azure, -+ Last updated 09/28/2021-+ # Use cases for Azure Data Box Gateway
databox-online Azure Stack Edge Gpu 2309 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2309-release-notes.md
+
+ Title: Azure Stack Edge 2309 release notes
+description: Describes critical open issues and resolutions for the Azure Stack Edge running 2309 release.
++
+
+++ Last updated : 09/21/2023+++
+# Azure Stack Edge 2309 release notes
++
+The following release notes identify the critical open issues and the resolved issues for the 2309 release for your Azure Stack Edge devices. Features and issues that correspond to a specific model of Azure Stack Edge are called out wherever applicable.
+
+The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
+
+This article applies to the **Azure Stack Edge 2309** release, which maps to software version **3.2.2380.1652**.
+
+## Supported update paths
+
+To apply the 2309 update, your device must be running version 2203 or later.
+
+ - If you are not running the minimum required version, you'll see this error:
+
+ *Update package cannot be installed as its dependencies are not met.*
+
+ - You can update to 2303 from 2207 or later, and then install 2309.
+
+You can update to the latest version using the following update paths:
+
+| Current version of Azure Stack Edge software and Kubernetes | Update to Azure Stack Edge software and Kubernetes | Desired update to 2309 |
+| --| --| --|
+|2207 |2303 |2309 |
+|2209 |2303 |2309 |
+|2210 |2303 |2309 |
+|2301 |2303 |2309 |
+|2303 |Directly to |2309 |
+
+## What's new
+
+The 2309 release has the following new features and enhancements:
+
+- Beginning with this release, you have the option of selecting Kubernetes profiles based on your workloads. You can also configure the Maximum Transmission Unit (MTU) for the network interfaces on your device.
+- Starting March 2023, Azure Stack Edge devices are required to be on the 2301 release or later to create a Kubernetes cluster. In preparation for this requirement, it is highly recommended that you update to the latest version as soon as possible.
+- You can deploy Azure Kubernetes service (AKS) on an Azure Stack Edge cluster. This feature is supported only for SAP and PMEC customers. For more information, see [Deploy AKS on Azure Stack Edge](azure-stack-edge-deploy-aks-on-azure-stack-edge.md).
+
+## Issues fixed in this release
+
+| No. | Feature | Issue |
+| | | |
+|**1.**|Core Azure Stack Edge platform and Azure Kubernetes Service (AKS) on Azure Stack Edge |Critical bug fixes to improve workload availability during two-node Azure Stack Edge update of core Azure Stack Edge platform and AKS on Azure Stack Edge. |
+
+<!--## Known issues in this release
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+|**1.**|Need known issues in 2303 |-->
+
+## Known issues from previous releases
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+| **1.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <br> 1. In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**<br> 2. Download `sqlcmd` on your client machine from [SQL command utility](/sql/tools/sqlcmd-utility). <br> 3. Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.<br> 4. Final command will look like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd". After this, steps 3-4 from the current documentation should be identical. |
+| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** are NOT supported |For Blob endpoints, partial updates of blobs after a Refresh, may result in the updates not getting uploaded to the cloud. For example, sequence of actions such as:<br> 1. Create blob in cloud. Or delete a previously uploaded blob from the device.<br> 2. Refresh blob from the cloud into the appliance using the refresh functionality.<br> 3. Update only a portion of the blob using Azure SDK REST APIs. These actions can result in the updated sections of the blob to not get updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.|
|**3.**|Throttling|During throttling, if new writes to the device aren't allowed, writes by the NFS client fail with a "Permission Denied" error.| The error shows as follows:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>`mkdir: can't create directory 'test': Permission denied`|
+|**4.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits aren't provided for AzCopy, it could potentially send a large number of requests to the device, resulting in issues with the service.|
+|**5.**|Tiered storage accounts|The following apply when using tiered storage accounts:<br> - Only block blobs are supported. Page blobs aren't supported.<br> - There's no snapshot or copy API support.<br> - Hadoop workload ingestion through `distcp` isn't supported as it uses the copy operation heavily.||
+|**6.**|NFS share connection|If multiple processes are copying to the same share, and the `nolock` attribute isn't used, you may see errors during the copy.|The `nolock` attribute must be passed to the mount command to copy files to the NFS share. For example: `C:\Users\aseuser mount -o anon \\10.1.1.211\mnt\vms Z:`.|
+|**7.**|Kubernetes cluster|When applying an update on your device that is running a Kubernetes cluster, the Kubernetes virtual machines will restart and reboot. In this instance, only pods that are deployed with replicas specified are automatically restored after an update. |If you have created individual pods outside a replication controller without specifying a replica set, these pods won't be restored automatically after the device update. You'll need to restore these pods.<br>A replica set replaces pods that are deleted or terminated for any reason, such as node failure or disruptive node upgrade. For this reason, we recommend that you use a replica set even if your application requires only a single pod.|
+|**8.**|Kubernetes cluster|Kubernetes on Azure Stack Edge Pro is supported only with Helm v3 or later. For more information, go to [Frequently asked questions: Removal of Tiller](https://v3.helm.sh/docs/faq/).| |
+|**9.**|Kubernetes |Port 31000 is reserved for Kubernetes Dashboard. Port 31001 is reserved for Edge container registry. Similarly, in the default configuration, the IP addresses 172.28.0.1 and 172.28.0.10, are reserved for Kubernetes service and Core DNS service respectively.|Don't use reserved IPs.|
+|**10.**|Kubernetes |Kubernetes doesn't currently allow multi-protocol LoadBalancer services. For example, a DNS service that would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. <br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing).|
+|**11.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules may require modifications to run on IoT Edge on Azure Stack Edge device.|For more information, see [Run existing IoT Edge modules from Azure Stack Edge Pro FPGA devices on Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-modify-fpga-modules-gpu.md).|
+|**12.**|Kubernetes |File-based bind mounts aren't supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts can't be bound to paths in IoT Edge containers. If possible, map the parent directory.|
+|**13.**|Kubernetes |If you bring your own certificates for IoT Edge and add those certificates on your Azure Stack Edge device after the compute is configured on the device, the new certificates aren't picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.|
+|**14.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected. <br> - **Status** column in **Certificates** page. <br> - **Security** tile in **Get started** page. <br> - **Configuration** tile in **Overview** page.<br> |
+|**15.**|Certificates|Alerts related to signing chain certificates aren't removed from the portal even after uploading new signing chain certificates.| |
+|**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. ||
+|**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.|
+|**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for the Event Grid IoT Edge module to function on an Azure Stack Edge device and for other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" with a double underscore. For more information, see this [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201).|
+|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). |
+|**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| |
+|**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| |
+|**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up Kubernetes. |If your device has 2 GPUs, you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available GPU. |
+|**23.**|Custom script VM extension |There's a known issue in the Windows VMs that were created in an earlier release and the device was updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update causing the extension deployment to time out. | To work around this issue: <br> 1. Connect to the Windows VM using remote desktop protocol (RDP). <br> 2. Make sure that the `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. <br> 3. If the `waappagent.exe` isn't running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.<br> 4. While the `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. <br> 5. After you kill the process, the process starts running again with the newer version. <br> 6. Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.<br> 7. [Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). |
+|**24.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting isn't retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
+|**25.**|Wi-Fi |Wi-Fi doesn't work on Azure Stack Edge Pro 2 in this release. | |
+|**26.**|Azure IoT Edge |The managed Azure IoT Edge solution on Azure Stack Edge is running on an older, obsolete IoT Edge runtime that is at end of life. For more information, see [IoT Edge v1.1 EoL: What does that mean for me?](https://techcommunity.microsoft.com/t5/internet-of-things-blog/iot-edge-v1-1-eol-what-does-that-mean-for-me/ba-p/3662137). Although the solution does not stop working past end of life, there are no plans to update it. |To run the latest version of Azure IoT Edge [LTSs](../iot-edge/version-history.md#version-history) with the latest updates and features on their Azure Stack Edge, we **recommend** that you deploy a [customer self-managed IoT Edge solution](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md) that runs on a Linux VM. For more information, see [Move workloads from managed IoT Edge on Azure Stack Edge to an IoT Edge solution on a Linux VM](azure-stack-edge-move-to-self-service-iot-edge.md). |
+|**27.**|AKS on Azure Stack Edge |When you update your AKS on Azure Stack Edge deployment from a previous preview version to 2303 release, there is an additional nodepool rollout. |The update may take longer. |
+|**28.**|Azure portal |When the Arc deployment fails in this release, you will see a generic *NO PARAM* error code, because not all errors are propagated in the portal. |There is no workaround for this behavior in this release. |
+|**29.**|AKS on Azure Stack Edge |In this release, you can't modify the virtual networks once the AKS cluster is deployed on your Azure Stack Edge cluster.| To modify the virtual network, you will need to delete the AKS cluster, then modify the virtual networks, and then recreate the AKS cluster on your Azure Stack Edge. |
+|**30.**|AKS on Azure Stack Edge |In this release, attaching the persistent volume claim (PVC) takes a long time. As a result, some pods that use persistent volumes (PVs) come up slowly after the host reboots. |A workaround is to restart the nodepool VM by connecting via the Windows PowerShell interface of the device. |
+
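As a quick sketch of the ":"-to-double-underscore rule from issue 18 above — the configuration key here is a hypothetical example, not one taken from the product:

```shell
# Illustration only: a hypothetical .NET configuration key and the
# Kubernetes-safe environment variable name it maps to (":" -> "__").
dotnet_key="Logging:LogLevel:Default"
env_name="${dotnet_key//:/__}"   # bash pattern substitution: replace every ":"
echo "$env_name"                 # prints Logging__LogLevel__Default
```

The same renamed variable is what the .NET configuration provider reads back, per the ASP.NET Core documentation linked in the table.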
+## Next steps
+
+- [Update your device](azure-stack-edge-gpu-install-update.md)
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md
Previously updated : 09/20/2023 Last updated : 09/21/2023 # Update your Azure Stack Edge Pro GPU
The current update is Update 2309. This update installs two updates, the device
The associated versions for this update are:

-- Device software version: Azure Stack Edge 2309 (3.2.2380.1632)
-- Device Kubernetes version: Azure Stack Kubernetes Edge 2309 (3.2.2380.1632)
-- Kubernetes server version: v1.24.6
+- Device software version: Azure Stack Edge 2309 (3.2.2380.1652)
+- Device Kubernetes version: Azure Stack Kubernetes Edge 2309 (3.2.2380.1652)
+- Kubernetes server version: v1.25.5
- IoT Edge version: 0.1.0-beta15
-- Azure Arc version: 1.10.6
-- GPU driver version: 515.65.01
-- CUDA version: 11.7
+- Azure Arc version: 1.11.7
+- GPU driver version: 530.30.02
+- CUDA version: 12.1
-For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2304-release-notes.md).
+For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2309-release-notes.md).
-**To apply the 2309 update, your device must be running version 2207 or later.**
+**To apply the 2309 update, your device must be running version 2203 or later.**
- If you are not running the minimum required version, you'll see this error: *Update package cannot be installed as its dependencies are not met.*
-- You can update to 2207 from 2106 or later, and then install 2309.
+- You can update to 2203 from 2106 or later, and then install 2309.
Supported update paths:
If you are running 2303, you can update both your device version and Kubernetes
In Azure portal, the process will require two clicks: the first update gets your device version to 2303 and your Kubernetes version to 2210, and the second update gets your Kubernetes version upgraded to 2309.
-From the local UI, you will have to run each update separately: update the device version to 2303, then update Kubernetes version to 2210, and then update Kubernetes version to 2303, and then the third update gets both the device and the Kubernetes version to 2309.
+From the local UI, you will have to run each update separately: update the device version to 2303, update Kubernetes version to 2210, update Kubernetes version to 2303, and then the third update gets both the device version and Kubernetes version to 2309.
### Updates for a single-node vs two-node
dev-box How To Determine Your Quota Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-determine-your-quota-usage.md
Last updated 08/21/2023
# Determine resource usage and quota
-To ensure that resources are available for customers, Microsoft Dev Box has a limit on the number of each type of resource that can be used in a subscription. This limit is called a quota. You can see the default quota for each resource type by subscription type here:
+To ensure that resources are available for customers, Microsoft Dev Box has a limit on the number of each type of resource that can be used in a subscription. This limit is called a quota.
Keeping track of how your quota of VM cores is being used across your subscriptions can be difficult. You may want to know what your current usage is, how much you have left, and in what regions you have capacity. To help you understand where and how you're using your quota, Azure provides the Usage + Quotas page.
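The "how much you have left" part of the question is simple arithmetic once the Usage + Quotas page gives you the two numbers. As a back-of-the-envelope illustration with hypothetical values (a 20-core regional limit with 14 cores in use):

```shell
# Hypothetical numbers: remaining headroom is the regional vCPU limit
# minus the cores your dev boxes currently consume.
quota_limit=20
cores_used=14
remaining=$((quota_limit - cores_used))
echo "Remaining cores: $remaining"   # prints: Remaining cores: 6
```

If `remaining` is smaller than the core count of the dev box size you want to create, you need a quota increase or a different region.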
dev-box How To Hibernate Your Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-hibernate-your-dev-box.md
In addition, you can also double select on your dev box in the list of VMs you
You can use the CLI to hibernate your dev box:

```azurecli-interactive
-az devcenter dev dev-box stop --name <YourDevBoxName> --dev-center-name <YourDevCenterName> --project-name <YourProjectName> --user-id "me" --hibernate false
+az devcenter dev dev-box stop --name <YourDevBoxName> --dev-center-name <YourDevCenterName> --project-name <YourProjectName> --user-id "me" --hibernate true
```

To learn more about managing your dev box from the CLI, see: [devcenter reference](/cli/azure/devcenter/dev/dev-box?view=azure-cli-latest&preserve-view=true).
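Filled in with concrete values, the command reads as below. The resource names are hypothetical placeholders, and actually running it requires the Azure CLI `devcenter` extension and a signed-in session, so this sketch only assembles and prints the command:

```shell
# Sketch only: hypothetical resource names; substitute your own.
devbox="my-dev-box"; devcenter="my-dev-center"; project="my-project"
cmd="az devcenter dev dev-box stop --name $devbox --dev-center-name $devcenter --project-name $project --user-id me --hibernate true"
echo "$cmd"
```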
energy-data-services Concepts Tier Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-tier-details.md
The Standard tier of Azure Data Manager for energy is ideal for customers' produ
The standard tier is designed for production scenarios as it provides high availability, reliability and scale. The Standard tier includes the following:

* Availability Zones
-* Disaster Recovery
+* Disaster Recovery\*
* Financial Backed Service Level Agreement
* Higher Database Throughput
* Higher data partition maximum
* Higher support prioritization
+(\*) Certain regions don't support customer disaster recovery scenarios. For details on which regions support disaster recovery, see [Reliability in Azure Data Manager for Energy](reliability-energy-data-services.md).
## Tier details

| Features | Developer Tier | Standard Tier |
Support | Yes | Yes
Azure Customer Managed Encryption Keys|Yes| Yes
Azure Private Links|Yes| Yes
Financial Backed Service Level Agreement (SLA) Credits | No | Yes
-Disaster Recovery |No| Yes
+Disaster Recovery |No| Yes\*
Availability Zones |No| Yes
Database Throughput |Low| High
Included Data Partition | 1| 1
Maximum Data Partition |5 | 10
+(\*) Certain regions don't support customer disaster recovery scenarios. For details on which regions support disaster recovery, see [Reliability in Azure Data Manager for Energy](reliability-energy-data-services.md).
+
+> [!IMPORTANT]
+> Disaster recovery is currently not available in the Brazil South region. For more information, please contact your Microsoft sales or customer representative.
+ ## How to participate
-You can easily create a Developer tier resource by going to Azure Marketplace, [create portal](https://portal.azure.com/#create/Microsoft.AzureDataManagerforEnergy), and select your desired tier.
+You can easily create a Developer tier resource by going to Azure Marketplace, [create portal](https://portal.azure.com/#create/Microsoft.AzureDataManagerforEnergy), and select your desired tier.
energy-data-services Reliability Energy Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/reliability-energy-data-services.md
The Azure Data Manager for Energy supports availability zones in the following r
||-|
| South Central US | North Europe |
| East US | West Europe |
+| Brazil South | |
### Zone down experience

During a zone-wide outage, no action is required during zone recovery. There may be a brief degradation of performance until the service self-heals and rebalances underlying capacity to adjust to healthy zones. During this period, you may experience 5xx errors and you may have to retry API calls until the service is restored.
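The advice to retry API calls until the service is restored maps to a plain retry loop in a client. The sketch below is generic and hypothetical — `flaky` simulates a call that fails twice and then succeeds, standing in for a real service call that returns 5xx during recovery:

```shell
# Generic retry sketch for transient 5xx-style failures.
# "flaky" simulates a call that fails twice, then succeeds.
calls=0
flaky() { calls=$((calls + 1)); [ "$calls" -ge 3 ]; }
attempts=0
until flaky; do
  attempts=$((attempts + 1))
  # a real client would sleep with backoff here before retrying
done
echo "succeeded after $((attempts + 1)) calls"
```

A production client would add a retry cap and exponential backoff rather than looping indefinitely.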
Azure Data Manager for Energy is a regional service and, therefore, is susceptib
:::image type="content" source="media/reliability-energy-data-services/cross-region-disaster-recovery.png" alt-text="Diagram of Azure data manager for energy cross region disaster recovery workflow." lightbox="media/reliability-energy-data-services/cross-region-disaster-recovery.png":::
-Below is the list of primary and secondary regions:
+The following is the list of primary and secondary regions where disaster recovery is supported:
| Geography | Primary | Secondary |
||-||
Below is the list of primary and secondary regions:
Azure Data Manager for Energy uses Azure Storage, Azure Cosmos DB and Elasticsearch index as underlying data stores for persisting your data partition data. These data stores offer high durability, availability, and scalability. Azure Data Manager for Energy uses [geo-zone-redundant storage](../storage/common/storage-redundancy.md#geo-zone-redundant-storage) or GZRS to automatically replicate data to a secondary region that's hundreds of miles away from the primary region. The same security features enabled in the primary region (for example, encryption at rest using your encryption key) to protect your data are applicable to the secondary region. Similarly, Azure Cosmos DB is a globally distributed data service, which replicates the metadata (catalog) across regions. Elasticsearch index snapshots are taken at regular intervals and geo-replicated to the secondary region. All inflight data are ephemeral and therefore subject to loss. For example, in-transit data that is part of an on-going ingestion job that isn't persisted yet is lost, and you must restart the ingestion process upon recovery.
+> [!IMPORTANT]
+> Disaster recovery is not available in the following regions. For more information, please contact your Microsoft sales or customer representative.
+> 1. Brazil South
+
#### Set up disaster recovery and outage detection

Azure Data Manager for Energy service continuously monitors service health in the primary region. If a hard service down failure is detected in the primary region, we attempt recovery before initiating failover to the secondary region on your behalf. We will notify you about the failover progress. Once the failover completes, you could connect to the Azure Data Manager for Energy resource in the secondary region and continue operations. However, there could be slight degradation in performance due to any capacity constraints in the secondary region.
If you [set up private links](how-to-set-up-private-links.md) to your Azure Data
## Next steps > [!div class="nextstepaction"]
-> [Reliability in Azure](../reliability/availability-zones-overview.md)
+> [Reliability in Azure](../reliability/availability-zones-overview.md)
event-grid Mqtt Topic Spaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-topic-spaces.md
A topic space represents multiple topics through a set of topic templates. Topic
[!INCLUDE [mqtt-preview-note](./includes/mqtt-preview-note.md)]
-Topic spaces are used to simplify access control management by enabling you to grant publish or subscribe access to a group of topics at once instead of managing access for each individual topic. To publish or subscribe to any MQTT topic, you need to:
+Topic spaces are used to simplify access control management by enabling you to scope publish or subscribe access for a client group, to a group of topics at once instead of managing access for each individual topic. To publish or subscribe to any MQTT topic, you need to:
1. Create a **client** resource for each client that needs to communicate over MQTT.
2. Create a **client group** that includes the clients that need access to publish or subscribe on the same MQTT topic.
expressroute Expressroute Howto Erdirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-erdirect.md
ExpressRoute Direct and ExpressRoute circuit(s) in different subscriptions or Az
```powershell
Select-AzSubscription -Subscription "<SubscriptionID or SubscriptionName>"
- New-AzExpressRouteCircuit -Name $Name -ResourceGroupName $RGName -Location $Location -SkuTier $SkuTier -SkuFamily $SkuFamily -BandwidthInGbps $BandwidthInGbps -AuthorizationKey $ERDirectAuthorization.AuthorizationKey
+ New-AzExpressRouteCircuit -Name $Name -ResourceGroupName $RGName -Location $Location -SkuTier $SkuTier -SkuFamily $SkuFamily -BandwidthInGbps $BandwidthInGbps -ExpressRoutePort $ERPort -AuthorizationKey $ERDirectAuthorization.AuthorizationKey
```

## Next steps
firewall Tutorial Hybrid Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-hybrid-portal.md
Title: Deploy and configure Azure Firewall in a hybrid network using the Azure portal
-description: In this article, you learn how to deploy and configure Azure Firewall using Azure portal.
+ Title: Deploy and configure Azure Firewall in a hybrid network by using the Azure portal
+description: In this article, you learn how to deploy and configure Azure Firewall by using the Azure portal.
#Customer intent: As an administrator, I want to control network access from an on-premises network to an Azure virtual network.
-# Deploy and configure Azure Firewall in a hybrid network using the Azure portal
+# Deploy and configure Azure Firewall in a hybrid network by using the Azure portal
When you connect your on-premises network to an Azure virtual network to create a hybrid network, the ability to control access to your Azure network resources is an important part of an overall security plan.
-You can use Azure Firewall to control network access in a hybrid network using rules that define allowed and denied network traffic.
+You can use Azure Firewall to control network access in a hybrid network by using rules that define allowed and denied network traffic.
For this article, you create three virtual networks:

-- **VNet-Hub** - the firewall is in this virtual network.
-- **VNet-Spoke** - the spoke virtual network represents the workload located on Azure.
-- **VNet-Onprem** - The on-premises virtual network represents an on-premises network. In an actual deployment, it can be connected to with either a VPN or ExpressRoute connection. For simplicity, this procedure uses a VPN gateway connection, and an Azure-located virtual network is used to represent an on-premises network.
+- **VNet-Hub**: The firewall is in this virtual network.
+- **VNet-Spoke**: The spoke virtual network represents the workload located on Azure.
+- **VNet-Onprem**: The on-premises virtual network represents an on-premises network. In an actual deployment, you can connect to it by using either a virtual private network (VPN) connection or an Azure ExpressRoute connection. For simplicity, this article uses a VPN gateway connection, and an Azure-located virtual network represents an on-premises network.
-![Firewall in a hybrid network](media/tutorial-hybrid-ps/hybrid-network-firewall.png)
+![Diagram that shows a firewall in a hybrid network.](media/tutorial-hybrid-ps/hybrid-network-firewall.png)
-If you want to use Azure PowerShell instead to complete this procedure, see [Deploy and configure Azure Firewall in a hybrid network using Azure PowerShell](tutorial-hybrid-ps.md).
+If you want to use Azure PowerShell instead to complete the procedures in this article, see [Deploy and configure Azure Firewall in a hybrid network by using Azure PowerShell](tutorial-hybrid-ps.md).
> [!NOTE]
-> This article uses classic Firewall rules to manage the firewall. The preferred method is to use [Firewall Policy](../firewall-manager/policy-overview.md). To complete this procedure using Firewall Policy, see [Tutorial: Deploy and configure Azure Firewall and policy in a hybrid network using the Azure portal](tutorial-hybrid-portal-policy.md).
+> This article uses classic Azure Firewall rules to manage the firewall. The preferred method is to use an [Azure Firewall Manager policy](../firewall-manager/policy-overview.md). To complete this procedure by using an Azure Firewall Manager policy, see [Tutorial: Deploy and configure Azure Firewall and policy in a hybrid network using the Azure portal](tutorial-hybrid-portal-policy.md).
## Prerequisites
-A hybrid network uses the hub-and-spoke architecture model to route traffic between Azure VNets and on-premises networks. The hub-and-spoke architecture has the following requirements:
+A hybrid network uses the hub-and-spoke architecture model to route traffic between Azure virtual networks and on-premises networks. The hub-and-spoke architecture has the following requirements:
-- Set **Use this virtual network's gateway or Route Server** when peering VNet-Hub to VNet-Spoke. In a hub-and-spoke network architecture, a gateway transit allows the spoke virtual networks to share the VPN gateway in the hub, instead of deploying VPN gateways in every spoke virtual network.
+- Set **Use this virtual network's gateway or Route Server** when you're peering **VNet-Hub** to **VNet-Spoke**. In a hub-and-spoke network architecture, a gateway transit allows the spoke virtual networks to share the VPN gateway in the hub, instead of deploying VPN gateways in every spoke virtual network.
- Additionally, routes to the gateway-connected virtual networks or on-premises networks automatically propagates to the routing tables for the peered virtual networks using the gateway transit. For more information, see [Configure VPN gateway transit for virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md).
+ Additionally, routes to the gateway-connected virtual networks or on-premises networks automatically propagate to the routing tables for the peered virtual networks via the gateway transit. For more information, see [Configure VPN gateway transit for virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md).
-- Set **Use the remote virtual network's gateways or Route Server** when you peer VNet-Spoke to VNet-Hub. If **Use the remote virtual network's gateways or Route Server** is set and **Use this virtual network's gateway or Route Server** on remote peering is also set, the spoke virtual network uses gateways of the remote virtual network for transit.
-- To route the spoke subnet traffic through the hub firewall, you can use a User Defined route (UDR) that points to the firewall with the **Virtual network gateway route propagation** option disabled. The **Virtual network gateway route propagation** disabled option prevents route distribution to the spoke subnets. This prevents learned routes from conflicting with your UDR. If you want to keep **Virtual network gateway route propagation** enabled, make sure to define specific routes to the firewall to override those that are published from on-premises over BGP.
-- Configure a UDR on the hub gateway subnet that points to the firewall IP address as the next hop to the spoke networks. No UDR is required on the Azure Firewall subnet, as it learns routes from BGP.
+- Set **Use the remote virtual network's gateways or Route Server** when you peer **VNet-Spoke** to **VNet-Hub**. If **Use the remote virtual network's gateways or Route Server** is set and **Use this virtual network's gateway or Route Server** on remote peering is also set, the spoke virtual network uses gateways of the remote virtual network for transit.
+- To route the spoke subnet traffic through the hub firewall, you can use a user-defined route (UDR) that points to the firewall with the **Virtual network gateway route propagation** option disabled. Disabling this option prevents route distribution to the spoke subnets, so learned routes can't conflict with your UDR. If you want to keep **Virtual network gateway route propagation** enabled, make sure that you define specific routes to the firewall to override routes that are published from on-premises over Border Gateway Protocol (BGP).
+- Configure a UDR on the hub gateway subnet that points to the firewall IP address as the next hop to the spoke networks. No UDR is required on the Azure Firewall subnet, because it learns routes from BGP.
-See the [Create Routes](#create-the-routes) section in this article to see how these routes are created.
+The [Create the routes](#create-the-routes) section later in this article shows how to create these routes.
->[!NOTE]
->Azure Firewall must have direct Internet connectivity. If your AzureFirewallSubnet learns a default route to your on-premises network via BGP, you must override this with a 0.0.0.0/0 UDR with the **NextHopType** value set as **Internet** to maintain direct Internet connectivity.
->
->Azure Firewall can be configured to support forced tunneling. For more information, see [Azure Firewall forced tunneling](forced-tunneling.md).
+Azure Firewall must have direct internet connectivity. If your **AzureFirewallSubnet** subnet learns a default route to your on-premises network via BGP, you must override it by using a 0.0.0.0/0 UDR with the `NextHopType` value set as `Internet` to maintain direct internet connectivity.
->[!NOTE]
->Traffic between directly peered VNets is routed directly even if a UDR points to Azure Firewall as the default gateway. To send subnet to subnet traffic to the firewall in this scenario, a UDR must contain the target subnet network prefix explicitly on both subnets.
+> [!NOTE]
+> You can configure Azure Firewall to support forced tunneling. For more information, see [Azure Firewall forced tunneling](forced-tunneling.md).
+
+Traffic between directly peered virtual networks is routed directly, even if a UDR points to Azure Firewall as the default gateway. To send subnet-to-subnet traffic to the firewall in this scenario, a UDR must contain the target subnet network prefix explicitly on both subnets.
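The behavior behind these UDR guidelines is longest-prefix-match route selection: among routes that cover a destination, the most specific prefix wins, which is why a specific UDR can override a learned 0.0.0.0/0 default. A deliberately simplified illustration with hypothetical prefixes (real route evaluation also checks that the destination falls inside each prefix — both routes here cover a spoke address such as 10.6.1.4):

```shell
# Simplified longest-prefix-match: pick the route with the longest prefix.
routes="0.0.0.0/0 10.6.0.0/16"
best=""; best_len=-1
for route in $routes; do
  prefix_len=${route#*/}              # text after the "/"
  if [ "$prefix_len" -gt "$best_len" ]; then
    best=$route; best_len=$prefix_len
  fi
done
echo "Selected route: $best"
```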
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
If you don't have an Azure subscription, create a [free account](https://azure.m
First, create the resource group to contain the resources:

1. Sign in to the [Azure portal](https://portal.azure.com).
-2. On the Azure portal home page, select **Resource groups** > **Create**.
-3. For **Subscription**, select your subscription.
-1. For **Resource group**, type **RG-fw-hybrid-test**.
-2. For **Region**, select a region. All resources that you create later must be in the same region.
-3. Select **Review + Create**.
-4. Select **Create**.
+1. On the Azure portal home page, select **Resource groups** > **Create**.
+1. For **Subscription**, select your subscription.
+1. For **Resource group**, enter **RG-fw-hybrid-test**.
+1. For **Region**, select a region. All resources that you create later must be in the same region.
+1. Select **Review + Create**.
+1. Select **Create**.
-Now, create the virtual network:
+Now, create the virtual network.
> [!NOTE]
-> The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size).
+> The size of the **AzureFirewallSubnet** subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size).
-1. From the Azure portal home page, select **Create a resource**.
-2. Search for **Virtual network** and select it.
-1. Select **Create**.
+1. On the Azure portal home page, select **Create a resource**.
+1. In the search box, enter **virtual network**.
+1. Select **Virtual network**, and then select **Create**.
1. For **Resource group**, select **RG-fw-hybrid-test**.
-1. For **Virtual network name**, type **VNet-hub**.
-1. For **Region**, select the region you used previously.
+1. For **Virtual network name**, enter **VNet-Hub**.
+1. For **Region**, select the region that you used previously.
1. Select **Next**.
1. On the **Security** tab, select **Next**.
-1. For **IPv4 Address space**, delete the default address and type **10.5.0.0/16**.
-1. Under **Subnets** delete the **default** subnet.
+1. For **IPv4 Address space**, delete the default address and enter **10.5.0.0/16**.
+1. Under **Subnets**, delete the default subnet.
1. Select **Add a subnet**.
-1. On the **Add a subnet** page, for **Subnet template** select **Azure Firewall**.
+1. On the **Add a subnet** page, for **Subnet template**, select **Azure Firewall**.
1. Select **Add**.
-Now create a second subnet for the gateway.
+Create a second subnet for the gateway:
1. Select **Add a subnet**.
1. For **Subnet template**, select **Virtual Network Gateway**.
-1. For **Starting address**, accept the default value 10.5.1.0.
-1. For **Subnet size**, accept the default value (/27).
+1. For **Starting address**, accept the default value of **10.5.1.0**.
+1. For **Subnet size**, accept the default value of **/27**.
1. Select **Add**.
1. Select **Review + create**.
1. Select **Create**.

## Create the spoke virtual network
-1. From the Azure portal home page, select **Create a resource**.
-2. Search for **Virtual network** and select it.
-1. Select **Create**.
-7. For **Resource group**, select **RG-fw-hybrid-test**.
-1. For **Name**, type **VNet-Spoke**.
-2. For **Region**, select the region you used previously.
+1. On the Azure portal home page, select **Create a resource**.
+1. In the search box, enter **virtual network**.
+1. Select **Virtual network**, and then select **Create**.
+1. For **Resource group**, select **RG-fw-hybrid-test**.
+1. For **Name**, enter **VNet-Spoke**.
+1. For **Region**, select the region that you used previously.
1. Select **Next**.
1. On the **Security** tab, select **Next**.
-1. For **IPv4 Address space**, delete the default address and type **10.6.0.0/16**.
-1. Under **Subnets** delete the **default** subnet.
+1. For **IPv4 Address space**, delete the default address and enter **10.6.0.0/16**.
+1. Under **Subnets**, delete the default subnet.
1. Select **Add a subnet**.
-1. For **Name**, type **SN-Workload**.
-1. For **Starting address**, accept the default value (10.6.0.0).
-1. For **Subnet size**, accept the default value (/24).
+1. For **Name**, enter **SN-Workload**.
+1. For **Starting address**, accept the default value of **10.6.0.0**.
+1. For **Subnet size**, accept the default value of **/24**.
1. Select **Add**.
1. Select **Review + create**.
1. Select **Create**.

## Create the on-premises virtual network
-1. From the Azure portal home page, select **Create a resource**.
-2. Search for **Virtual network** and select it.
-1. Select **Create**.
-7. For **Resource group**, select **RG-fw-hybrid-test**.
-1. For **Name**, type **VNet-OnPrem**.
-2. For **Region**, select the region you used previously.
+1. On the Azure portal home page, select **Create a resource**.
+1. In the search box, enter **virtual network**.
+1. Select **Virtual network**, and then select **Create**.
+1. For **Resource group**, select **RG-fw-hybrid-test**.
+1. For **Name**, enter **VNet-Onprem**.
+1. For **Region**, select the region that you used previously.
1. Select **Next**.
1. On the **Security** tab, select **Next**.
-1. For **IPv4 Address space**, delete the default address and type **192.168.0.0/16**.
-1. Under **Subnets** delete the **default** subnet.
+1. For **IPv4 Address space**, delete the default address and enter **192.168.0.0/16**.
+1. Under **Subnets**, delete the default subnet.
1. Select **Add a subnet**.
-1. For **Name**, type **SN-Corp**.
-1. For **Starting address**, accept the default value (192.168.0.0).
-1. For **Subnet size**, accept the default value (/24).
+1. For **Name**, enter **SN-Corp**.
+1. For **Starting address**, accept the default value of **192.168.0.0**.
+1. For **Subnet size**, accept the default value of **/24**.
1. Select **Add**.
-Now create a second subnet for the gateway.
+Now, create a second subnet for the gateway:
1. Select **Add a subnet**.
1. For **Subnet template**, select **Virtual Network Gateway**.
-1. For **Starting address**, accept the default value 192.168.1.0.
-1. For **Subnet size**, accept the default value (/27).
+1. For **Starting address**, accept the default value of **192.168.1.0**.
+1. For **Subnet size**, accept the default value of **/27**.
1. Select **Add**.
1. Select **Review + create**.
1. Select **Create**.

## Configure and deploy the firewall
-Now deploy the firewall into the firewall hub virtual network.
+Deploy the firewall into the firewall hub's virtual network:
-1. From the Azure portal home page, select **Create a resource**.
-2. Search for **Firewall** and select it.
-1. Select **Create**.
+1. On the Azure portal home page, select **Create a resource**.
+1. In the search box, enter **firewall**.
+1. Select **Firewall**, and then select **Create**.
1. On the **Create a Firewall** page, use the following table to configure the firewall:

 |Setting |Value |
 |---|---|
- |Subscription |\<your subscription\>|
- |Resource group |**RG-fw-hybrid-test** |
- |Name |**AzFW01**|
- |Region |\<the region you used before\>|
- |Firewall SKU |**Standard**|
- |Firewall management|**Use Firewall rules (classic) to manage this firewall**|
- |Choose a virtual network |**Use existing**:<br> **VNet-hub**|
- |Public IP address |Add new: <br>**fw-pip**. |
-
-5. Select **Review + create**.
-6. Review the summary, and then select **Create** to create the firewall.
-
- This takes a few minutes to deploy.
-7. After deployment completes, go to the **RG-fw-hybrid-test** resource group, and select the **AzFW01** firewall.
-8. Note the private IP address. You use it later when you create the default route.
+ |**Subscription**| Select your subscription.|
+ |**Resource group**|Enter **RG-fw-hybrid-test**. |
+ |**Name**|Enter **AzFW01**.|
+ |**Region**|Select the region that you used before.|
+ |**Firewall SKU** |Select **Standard**.|
+ |**Firewall management**|Select **Use Firewall rules (classic) to manage this firewall**.|
+ |**Choose a virtual network**|Select **Use existing** > **VNet-Hub**.|
+ |**Public IP address**|Select **Add new** > **fw-pip**. |
+
+1. Select **Review + create**.
+1. Review the summary, and then select **Create** to create the firewall.
+
+ The firewall takes a few minutes to deploy.
+1. After deployment finishes, go to the **RG-fw-hybrid-test** resource group and select the **AzFW01** firewall.
+1. Note the private IP address. You use it later when you create the default route.
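If you prefer scripting, the portal steps above can also be sketched in Azure PowerShell. This is a hedged sketch, not part of the tutorial; it assumes the names that this article uses (**RG-fw-hybrid-test**, **VNet-Hub**) and that the hub virtual network already exists:

```azurepowershell-interactive
# Sketch only: resource names follow this tutorial's conventions.
$vnet = Get-AzVirtualNetwork -ResourceGroupName RG-fw-hybrid-test -Name VNet-Hub
$pip  = New-AzPublicIpAddress -Name fw-pip -ResourceGroupName RG-fw-hybrid-test `
          -Location $vnet.Location -AllocationMethod Static -Sku Standard
$fw   = New-AzFirewall -Name AzFW01 -ResourceGroupName RG-fw-hybrid-test `
          -Location $vnet.Location -VirtualNetwork $vnet -PublicIpAddress $pip

# The private IP address that you use later when you create the default route:
$fw.IpConfigurations[0].PrivateIPAddress
```

The companion article on deploying with PowerShell covers this approach in full.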
### Configure network rules
-First, add a network rule to allow web traffic.
+First, add a network rule to allow web traffic:
-1. On the **AzFW01** page, Select **Rules (classic)**.
-2. Select the **Network rule collection** tab.
-3. Select **Add network rule collection**.
-4. For **Name**, type **RCNet01**.
-5. For **Priority**, type **100**.
-6. For **Rule collection action**, select **Allow**.
-6. Under **Rules IP Addresses**, for **Name**, type **AllowWeb**.
-7. For **Protocol**, select **TCP**.
+1. On the **AzFW01** page, select **Rules (classic)**.
+1. Select the **Network rule collection** tab.
+1. Select **Add network rule collection**.
+1. For **Name**, enter **RCNet01**.
+1. For **Priority**, enter **100**.
+1. For **Rule collection action**, select **Allow**.
+1. Under **Rules IP Addresses**, for **Name**, enter **AllowWeb**.
+1. For **Protocol**, select **TCP**.
1. For **Source type**, select **IP address**.
-1. For **Source**, type **192.168.0.0/24**.
+1. For **Source**, enter **192.168.0.0/24**.
1. For **Destination type**, select **IP address**.
-1. For **Destination Address**, type **10.6.0.0/16**.
-1. For **Destination Ports**, type **80**.
---
-Now add a rule to allow RDP traffic.
+1. For **Destination Address**, enter **10.6.0.0/16**.
+1. For **Destination Ports**, enter **80**.
-On the second rule row, type the following information:
+Now, add a rule to allow RDP traffic. On the second rule row, enter the following information:
-1. **Name**, type **AllowRDP**.
+1. For **Name**, enter **AllowRDP**.
1. For **Protocol**, select **TCP**.
1. For **Source type**, select **IP address**.
-1. For **Source**, type **192.168.0.0/24**.
+1. For **Source**, enter **192.168.0.0/24**.
1. For **Destination type**, select **IP address**.
-1. For **Destination Address**, type **10.6.0.0/16**
-1. For **Destination Ports**, type **3389**.
+1. For **Destination Address**, enter **10.6.0.0/16**.
+1. For **Destination Ports**, enter **3389**.
1. Select **Add**.

## Create and connect the VPN gateways
The hub and on-premises virtual networks are connected via VPN gateways.
### Create a VPN gateway for the hub virtual network
-Now create the VPN gateway for the hub virtual network. Network-to-network configurations require a RouteBased VpnType. Creating a VPN gateway can often take 45 minutes or more, depending on the selected VPN gateway SKU.
-
-1. From the Azure portal home page, select **Create a resource**.
-2. In the search text box, type **virtual network gateway**.
-3. Select **Virtual network gateway**, and select **Create**.
-4. For **Name**, type **GW-hub**.
-5. For **Region**, select the same region that you used previously.
-6. For **Gateway type**, select **VPN**.
-7. For **VPN type**, select **Route-based**.
-8. For **SKU**, select **Basic**.
-9. For **Virtual network**, select **VNet-hub**.
-10. For **Public IP address**, select **Create new**, and type **VNet-hub-GW-pip** for the name.
+Create the VPN gateway for the hub virtual network. Network-to-network configurations require a route-based VPN type. Creating a VPN gateway can often take 45 minutes or more, depending on the SKU that you select.
+
+1. On the Azure portal home page, select **Create a resource**.
+1. In the search box, enter **virtual network gateway**.
+1. Select **Virtual network gateway**, and then select **Create**.
+1. For **Name**, enter **GW-hub**.
+1. For **Region**, select the same region that you used previously.
+1. For **Gateway type**, select **VPN**.
+1. For **VPN type**, select **Route-based**.
+1. For **SKU**, select **Basic**.
+1. For **Virtual network**, select **VNet-Hub**.
+1. For **Public IP address**, select **Create new** and enter **VNet-Hub-GW-pip** for the name.
1. For **Enable active-active mode**, select **Disabled**.
-1. Accept the remaining defaults and then select **Review + create**.
-1. Review the configuration, then select **Create**.
+1. Accept the remaining defaults, and then select **Review + create**.
+1. Review the configuration, and then select **Create**.
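The gateway creation above can be sketched in Azure PowerShell as follows. This is a hedged example that assumes **VNet-Hub** and its **GatewaySubnet** exist from the earlier steps; the Basic SKU requires a dynamically allocated public IP address:

```azurepowershell-interactive
# Sketch only: assumes VNet-Hub and its GatewaySubnet from earlier steps.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName RG-fw-hybrid-test -Name VNet-Hub
$gwpip  = New-AzPublicIpAddress -Name VNet-Hub-GW-pip -ResourceGroupName RG-fw-hybrid-test `
            -Location $vnet.Location -AllocationMethod Dynamic
$subnet = Get-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet
$ipconf = New-AzVirtualNetworkGatewayIpConfig -Name gwipconf `
            -SubnetId $subnet.Id -PublicIpAddressId $gwpip.Id

# Network-to-network configurations require -VpnType RouteBased.
New-AzVirtualNetworkGateway -Name GW-hub -ResourceGroupName RG-fw-hybrid-test `
  -Location $vnet.Location -IpConfigurations $ipconf `
  -GatewayType Vpn -VpnType RouteBased -GatewaySku Basic
```

The on-premises gateway in the next section follows the same pattern with the **VNet-Onprem** names.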
### Create a VPN gateway for the on-premises virtual network
-Now create the VPN gateway for the on-premises virtual network. Network-to-network configurations require a RouteBased VpnType. Creating a VPN gateway can often take 45 minutes or more, depending on the selected VPN gateway SKU.
-
-1. From the Azure portal home page, select **Create a resource**.
-2. In the search text box, type **virtual network gateway** and press **Enter**.
-3. Select **Virtual network gateway**, and select **Create**.
-4. For **Name**, type **GW-Onprem**.
-5. For **Region**, select the same region that you used previously.
-6. For **Gateway type**, select **VPN**.
-7. For **VPN type**, select **Route-based**.
-8. For **SKU**, select **Basic**.
-9. For **Virtual network**, select **VNet-Onprem**.
-10. For **Public IP address**, select **Create new**, and type **VNet-Onprem-GW-pip** for the name.
+Create the VPN gateway for the on-premises virtual network. Network-to-network configurations require a route-based VPN type. Creating a VPN gateway can often take 45 minutes or more, depending on the SKU that you select.
+
+1. On the Azure portal home page, select **Create a resource**.
+1. In the search box, enter **virtual network gateway**.
+1. Select **Virtual network gateway**, and then select **Create**.
+1. For **Name**, enter **GW-Onprem**.
+1. For **Region**, select the same region that you used previously.
+1. For **Gateway type**, select **VPN**.
+1. For **VPN type**, select **Route-based**.
+1. For **SKU**, select **Basic**.
+1. For **Virtual network**, select **VNet-Onprem**.
+1. For **Public IP address**, select **Create new** and enter **VNet-Onprem-GW-pip** for the name.
1. For **Enable active-active mode**, select **Disabled**.
-1. Accept the remaining defaults and then select **Review + create**.
-1. Review the configuration, then select **Create**.
+1. Accept the remaining defaults, and then select **Review + create**.
+1. Review the configuration, and then select **Create**.
### Create the VPN connections

Now you can create the VPN connections between the hub and on-premises gateways.
-In this step, you create the connection from the hub virtual network to the on-premises virtual network. You see a shared key referenced in the examples. You can use your own values for the shared key. The important thing is that the shared key must match for both connections. Creating a connection can take a short while to complete.
+In the following steps, you create the connection from the hub virtual network to the on-premises virtual network. The examples show a shared key, but you can use your own value for the shared key. The important thing is that the shared key must match for both connections. Creating a connection can take a short while to complete.
1. Open the **RG-fw-hybrid-test** resource group and select the **GW-hub** gateway.
-2. Select **Connections** in the left column.
-3. Select **Add**.
-4. For the connection name, type **Hub-to-Onprem**.
-5. Select **VNet-to-VNet** for **Connection type**.
+1. Select **Connections** in the left column.
+1. Select **Add**.
+1. For the connection name, enter **Hub-to-Onprem**.
+1. For **Connection type**, select **VNet-to-VNet**.
1. Select **Next**.
-1. For the **First virtual network gateway**, select **GW-hub**.
-1. For the **Second virtual network gateway**, select **GW-Onprem**.
-1. For **Shared key (PSK)**, type **AzureA1b2C3**.
+1. For **First virtual network gateway**, select **GW-hub**.
+1. For **Second virtual network gateway**, select **GW-Onprem**.
+1. For **Shared key (PSK)**, enter **AzureA1b2C3**.
1. Select **Review + Create**.
1. Select **Create**.
-Create the on-premises to hub virtual network connection. This step is similar to the previous one, except you create the connection from VNet-Onprem to VNet-hub. Make sure the shared keys match. The connection will be established after a few minutes.
+Create the virtual network connection between on-premises and the hub. The following steps are similar to the previous ones, except that you create the connection from **VNet-Onprem** to **VNet-Hub**. Make sure that the shared keys match. The connection is established after a few minutes.
1. Open the **RG-fw-hybrid-test** resource group and select the **GW-Onprem** gateway.
-2. Select **Connections** in the left column.
-3. Select **Add**.
-4. For the connection name, type **Onprem-to-Hub**.
-5. Select **VNet-to-VNet** for **Connection type**.
-1. Select **Next : Settings**.
-1. For the **First virtual network gateway**, select **GW-Onprem**.
-1. For the **Second virtual network gateway**, select **GW-hub**.
-1. For **Shared key (PSK)**, type **AzureA1b2C3**.
+1. Select **Connections** in the left column.
+1. Select **Add**.
+1. For the connection name, enter **Onprem-to-Hub**.
+1. For **Connection type**, select **VNet-to-VNet**.
+1. Select **Next: Settings**.
+1. For **First virtual network gateway**, select **GW-Onprem**.
+1. For **Second virtual network gateway**, select **GW-hub**.
+1. For **Shared key (PSK)**, enter **AzureA1b2C3**.
1. Select **Review + Create**.
1. Select **Create**.
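The two connections above can be sketched in Azure PowerShell. This is a hedged example that assumes the gateway names from this tutorial; note that the same shared key is used in both directions, which is the requirement the portal steps call out:

```azurepowershell-interactive
# Sketch only: assumes GW-hub and GW-Onprem already exist.
$gwHub    = Get-AzVirtualNetworkGateway -Name GW-hub -ResourceGroupName RG-fw-hybrid-test
$gwOnprem = Get-AzVirtualNetworkGateway -Name GW-Onprem -ResourceGroupName RG-fw-hybrid-test

# The shared key must match for both connections.
New-AzVirtualNetworkGatewayConnection -Name Hub-to-Onprem -ResourceGroupName RG-fw-hybrid-test `
  -Location $gwHub.Location -VirtualNetworkGateway1 $gwHub -VirtualNetworkGateway2 $gwOnprem `
  -ConnectionType Vnet2Vnet -SharedKey AzureA1b2C3
New-AzVirtualNetworkGatewayConnection -Name Onprem-to-Hub -ResourceGroupName RG-fw-hybrid-test `
  -Location $gwOnprem.Location -VirtualNetworkGateway1 $gwOnprem -VirtualNetworkGateway2 $gwHub `
  -ConnectionType Vnet2Vnet -SharedKey AzureA1b2C3
```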
-#### Verify the connection
+### Verify the connections
-After about five minutes or so, the status of both connections should be **Connected**.
+After about five minutes, the status of both connections should be **Connected**.
-![Gateway connections](media/tutorial-hybrid-portal/gateway-connections.png)
+![Screenshot that shows gateway connections.](media/tutorial-hybrid-portal/gateway-connections.png)
## Peer the hub and spoke virtual networks
-Now peer the hub and spoke virtual networks.
+Now, peer the hub and spoke virtual networks:
+
+1. Open the **RG-fw-hybrid-test** resource group and select the **VNet-Hub** virtual network.
+1. In the left column, select **Peerings**.
+1. Select **Add**.
+1. Under **This virtual network**:
-1. Open the **RG-fw-hybrid-test** resource group and select the **VNet-hub** virtual network.
-2. In the left column, select **Peerings**.
-3. Select **Add**.
-4. Under **This virtual network**:
-
-
|Setting name |Setting |
|---|---|
- |Peering link name| HubtoSpoke|
- |Allow traffic to remote virtual network| Selected |
- |Allow traffic forwarded from remote virtual network (allow gateway transit) | Selected |
- |Use remote virtual network gateway or route server | **Not** selected |
-
-5. Under **Remote virtual network**:
+ |**Peering link name**|Enter **HubtoSpoke**.|
+ |**Traffic to remote virtual network**|Select **Allow**.|
+ |**Traffic forwarded from remote virtual network**|Select **Allow**.|
+ |**Virtual network gateway**|Select **Use this virtual network's gateway**.|
+
+1. Under **Remote virtual network**:
|Setting name |Value |
|---|---|
- |Peering link name | SpoketoHub|
- |Virtual network deployment model| Resource Manager|
- |Subscription|\<your subscription\>|
- |Virtual network| VNet-Spoke
- |Allow Traffic to current virtual network | Selected |
- |Allow traffic forwarded from current virtual network (allow gateway transit) | Selected |
- |Use current virtual network gateway or route server | Selected |
+ |**Peering link name**|Enter **SpoketoHub**.|
+ |**Virtual network deployment model**|Select **Resource manager**.|
+ |**Subscription**|Select your subscription.|
+ |**Virtual network**|Select **VNet-Spoke**.|
+ |**Traffic to remote virtual network**|Select **Allow**.|
+ |**Traffic forwarded from remote virtual network**|Select **Allow**.|
+ |**Virtual network gateway**|Select **Use the remote virtual network's gateway**.|
-5. Select **Add**.
+1. Select **Add**.
+
+The following screenshot shows the settings to use when you peer hub and spoke virtual networks:
- :::image type="content" source="media/tutorial-hybrid-portal/firewall-peering.png" alt-text="Vnet peering":::
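The peering settings above map to two `Add-AzVirtualNetworkPeering` calls. This hedged sketch assumes the virtual network names from this tutorial; the hub side offers gateway transit, and the spoke side consumes the remote gateway:

```azurepowershell-interactive
# Sketch only: assumes VNet-Hub and VNet-Spoke from earlier steps.
$hub   = Get-AzVirtualNetwork -ResourceGroupName RG-fw-hybrid-test -Name VNet-Hub
$spoke = Get-AzVirtualNetwork -ResourceGroupName RG-fw-hybrid-test -Name VNet-Spoke

# Hub side: allow forwarded traffic and offer gateway transit.
Add-AzVirtualNetworkPeering -Name HubtoSpoke -VirtualNetwork $hub `
  -RemoteVirtualNetworkId $spoke.Id -AllowForwardedTraffic -AllowGatewayTransit

# Spoke side: use the hub's gateway for cross-premises traffic.
Add-AzVirtualNetworkPeering -Name SpoketoHub -VirtualNetwork $spoke `
  -RemoteVirtualNetworkId $hub.Id -AllowForwardedTraffic -UseRemoteGateways
```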
## Create the routes
-Next, create a couple routes:
+In the following steps, you create these routes:
- A route from the hub gateway subnet to the spoke subnet through the firewall IP address
- A default route from the spoke subnet through the firewall IP address
-1. From the Azure portal home page, select **Create a resource**.
-2. In the search text box, type **route table** and press **Enter**.
-3. Select **Route table**.
-4. Select **Create**.
-6. Select the **RG-fw-hybrid-test** for the resource group.
-8. For **Region**, select the same location that you used previously.
-1. For the name, type **UDR-Hub-Spoke**.
-9. Select **Review + Create**.
-10. Select **Create**.
-11. After the route table is created, select it to open the route table page.
-12. Select **Routes** in the left column.
-13. Select **Add**.
-14. For the route name, type **ToSpoke**.
+To create the routes:
+
+1. On the Azure portal home page, select **Create a resource**.
+1. In the search box, enter **route table**.
+1. Select **Route table**, and then select **Create**.
+1. For the resource group, select **RG-fw-hybrid-test**.
+1. For **Region**, select the same location that you used previously.
+1. For the name, enter **UDR-Hub-Spoke**.
+1. Select **Review + Create**.
+1. Select **Create**.
+1. After the route table is created, select it to open the route table page.
+1. Select **Routes** in the left column.
+1. Select **Add**.
+1. For the route name, enter **ToSpoke**.
1. For **Destination type**, select **IP addresses**.
-1. For the **Destination IP addresses/CIDR ranges**, type **10.6.0.0/16**.
-1. For next hop type, select **Virtual appliance**.
-1. For next hop address, type the firewall's private IP address that you noted earlier.
+1. For **Destination IP addresses/CIDR ranges**, enter **10.6.0.0/16**.
+1. For the next hop type, select **Virtual appliance**.
+1. For the next hop address, enter the firewall's private IP address that you noted earlier.
1. Select **Add**.
-Now associate the route to the subnet.
+Now, associate the route to the subnet:
1. On the **UDR-Hub-Spoke - Routes** page, select **Subnets**.
-2. Select **Associate**.
-3. Under **Virtual network**, select **VNet-hub**.
+1. Select **Associate**.
+1. Under **Virtual network**, select **VNet-Hub**.
1. Under **Subnet**, select **GatewaySubnet**.
-2. Select **OK**.
-
-Now create the default route from the spoke subnet.
-
-1. From the Azure portal home page, select **Create a resource**.
-2. In the search text box, type **route table** and press **Enter**.
-3. Select **Route table**.
-5. Select **Create**.
-7. Select the **RG-fw-hybrid-test** for the resource group.
-8. For **Region**, select the same location that you used previously.
-1. For the name, type **UDR-DG**.
-4. For **Propagate gateway route**, select **No**.
-5. Select **Review + Create**.
-6. Select **Create**.
-7. After the route table is created, select it to open the route table page.
-8. Select **Routes** in the left column.
-9. Select **Add**.
-10. For the route name, type **ToHub**.
+1. Select **OK**.
+
+Create the default route from the spoke subnet:
+
+1. On the Azure portal home page, select **Create a resource**.
+1. In the search box, enter **route table**.
+1. Select **Route table**, and then select **Create**.
+1. For the resource group, select **RG-fw-hybrid-test**.
+1. For **Region**, select the same location that you used previously.
+1. For the name, enter **UDR-DG**.
+1. For **Propagate gateway route**, select **No**.
+1. Select **Review + Create**.
+1. Select **Create**.
+1. After the route table is created, select it to open the route table page.
+1. Select **Routes** in the left column.
+1. Select **Add**.
+1. For the route name, enter **ToHub**.
1. For **Destination type**, select **IP addresses**.
-1. For the **Destination IP addresses/CIDR ranges**, type **0.0.0.0/0**.
-1. For next hop type, select **Virtual appliance**.
-1. For next hop address, type the firewall's private IP address that you noted earlier.
+1. For **Destination IP addresses/CIDR ranges**, enter **0.0.0.0/0**.
+1. For the next hop type, select **Virtual appliance**.
+1. For the next hop address, enter the firewall's private IP address that you noted earlier.
1. Select **Add**.
-Now associate the route to the subnet.
+Associate the route to the subnet:
1. On the **UDR-DG - Routes** page, select **Subnets**.
-2. Select **Associate**.
-3. Under **Virtual network**, select **VNet-spoke**.
+1. Select **Associate**.
+1. Under **Virtual network**, select **VNet-Spoke**.
1. Under **Subnet**, select **SN-Workload**.
-2. Select **OK**.
+1. Select **OK**.
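Both route tables and their subnet associations can be sketched in Azure PowerShell. This is a hedged example; `$fwPrivateIp` and `$location` are placeholders for the firewall private IP address that you noted earlier and the region that you used throughout:

```azurepowershell-interactive
# Sketch only: $fwPrivateIp and $location are placeholders for your values.
$routeTableHub = New-AzRouteTable -Name UDR-Hub-Spoke -ResourceGroupName RG-fw-hybrid-test `
                   -Location $location
Add-AzRouteConfig -RouteTable $routeTableHub -Name ToSpoke -AddressPrefix 10.6.0.0/16 `
  -NextHopType VirtualAppliance -NextHopIpAddress $fwPrivateIp | Set-AzRouteTable

# The spoke's default route table disables gateway route propagation.
$routeTableDG = New-AzRouteTable -Name UDR-DG -ResourceGroupName RG-fw-hybrid-test `
                  -Location $location -DisableBgpRoutePropagation
Add-AzRouteConfig -RouteTable $routeTableDG -Name ToHub -AddressPrefix 0.0.0.0/0 `
  -NextHopType VirtualAppliance -NextHopIpAddress $fwPrivateIp | Set-AzRouteTable

# Associate the route tables with the GatewaySubnet and SN-Workload subnets.
$hub = Get-AzVirtualNetwork -ResourceGroupName RG-fw-hybrid-test -Name VNet-Hub
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $hub -Name GatewaySubnet `
  -AddressPrefix 10.5.1.0/27 -RouteTable $routeTableHub | Set-AzVirtualNetwork
$spoke = Get-AzVirtualNetwork -ResourceGroupName RG-fw-hybrid-test -Name VNet-Spoke
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $spoke -Name SN-Workload `
  -AddressPrefix 10.6.0.0/24 -RouteTable $routeTableDG | Set-AzVirtualNetwork
```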
## Create virtual machines
-Now create the spoke workload and on-premises virtual machines, and place them in the appropriate subnets.
+Create the spoke workload and on-premises virtual machines, and place them in the appropriate subnets.
### Create the workload virtual machine
-Create a virtual machine in the spoke virtual network, running IIS, with no public IP address.
-
-1. From the Azure portal home page, select **Create a resource**.
-2. Under **Popular Marketplace products**, select **Windows Server 2019 Datacenter**.
-3. Enter these values for the virtual machine:
- - **Resource group** - Select **RG-fw-hybrid-test**.
- - **Virtual machine name**: *VM-Spoke-01*.
- - **Region** - Same region that you're used previously.
- - **User name**: \<type a user name\>.
- - **Password**: \<type a password\>
-4. For **Public inbound ports**, select **Allow selected ports**, and then select **HTTP (80)**, and **RDP (3389)**
-4. Select **Next:Disks**.
-5. Accept the defaults and select **Next: Networking**.
-6. Select **VNet-Spoke** for the virtual network and the subnet is **SN-Workload**.
-7. For **Public IP**, select **None**.
-9. Select **Next:Management**.
-1. Select **Next : Monitoring**.
-1. For **Boot diagnostics**, Select **Disable**.
+Create a virtual machine in the spoke virtual network that runs Internet Information Services (IIS) and has no public IP address:
+
+1. On the Azure portal home page, select **Create a resource**.
+1. Under **Popular Marketplace products**, select **Windows Server 2019 Datacenter**.
+1. Enter these values for the virtual machine:
+ - **Resource group**: Select **RG-fw-hybrid-test**.
+ - **Virtual machine name**: Enter **VM-Spoke-01**.
+ - **Region**: Select the same region that you used previously.
+ - **User name**: Enter a username.
+ - **Password**: Enter a password.
+1. For **Public inbound ports**, select **Allow selected ports**, and then select **HTTP (80)** and **RDP (3389)**.
+1. Select **Next: Disks**.
+1. Accept the defaults and select **Next: Networking**.
+1. For the virtual network, select **VNet-Spoke**. The subnet is **SN-Workload**.
+1. For **Public IP**, select **None**.
+1. Select **Next: Management**.
+1. Select **Next: Monitoring**.
+1. For **Boot diagnostics**, select **Disable**.
1. Select **Review+Create**, review the settings on the summary page, and then select **Create**.

### Install IIS
-1. From the Azure portal, open the Cloud Shell and make sure that it's set to **PowerShell**.
-2. Run the following command to install IIS on the virtual machine and change the location if necessary:
+1. On the Azure portal, open Azure Cloud Shell and make sure that it's set to **PowerShell**.
+1. Run the following command to install IIS on the virtual machine, and change the location if necessary:
```azurepowershell-interactive
Set-AzVMExtension `
    -ResourceGroupName RG-fw-hybrid-test `
    -ExtensionName IIS `
    -VMName VM-Spoke-01 `
    -Publisher Microsoft.Compute `
    -ExtensionType CustomScriptExtension `
    -TypeHandlerVersion 1.4 `
    -SettingString '{"commandToExecute":"powershell Add-WindowsFeature Web-Server"}' `
    -Location EastUS
```
### Create the on-premises virtual machine
-This is a virtual machine that you use to connect using Remote Desktop to the public IP address. From there, you then connect to the spoke server through the firewall.
-
-1. From the Azure portal home page, select **Create a resource**.
-2. Under **Popular**, select **Windows Server 2019 Datacenter**.
-3. Enter these values for the virtual machine:
- - **Resource group** - Select existing, and then select **RG-fw-hybrid-test**.
- - **Virtual machine name** - *VM-Onprem*.
- - **Region** - Same region that you're used previously.
- - **User name**: \<type a user name\>.
- - **Password**: \<type a user password\>.
-7. For **Public inbound ports**, select **Allow selected ports**, and then select **RDP (3389)**
-4. Select **Next:Disks**.
-5. Accept the defaults and select **Next:Networking**.
-6. Select **VNet-Onprem** for virtual network and the subnet is **SN-Corp**.
-8. Select **Next:Management**.
-1. Select **Next : Monitoring**.
-1. For **Boot diagnostics**, Select **Disable**.
+Create a virtual machine that you use to connect via remote access to the public IP address. From there, you can connect to the spoke server through the firewall.
+
+1. On the Azure portal home page, select **Create a resource**.
+1. Under **Popular**, select **Windows Server 2019 Datacenter**.
+1. Enter these values for the virtual machine:
+ - **Resource group**: Select **Existing**, and then select **RG-fw-hybrid-test**.
+ - **Virtual machine name**: Enter **VM-Onprem**.
+ - **Region**: Select the same region that you used previously.
+ - **User name**: Enter a username.
+ - **Password**: Enter a user password.
+1. For **Public inbound ports**, select **Allow selected ports**, and then select **RDP (3389)**.
+1. Select **Next: Disks**.
+1. Accept the defaults and select **Next: Networking**.
+1. For the virtual network, select **VNet-Onprem**. The subnet is **SN-Corp**.
+1. Select **Next: Management**.
+1. Select **Next: Monitoring**.
+1. For **Boot diagnostics**, select **Disable**.
1. Select **Review+Create**, review the settings on the summary page, and then select **Create**.

[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]

## Test the firewall
-1. First, note the private IP address for **VM-spoke-01** virtual machine.
+1. Note the private IP address for the **VM-Spoke-01** virtual machine.
-2. From the Azure portal, connect to the **VM-Onprem** virtual machine.
-<!2. Open a Windows PowerShell command prompt on **VM-Onprem**, and ping the private IP for **VM-spoke-01**.
+1. On the Azure portal, connect to the **VM-Onprem** virtual machine.
- You should get a reply.>
-3. Open a web browser on **VM-Onprem**, and browse to http://\<VM-spoke-01 private IP\>.
+1. Open a web browser on **VM-Onprem**, and browse to `http://<VM-Spoke-01 private IP>`.
- You should see the **VM-spoke-01** web page:
- ![VM-Spoke-01 web page](media/tutorial-hybrid-portal/VM-Spoke-01-web.png)
+ The **VM-Spoke-01** webpage should open.
-4. From the **VM-Onprem** virtual machine, open a remote desktop to **VM-spoke-01** at the private IP address.
+ ![Screenshot that shows the webpage for the spoke virtual machine.](media/tutorial-hybrid-portal/VM-Spoke-01-web.png)
- Your connection should succeed, and you should be able to sign in.
+1. From the **VM-Onprem** virtual machine, open a remote access connection to **VM-Spoke-01** at the private IP address.
-So now you've verified that the firewall rules are working:
+ Your connection should succeed, and you should be able to sign in.
-<!- You can ping the server on the spoke VNet.>
-- You can browse web server on the spoke virtual network.
-- You can connect to the server on the spoke virtual network using RDP.
+Now that you've verified that the firewall rules are working, you can:
+- Browse to the web server on the spoke virtual network.
+- Connect to the server on the spoke virtual network by using RDP.
-Next, change the firewall network rule collection action to **Deny** to verify that the firewall rules work as expected.
+Next, change the action for the collection of firewall network rules to **Deny**, to verify that the firewall rules work as expected:
1. Select the **AzFW01** firewall.
2. Select **Rules (classic)**.
-3. Select the **Network rule collection** tab and select the **RCNet01** rule collection.
+3. Select the **Network rule collection** tab, and select the **RCNet01** rule collection.
4. For **Action**, select **Deny**. 5. Select **Save**.
-Close any existing remote desktops before testing the changed rules. Now run the tests again. They should all fail this time.
+Close any existing remote access connections. Run the tests again to test the changed rules. They should all fail this time.
## Clean up resources
-You can keep your firewall resources for further testing, or if no longer needed, delete the **RG-fw-hybrid-test** resource group to delete all firewall-related resources.
+You can keep your firewall resources for further testing. If you no longer need them, delete the **RG-fw-hybrid-test** resource group to delete all firewall-related resources.
## Next steps
-Next, you can monitor the Azure Firewall logs.
-
-[Tutorial: Monitor Azure Firewall logs](./firewall-diagnostics.md)
+[Monitor Azure Firewall logs](./firewall-diagnostics.md)
firewall Tutorial Hybrid Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-hybrid-ps.md
- Title: Deploy & configure Azure Firewall in hybrid network using PowerShell
-description: In this article, you learn how to deploy and configure Azure Firewall using Azure PowerShell.
+ Title: Deploy and configure Azure Firewall in a hybrid network by using PowerShell
+description: In this article, you learn how to deploy and configure Azure Firewall by using Azure PowerShell.
#Customer intent: As an administrator, I want to control network access from an on-premises network to an Azure virtual network.
-# Deploy and configure Azure Firewall in a hybrid network using Azure PowerShell
+
+# Deploy and configure Azure Firewall in a hybrid network by using Azure PowerShell
When you connect your on-premises network to an Azure virtual network to create a hybrid network, the ability to control access to your Azure network resources is an important part of an overall security plan.
-You can use Azure Firewall to control network access in a hybrid network using rules that define allowed and denied network traffic.
+You can use Azure Firewall to control network access in a hybrid network by using rules that define allowed and denied network traffic.
For this article, you create three virtual networks:

-- **VNet-Hub** - the firewall is in this virtual network.
-- **VNet-Spoke** - the spoke virtual network represents the workload located on Azure.
-- **VNet-Onprem** - The on-premises virtual network represents an on-premises network. In an actual deployment, it can be connected by either a VPN or ExpressRoute connection. For simplicity, this article uses a VPN gateway connection, and an Azure-located virtual network is used to represent an on-premises network.
-![Firewall in a hybrid network](media/tutorial-hybrid-ps/hybrid-network-firewall.png)
+- **VNet-Hub**: The firewall is in this virtual network.
+- **VNet-Spoke**: The spoke virtual network represents the workload located on Azure.
+- **VNet-Onprem**: The on-premises virtual network represents an on-premises network. In an actual deployment, you can connect to it by using either a virtual private network (VPN) connection or an Azure ExpressRoute connection. For simplicity, this article uses a VPN gateway connection, and an Azure-located virtual network represents an on-premises network.
-In this article, you learn how to:
+![Diagram that shows a firewall in a hybrid network.](media/tutorial-hybrid-ps/hybrid-network-firewall.png)
-* Declare the variables
-* Create the firewall hub virtual network
-* Create the spoke virtual network
-* Create the on-premises virtual network
-* Configure and deploy the firewall
-* Create and connect the VPN gateways
-* Peer the hub and spoke virtual networks
-* Create the routes
-* Create the virtual machines
-* Test the firewall
-
-If you want to use Azure portal instead to complete this tutorial, see [Tutorial: Deploy and configure Azure Firewall in a hybrid network using the Azure portal](tutorial-hybrid-portal.md).
+If you want to use the Azure portal instead to complete the procedures in this article, see [Deploy and configure Azure Firewall in a hybrid network by using the Azure portal](tutorial-hybrid-portal.md).
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]

## Prerequisites
-This article requires that you run PowerShell locally. You must have the Azure PowerShell module installed. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). After you verify the PowerShell version, run `Login-AzAccount` to create a connection with Azure.
+This article requires that you run PowerShell locally. You must have the Azure PowerShell module installed. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell). After you verify the PowerShell version, run `Login-AzAccount` to create a connection with Azure.
There are three key requirements for this scenario to work correctly:

-- A User Defined Route (UDR) on the spoke subnet that points to the Azure Firewall IP address as the default gateway. Virtual network gateway route propagation must be **Disabled** on this route table.
+- A user-defined route (UDR) on the spoke subnet that points to the Azure Firewall IP address as the default gateway. Virtual network gateway route propagation must be *disabled* on this route table.
- A UDR on the hub gateway subnet must point to the firewall IP address as the next hop to the spoke networks.
- No UDR is required on the Azure Firewall subnet, as it learns routes from BGP.
-- Make sure to set **AllowGatewayTransit** when peering VNet-Hub to VNet-Spoke and **UseRemoteGateways** when peering VNet-Spoke to VNet-Hub.
+ No UDR is required on the Azure Firewall subnet, because it learns routes from Border Gateway Protocol (BGP).
+- Be sure to set `AllowGatewayTransit` when you're peering **VNet-Hub** to **VNet-Spoke**. Set `UseRemoteGateways` when you're peering **VNet-Spoke** to **VNet-Hub**.
-See the [Create Routes](#create-the-routes) section in this article to see how these routes are created.
+The [Create the routes](#create-the-routes) section later in this article shows how to create these routes.
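The two peering settings called out above can be sketched as follows. This is a hedged sketch, not the full peering commands: it assumes `$VNetHub` and `$VNetSpoke` were already retrieved with `Get-AzVirtualNetwork`, and the `-AllowForwardedTraffic` switch is an assumption rather than a stated requirement.

```azurepowershell
# Hub side: let the spoke use the hub's gateway
Add-AzVirtualNetworkPeering -Name "HubtoSpoke" -VirtualNetwork $VNetHub `
  -RemoteVirtualNetworkId $VNetSpoke.Id -AllowGatewayTransit

# Spoke side: use the hub's remote gateway (-AllowForwardedTraffic is assumed)
Add-AzVirtualNetworkPeering -Name "SpoketoHub" -VirtualNetwork $VNetSpoke `
  -RemoteVirtualNetworkId $VNetHub.Id -UseRemoteGateways -AllowForwardedTraffic
```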
>[!NOTE]
->Azure Firewall must have direct Internet connectivity. If your AzureFirewallSubnet learns a default route to your on-premises network via BGP, you must configure Azure Firewall in forced tunneling mode. If this is an existing Azure Firewall, which cannot be reconfigured in forced tunneling mode, it is recommended to add a 0.0.0.0/0 UDR on the AzureFirewallSubnet with the **NextHopType** value set as **Internet** to maintain direct Internet connectivity.
+>Azure Firewall must have direct internet connectivity. If your **AzureFirewallSubnet** subnet learns a default route to your on-premises network via BGP, you must configure Azure Firewall in forced tunneling mode. If this is an existing Azure Firewall instance that can't be reconfigured in forced tunneling mode, we recommend that you add a 0.0.0.0/0 UDR on the **AzureFirewallSubnet** subnet with the `NextHopType` value set as `Internet` to maintain direct internet connectivity.
> >For more information, see [Azure Firewall forced tunneling](forced-tunneling.md).
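If you need the 0.0.0.0/0 UDR that the note describes, a minimal sketch might look like the following; the route table and route names are illustrative, and the table still has to be associated with **AzureFirewallSubnet** afterward.

```azurepowershell
# Illustrative names; keeps direct internet connectivity for the firewall subnet
$fwRouteTable = New-AzRouteTable -Name "rt-azfw-internet" -ResourceGroupName $RG1 -Location $Location1
Add-AzRouteConfig -RouteTable $fwRouteTable -Name "fw-default-internet" `
  -AddressPrefix "0.0.0.0/0" -NextHopType Internet | Set-AzRouteTable
```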
->[!NOTE]
->Traffic between directly peered VNets is routed directly even if a UDR points to Azure Firewall as the default gateway. To send subnet to subnet traffic to the firewall in this scenario, a UDR must contain the target subnet network prefix explicitly on both subnets.
+Traffic between directly peered virtual networks is routed directly, even if a UDR points to Azure Firewall as the default gateway. To send subnet-to-subnet traffic to the firewall in this scenario, a UDR must contain the target subnet network prefix explicitly on both subnets.
-To review the related Azure PowerShell reference documentation, see [Azure PowerShell Reference](/powershell/module/az.network/new-azfirewall).
+To review the related Azure PowerShell reference documentation, see [New-AzFirewall](/powershell/module/az.network/new-azfirewall).
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Declare the variables
-The following example declares the variables using the values for this article. In some cases, you might need to replace some values with your own to work in your subscription. Modify the variables if needed, then copy and paste them into your PowerShell console.
+The following example declares the variables by using the values for this article. In some cases, you might need to replace some values with your own to work in your subscription. Modify the variables if needed, and then copy and paste them into your PowerShell console.
```azurepowershell
$RG1 = "FW-Hybrid-Test"
$Location1 = "East US"
-# Variables for the firewall hub VNet
+# Variables for the firewall hub virtual network
-$VNetnameHub = "VNet-hub"
+$VNetnameHub = "VNet-Hub"
$SNnameHub = "AzureFirewallSubnet"
$VNetHubPrefix = "10.5.0.0/16"
$SNHubPrefix = "10.5.0.0/24"
$SNGWHubPrefix = "10.5.1.0/24"
$GWHubName = "GW-hub"
-$GWHubpipName = "VNet-hub-GW-pip"
+$GWHubpipName = "VNet-Hub-GW-pip"
$GWIPconfNameHub = "GW-ipconf-hub"
$ConnectionNameHub = "hub-to-Onprem"
$GWOnprempipName = "VNet-Onprem-GW-pip"
$SNnameGW = "GatewaySubnet"
```

## Create the firewall hub virtual network

First, create the resource group to contain the resources for this article:
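A minimal sketch of that resource group step, assuming the `$RG1` and `$Location1` variables declared earlier:

```azurepowershell
# Create the resource group that holds all resources for this article
New-AzResourceGroup -Name $RG1 -Location $Location1
```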
$FWsub = New-AzVirtualNetworkSubnetConfig -Name $SNnameHub -AddressPrefix $SNHub
$GWsub = New-AzVirtualNetworkSubnetConfig -Name $SNnameGW -AddressPrefix $SNGWHubPrefix
```
-Now, create the firewall hub virtual network:
+Create the firewall hub virtual network:
```azurepowershell
$VNetHub = New-AzVirtualNetwork -Name $VNetnameHub -ResourceGroupName $RG1 `
-Location $Location1 -AddressPrefix $VNetHubPrefix -Subnet $FWsub,$GWsub
```
-Request a public IP address to be allocated to the VPN gateway you'll create for your virtual network. Notice that the *AllocationMethod* is **Dynamic**. You can't specify the IP address that you want to use. It's dynamically allocated to your VPN gateway.
+Request a public IP address to be allocated to the VPN gateway that you'll create for your virtual network. Notice that the `AllocationMethod` value is `Dynamic`. You can't specify the IP address that you want to use. It's dynamically allocated to your VPN gateway.
- ```azurepowershell
- $gwpip1 = New-AzPublicIpAddress -Name $GWHubpipName -ResourceGroupName $RG1 `
- -Location $Location1 -AllocationMethod Dynamic
+```azurepowershell
+$gwpip1 = New-AzPublicIpAddress -Name $GWHubpipName -ResourceGroupName $RG1 `
+-Location $Location1 -AllocationMethod Dynamic
```

## Create the spoke virtual network
$Onpremsub = New-AzVirtualNetworkSubnetConfig -Name $SNNameOnprem -AddressPrefix
$GWOnpremsub = New-AzVirtualNetworkSubnetConfig -Name $SNnameGW -AddressPrefix $SNGWOnpremPrefix
```
-Now, create the on-premises virtual network:
+Create the on-premises virtual network:
```azurepowershell
$VNetOnprem = New-AzVirtualNetwork -Name $VNetnameOnprem -ResourceGroupName $RG1 `
-Location $Location1 -AddressPrefix $VNetOnpremPrefix -Subnet $Onpremsub,$GWOnpremsub
```
-Request a public IP address to be allocated to the gateway you'll create for the virtual network. Notice that the *AllocationMethod* is **Dynamic**. You can't specify the IP address that you want to use. It's dynamically allocated to your gateway.
+Request a public IP address to be allocated to the gateway that you'll create for the virtual network. Notice that the `AllocationMethod` value is `Dynamic`. You can't specify the IP address that you want to use. It's dynamically allocated to your gateway.
- ```azurepowershell
- $gwOnprempip = New-AzPublicIpAddress -Name $GWOnprempipName -ResourceGroupName $RG1 `
- -Location $Location1 -AllocationMethod Dynamic
+```azurepowershell
+$gwOnprempip = New-AzPublicIpAddress -Name $GWOnprempipName -ResourceGroupName $RG1 `
+-Location $Location1 -AllocationMethod Dynamic
```

## Configure and deploy the firewall
-Now deploy the firewall into the hub virtual network.
+Now, deploy the firewall into the hub virtual network:
```azurepowershell
-# Get a Public IP for the firewall
+# Get a public IP for the firewall
$FWpip = New-AzPublicIpAddress -Name "fw-pip" -ResourceGroupName $RG1 `
 -Location $Location1 -AllocationMethod Static -Sku Standard

# Create the firewall
$AzfwPrivateIP
```
-### Configure network rules
-
-<! $Rule3 = New-AzFirewallNetworkRule -Name "AllowPing" -Protocol ICMP -SourceAddress $SNOnpremPrefix `
- -DestinationAddress $VNetSpokePrefix -DestinationPort *>
+Configure network rules:
```azurepowershell
$Rule1 = New-AzFirewallNetworkRule -Name "AllowWeb" -Protocol TCP -SourceAddress $SNOnpremPrefix `
$Rule2 = New-AzFirewallNetworkRule -Name "AllowRDP" -Protocol TCP -SourceAddress $SNOnpremPrefix `
 -DestinationAddress $VNetSpokePrefix -DestinationPort 3389
+$Rule3 = New-AzFirewallNetworkRule -Name "AllowPing" -Protocol ICMP -SourceAddress $SNOnpremPrefix `
+ -DestinationAddress $VNetSpokePrefix -DestinationPort *
+$NetRuleCollection = New-AzFirewallNetworkRuleCollection -Name RCNet01 -Priority 100 `
+-Rule $Rule1,$Rule2,$Rule3 -ActionType "Allow"
+$Azfw.NetworkRuleCollections = $NetRuleCollection
The hub and on-premises virtual networks are connected via VPN gateways.
### Create a VPN gateway for the hub virtual network
-Create the VPN gateway configuration. The VPN gateway configuration defines the subnet and the public IP address to use.
+Create the VPN gateway configuration for the hub virtual network. The VPN gateway configuration defines the subnet and the public IP address to use.
- ```azurepowershell
- $vnet1 = Get-AzVirtualNetwork -Name $VNetnameHub -ResourceGroupName $RG1
- $subnet1 = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet1
- $gwipconf1 = New-AzVirtualNetworkGatewayIpConfig -Name $GWIPconfNameHub `
- -Subnet $subnet1 -PublicIpAddress $gwpip1
- ```
+```azurepowershell
+$vnet1 = Get-AzVirtualNetwork -Name $VNetnameHub -ResourceGroupName $RG1
+$subnet1 = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet1
+$gwipconf1 = New-AzVirtualNetworkGatewayIpConfig -Name $GWIPconfNameHub `
+-Subnet $subnet1 -PublicIpAddress $gwpip1
+```
-Now create the VPN gateway for the hub virtual network. Network-to-network configurations require a RouteBased VpnType. Creating a VPN gateway can often take 45 minutes or more, depending on the selected VPN gateway SKU.
+Now, create the VPN gateway for the hub virtual network. Network-to-network configurations require a `VpnType` value of `RouteBased`. Creating a VPN gateway can often take 45 minutes or more, depending on the SKU that you select.
```azurepowershell
New-AzVirtualNetworkGateway -Name $GWHubName -ResourceGroupName $RG1 `
### Create a VPN gateway for the on-premises virtual network
-Create the VPN gateway configuration. The VPN gateway configuration defines the subnet and the public IP address to use.
+Create the VPN gateway configuration for the on-premises virtual network. The VPN gateway configuration defines the subnet and the public IP address to use.
- ```azurepowershell
+```azurepowershell
$vnet2 = Get-AzVirtualNetwork -Name $VNetnameOnprem -ResourceGroupName $RG1
$subnet2 = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet2
$gwipconf2 = New-AzVirtualNetworkGatewayIpConfig -Name $GWIPconfNameOnprem `
- -Subnet $subnet2 -PublicIpAddress $gwOnprempip
- ```
+-Subnet $subnet2 -PublicIpAddress $gwOnprempip
+```
-Now create the VPN gateway for the on-premises virtual network. Network-to-network configurations require a RouteBased VpnType. Creating a VPN gateway can often take 45 minutes or more, depending on the selected VPN gateway SKU.
+Now, create the VPN gateway for the on-premises virtual network. Network-to-network configurations require a `VpnType` value of `RouteBased`. Creating a VPN gateway can often take 45 minutes or more, depending on the SKU that you select.
```azurepowershell
New-AzVirtualNetworkGateway -Name $GWOnpremName -ResourceGroupName $RG1 `
### Create the VPN connections
-Now you can create the VPN connections between the hub and on-premises gateways
+Create the VPN connections between the hub and on-premises gateways.
#### Get the VPN gateways
$vnetOnpremgw = Get-AzVirtualNetworkGateway -Name $GWOnpremName -ResourceGroupNa
#### Create the connections
-In this step, you create the connection from the hub virtual network to the on-premises virtual network. You'll see a shared key referenced in the examples. You can use your own values for the shared key. The important thing is that the shared key must match for both connections. Creating a connection can take a short while to complete.
+In this step, you create the connection from the hub virtual network to the on-premises virtual network. The examples show a shared key, but you can use your own values for the shared key. The important thing is that the shared key must match for both connections. Creating a connection can take a short while to complete.
```azurepowershell
New-AzVirtualNetworkGatewayConnection -Name $ConnectionNameHub -ResourceGroupName $RG1 `
-VirtualNetworkGateway1 $vnetHubgw -VirtualNetworkGateway2 $vnetOnpremgw -Location $Location1 `
-ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3'
```
-Create the on-premises to hub virtual network connection. This step is similar to the previous one, except you create the connection from VNet-Onprem to VNet-hub. Make sure the shared keys match. The connection will be established after a few minutes.
- ```azurepowershell
- New-AzVirtualNetworkGatewayConnection -Name $ConnectionNameOnprem -ResourceGroupName $RG1 `
- -VirtualNetworkGateway1 $vnetOnpremgw -VirtualNetworkGateway2 $vnetHubgw -Location $Location1 `
- -ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3'
- ```
+Create the virtual network connection from on-premises to the hub. This step is similar to the previous one, except that you create the connection from **VNet-Onprem** to **VNet-Hub**. Make sure that the shared keys match. The connection is established after a few minutes.
+
+```azurepowershell
+New-AzVirtualNetworkGatewayConnection -Name $ConnectionNameOnprem -ResourceGroupName $RG1 `
+-VirtualNetworkGateway1 $vnetOnpremgw -VirtualNetworkGateway2 $vnetHubgw -Location $Location1 `
+-ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3'
+```
#### Verify the connection
-You can verify a successful connection by using the *Get-AzVirtualNetworkGatewayConnection* cmdlet, with or without *-Debug*.
-Use the following cmdlet example, configuring the values to match your own. If prompted, select **A** to run **All**. In the example, *-Name* refers to the name of the connection that you want to test.
+You can verify a successful connection by using the `Get-AzVirtualNetworkGatewayConnection` cmdlet, with or without `-Debug`.
+
+Use the following cmdlet example, but configure the values to match your own. If you're prompted, select `A` to run `All`. In the example, `-Name` refers to the name of the connection that you want to test.
```azurepowershell
Get-AzVirtualNetworkGatewayConnection -Name $ConnectionNameHub -ResourceGroupName $RG1
```
-After the cmdlet finishes, view the values. In the following example, the connection status shows as *Connected* and you can see ingress and egress bytes.
+After the cmdlet finishes, view the values. The following example shows a connection status of `Connected`, along with ingress and egress bytes:
-```
+```output
"connectionStatus": "Connected", "ingressBytesTransferred": 33509044, "egressBytesTransferred": 4142431
After the cmdlet finishes, view the values. In the following example, the connec
## Peer the hub and spoke virtual networks
-Now peer the hub and spoke virtual networks.
+Now, peer the hub and spoke virtual networks:
```azurepowershell
# Peer hub to spoke
Add-AzVirtualNetworkPeering -Name SpoketoHub -VirtualNetwork $VNetSpoke -RemoteV
## Create the routes
-Next, create a couple routes:
+Use the following commands to create these routes:
- A route from the hub gateway subnet to the spoke subnet through the firewall IP address
- A default route from the spoke subnet through the firewall IP address
Set-AzVirtualNetworkSubnetConfig `
-RouteTable $routeTableHubSpoke | `
Set-AzVirtualNetwork
-#Now create the default route
+#Now, create the default route
#Create a table, with BGP route propagation disabled. The property is now called "Virtual network gateway route propagation," but the API still refers to the parameter as "DisableBgpRoutePropagation."
$routeTableSpokeDG = New-AzRouteTable `
Set-AzVirtualNetwork
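The default route from the spoke that the preceding comment describes might be sketched like this. The route name is illustrative; `$AzfwPrivateIP` is the firewall's private IP captured earlier in the article.

```azurepowershell
# Hedged sketch: send all spoke traffic to the firewall as the next hop
Add-AzRouteConfig -RouteTable $routeTableSpokeDG -Name "ToFirewall" `
  -AddressPrefix "0.0.0.0/0" -NextHopType VirtualAppliance -NextHopIpAddress $AzfwPrivateIP | `
  Set-AzRouteTable
```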
## Create virtual machines
-Now create the spoke workload and on-premises virtual machines, and place them in the appropriate subnets.
+Create the spoke workload and on-premises virtual machines, and place them in the appropriate subnets.
### Create the workload virtual machine
-Create a virtual machine in the spoke virtual network, running IIS, with no public IP address, and allows pings in.
-When prompted, type a user name and password for the virtual machine.
+Create a virtual machine in the spoke virtual network that runs Internet Information Services (IIS), has no public IP address, and allows pings in. When you're prompted, enter a username and password for the virtual machine.
```azurepowershell
# Create an inbound network security group rule for ports 3389 and 80
Set-AzVMExtension `
-TypeHandlerVersion 1.4 `
-SettingString '{"commandToExecute":"powershell Add-WindowsFeature Web-Server"}' `
-Location $Location1
-```
-<!#Create a host firewall rule to allow ping in
+#Create a host firewall rule to allow pings in
Set-AzVMExtension ` -ResourceGroupName $RG1 ` -ExtensionName IIS `
Set-AzVMExtension `
-ExtensionType CustomScriptExtension `
-TypeHandlerVersion 1.4 `
-SettingString '{"commandToExecute":"powershell New-NetFirewallRule -DisplayName \"Allow ICMPv4-In\" -Protocol ICMPv4"}' `
- -Location $Location1>
+ -Location $Location1
+```
### Create the on-premises virtual machine
-This is a simple virtual machine that you use to connect using Remote Desktop to the public IP address. From there, you then connect to the on-premises server through the firewall. When prompted, type a user name and password for the virtual machine.
+Create a simple virtual machine that you can use to connect via remote access to the public IP address. From there, you can connect to the on-premises server through the firewall. When you're prompted, enter a username and password for the virtual machine.
```azurepowershell
New-AzVm `
## Test the firewall
-First, get and then note the private IP address for **VM-spoke-01** virtual machine.
+1. Get and then note the private IP address for the **VM-spoke-01** virtual machine:
-```azurepowershell
-$NIC.IpConfigurations.privateipaddress
-```
-
-From the Azure portal, connect to the **VM-Onprem** virtual machine.
-<!2. Open a Windows PowerShell command prompt on **VM-Onprem**, and ping the private IP for **VM-spoke-01**.
+ ```azurepowershell
+ $NIC.IpConfigurations.privateipaddress
+ ```
- You should get a reply.>
-Open a web browser on **VM-Onprem**, and browse to http://\<VM-spoke-01 private IP\>.
+1. From the Azure portal, connect to the **VM-Onprem** virtual machine.
-You should see the Internet Information Services default page.
+1. Open a Windows PowerShell command prompt on **VM-Onprem**, and ping the private IP for **VM-spoke-01**. You should get a reply.
-From **VM-Onprem**, open a remote desktop to **VM-spoke-01** at the private IP address.
+1. Open a web browser on **VM-Onprem**, and browse to `http://<VM-spoke-01 private IP>`. The IIS default page should open.
-Your connection should succeed, and you should be able to sign in using your chosen username and password.
+1. From **VM-Onprem**, open a remote access connection to **VM-spoke-01** at the private IP address. Your connection should succeed, and you should be able to sign in by using your chosen username and password.
-So now you've verified that the firewall rules are working:
+Now that you've verified that the firewall rules are working, you can:
-<!- You can ping the server on the spoke VNet.>
-- You can browse web server on the spoke virtual network.
-- You can connect to the server on the spoke virtual network using RDP.
+- Ping the server on the spoke virtual network.
+- Browse to the web server on the spoke virtual network.
+- Connect to the server on the spoke virtual network by using RDP.
-Next, change the firewall network rule collection action to **Deny** to verify that the firewall rules work as expected. Run the following script to change the rule collection action to **Deny**.
+Next, run the following script to change the action for the collection of firewall network rules to `Deny`:
```azurepowershell
$rcNet = $azfw.GetNetworkRuleCollectionByName("RCNet01")
$rcNet.action.type = "Deny"
Set-AzFirewall -AzureFirewall $azfw
```
-Now run the tests again. They should all fail this time. Close any existing remote desktops before testing the changed rules.
+Close any existing remote access connections. Run the tests again to test the changed rules. They should all fail this time.
## Clean up resources
-You can keep your firewall resources for the next tutorial, or if no longer needed, delete the **FW-Hybrid-Test** resource group to delete all firewall-related resources.
+You can keep your firewall resources for the next tutorial. If you no longer need them, delete the **FW-Hybrid-Test** resource group to delete all firewall-related resources.
## Next steps
-Next, you can monitor the Azure Firewall logs.
-
-[Tutorial: Monitor Azure Firewall logs](./firewall-diagnostics.md)
+[Monitor Azure Firewall logs](./firewall-diagnostics.md)
frontdoor Front Door Http Headers Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-http-headers-protocol.md
Azure Front Door includes headers for an incoming request unless they're removed
| X-Azure-SocketIP | *X-Azure-SocketIP: 127.0.0.1* </br> Represents the socket IP address associated with the TCP connection that the current request originated from. A request's client IP address might not be equal to its socket IP address because the client IP can be arbitrarily overwritten by a user.|
| X-Azure-Ref | *X-Azure-Ref: 0zxV+XAAAAABKMMOjBv2NT4TY6SQVjC0zV1NURURHRTA2MTkANDM3YzgyY2QtMzYwYS00YTU0LTk0YzMtNWZmNzA3NjQ3Nzgz* </br> A unique reference string that identifies a request served by Front Door. It's used to search access logs and critical for troubleshooting.|
| X-Azure-RequestChain | *X-Azure-RequestChain: hops=1* </br> A header that Front Door uses to detect request loops, and users shouldn't take a dependency on it. |
-| X-Azure-FDID | *X-Azure-FDID: 55ce4ed1-4b06-4bf1-b40e-4638452104da* <br/> A reference string that identifies the request came from a specific Front Door resource. The value can be seen in the Azure portal or retrieved using the management API. You can use this header in combination with IP ACLs to lock down your endpoint to only accept requests from a specific Front Door resource. See the FAQ for [more detail](front-door-faq.yml#how-do-i-lock-down-the-access-to-my-backend-to-only-azure-front-door-) |
+| X-Azure-FDID | *X-Azure-FDID: 55ce4ed1-4b06-4bf1-b40e-4638452104da* <br/> A reference string that identifies the request came from a specific Front Door resource. The value can be seen in the Azure portal or retrieved using the management API. You can use this header in combination with IP ACLs to lock down your endpoint to only accept requests from a specific Front Door resource. See the FAQ for [more detail](front-door-faq.yml#what-are-the-steps-to-restrict-the-access-to-my-backend-to-only-azure-front-door-) |
| X-Forwarded-For | *X-Forwarded-For: 127.0.0.1* </br> The X-Forwarded-For (XFF) HTTP header field often identifies the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer. If there's an existing XFF header, then Front Door appends the client socket IP to it or adds the XFF header with the client socket IP. |
| X-Forwarded-Host | *X-Forwarded-Host: contoso.azurefd.net* </br> The X-Forwarded-Host HTTP header field is a common method used to identify the original host requested by the client in the Host HTTP request header. This is because the host name from Front Door may differ for the backend server handling the request. Any previous value will be overridden by Front Door. |
| X-Forwarded-Proto | *X-Forwarded-Proto: http* </br> The X-Forwarded-Proto HTTP header field is often used to identify the originating protocol of an HTTP request. Front Door based on configuration might communicate with the backend by using HTTPS. This is true even if the request to the reverse proxy is HTTP. Any previous value will be overridden by Front Door. |
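To combine `X-Azure-FDID` with IP ACLs as described above, you need the Front Door ID to compare against. A hedged Azure PowerShell sketch for retrieving it, with placeholder resource names:

```azurepowershell
# Placeholder names; Get-AzFrontDoor is in the Az.FrontDoor module
$fd = Get-AzFrontDoor -ResourceGroupName "myResourceGroup" -Name "myFrontDoor"
$fd.FrontDoorId  # match incoming X-Azure-FDID values against this ID
```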
frontdoor Front Door Waf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-waf.md
Finally, if you're using a custom domain to reach your web application and want
## Lock down your web application
-We recommend you ensure only Azure Front Door edges can communicate with your web application. Doing so will ensure no one can bypass the Azure Front Door protection and access your application directly. To accomplish this lockdown, see [How do I lock down the access to my backend to only Azure Front Door?](./front-door-faq.yml#how-do-i-lock-down-the-access-to-my-backend-to-only-azure-front-door-).
+We recommend you ensure only Azure Front Door edges can communicate with your web application. Doing so will ensure no one can bypass the Azure Front Door protection and access your application directly. To accomplish this lockdown, see [How do I lock down the access to my backend to only Azure Front Door?](./front-door-faq.yml#what-are-the-steps-to-restrict-the-access-to-my-backend-to-only-azure-front-door-).
## Clean up resources
governance Export Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/export-resources.md
Get-AzPolicyDefinition -Name 'd7fff7ea-9d47-4952-b854-b7da261e48f2' | ConvertTo-
## Export to CSV with Resource Graph in Azure Portal
-Azure Resource Graph gives the ability to query at scale with complex filtering, grouping and sorting. Azure Resource Graph supports the policy resources table, which supports querying policy resources such as definitions, assignments and exemptions. Review our [sample queries.](../../resource-graph/samples/samples-by-table.md#policyresources) Resource Graph explorer portal experience allows downloads of query results to csv using the ["Download to CSV"](../../resource-graph/first-query-portal.md#download-query-results-as-a-csv-file) toolbar option.
+Azure Resource Graph gives you the ability to query at scale with complex filtering, grouping, and sorting. Azure Resource Graph supports the policy resources table, which contains policy resources such as definitions, assignments, and exemptions. Review our [sample queries](../../resource-graph/samples/samples-by-table.md#policyresources).
+The Resource Graph explorer portal experience allows downloads of query results to CSV using the ["Download to CSV"](../../resource-graph/first-query-portal.md#download-query-results-as-a-csv-file) toolbar option.
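The same query-and-download flow can also be scripted instead of using the portal. A hedged sketch with the `Az.ResourceGraph` module; the query itself is only an illustrative example:

```azurepowershell
# Requires: Install-Module Az.ResourceGraph
Search-AzGraph -Query "policyresources | where type =~ 'microsoft.authorization/policyassignments'" |
  Export-Csv -Path ./policy-assignments.csv -NoTypeInformation
```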
## Next steps
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/export-data.md
The Azure API for FHIR supports $export at the following levels:
* [Patient](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpointall-patients): `GET https://<<FHIR service base URL>>/Patient/$export`
* [Group of patients*](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpointgroup-of-patients) - Azure API for FHIR exports all related resources but doesn't export the characteristics of the group: `GET https://<<FHIR service base URL>>/Group/[ID]/$export`
-With export, data is exported in multiple files each containing resources of only one type. The number of resources in an individual file will be limited. The maximum number of resources is based on system performance. It is currently set to 50,000, but can change. The result is that you may get multiple files for a resource type, which will be enumerated (for example, `Patient-1.ndjson`, `Patient-2.ndjson`).
+With export, data is exported in multiple files, each containing resources of only one type. The number of resources in an individual file is limited. The maximum number of resources is based on system performance. It is currently set to 5,000, but can change. The result is that you may get multiple files for a resource type, which will be enumerated (for example, `Patient-1.ndjson`, `Patient-2.ndjson`).
> [!NOTE]
> `Patient/$export` and `Group/[ID]/$export` may export duplicate resources if the resource is in a compartment of more than one resource, or is in multiple groups.
iot-hub-device-update Device Update Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-data-encryption.md
+
+ Title: Data encryption in Device Update for Azure IoT Hub
+description: Understand how Device Update for IoT Hub encrypts data.
++ Last updated : 09/22/2023++++
+# Data encryption for Device Update for IoT Hub
++
+Device Update for IoT Hub provides data protection through encryption at rest and in transit; data is encrypted when written to the datastores and decrypted when read.
+Data in a new Device Update account is encrypted with Microsoft-managed keys by default.
++
+Device Update also supports use of your own encryption keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Customer-managed keys offer greater flexibility to manage access controls.
+
+You must use one of the following Azure key stores to store your customer-managed keys:
+- Azure Key Vault
+- Azure Key Vault Managed Hardware Security Module (HSM)
+
+You can either create your own keys and store them in the key vault or managed HSM, or you can use the Azure Key Vault APIs to generate keys. The CMK is then used for all the instances in the Device Update account.
+
+> [!NOTE]
+> This capability requires the creation of a new Device Update account and instance with the Standard SKU. It isn't available for the free SKU of Device Update.
logic-apps Biztalk Server To Azure Integration Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/biztalk-server-to-azure-integration-services-overview.md
You can install and run BizTalk Server on your own hardware, on-premises virtual
- Availability and redundancy
- In Azure, [availability zones](../reliability/availability-zones-overview.md#availability-zones) provide resiliency, distributed availability, and active-active-active zone scalability. To increase availability for your logic app workloads, you can [enable availability zone support](./set-up-zone-redundancy-availability-zones.md), but only when you create your logic app. You'll need at least three separate availability zones in any Azure region that supports and enables zone redundancy. The Azure Logic Apps platform distributes these zones and logic app workloads across these zones. This capability is a key requirement for enabling resilient architectures and providing high availability if datacenter failures happen in a region. For more information, see [Build solutions for high availability using availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability).
+ In Azure, [availability zones](../reliability/availability-zones-overview.md#zonal-and-zone-redundant-services) provide resiliency, distributed availability, and active-active-active zone scalability. To increase availability for your logic app workloads, you can [enable availability zone support](./set-up-zone-redundancy-availability-zones.md), but only when you create your logic app. You'll need at least three separate availability zones in any Azure region that supports and enables zone redundancy. The Azure Logic Apps platform distributes these zones and logic app workloads across these zones. This capability is a key requirement for enabling resilient architectures and providing high availability if datacenter failures happen in a region. For more information, see [Build solutions for high availability using availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability).
- Isolated and dedicated environment
machine-learning Dsvm Tutorial Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tutorial-resource-manager.md
Title: 'Quickstart: Create a Data Science VM - Resource Manager template'
description: In this quickstart, you use an Azure Resource Manager template to quickly deploy a Data Science Virtual Machine --++ Last updated 06/10/2020
machine-learning How To Debug Pipeline Reuse Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-pipeline-reuse-issues.md
Azure Machine Learning pipeline has holistic logic to calculate whether a compon
Reuse criteria: -- Component definition `is_determinstic` = true
+- Component definition `is_deterministic` = true
- Pipeline runtime setting `ForceReRun` = false - Component code, environment definition, inputs and parameters, output settings, and run settings are all the same.
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
Previously updated : 07/17/2023 Last updated : 09/18/2023 reviewer: msakande
Before following the steps in this article, make sure you have the following pre
### Virtual machine quota allocation for deployment
-For managed online endpoints, Azure Machine Learning reserves 20% of your compute resources for performing upgrades. Therefore, if you request a given number of instances in a deployment, you must have a quota for `ceil(1.2 * number of instances requested for deployment) * number of cores for the VM SKU` available to avoid getting an error. For example, if you request 10 instances of a [Standard_DS3_v2](/azure/virtual-machines/dv2-dsv2-series) VM (that comes with 4 cores) in a deployment, you should have a quota for 48 cores (`12 instances * 4 cores`) available. To view your usage and request quota increases, see [View your usage and quotas in the Azure portal](how-to-manage-quotas.md#view-your-usage-and-quotas-in-the-azure-portal).
+For managed online endpoints, Azure Machine Learning reserves 20% of your compute resources for performing upgrades on some VM SKUs. If you request a given number of instances in a deployment, you must have a quota for `ceil(1.2 * number of instances requested for deployment) * number of cores for the VM SKU` available to avoid getting an error. For example, if you request 10 instances of a [Standard_DS3_v2](/azure/virtual-machines/dv2-dsv2-series) VM (that comes with 4 cores) in a deployment, you should have a quota for 48 cores (`12 instances * 4 cores`) available. To view your usage and request quota increases, see [View your usage and quotas in the Azure portal](how-to-manage-quotas.md#view-your-usage-and-quotas-in-the-azure-portal).
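The quota arithmetic above can be sketched as a minimal illustration of the stated formula:

```python
import math

def required_quota_cores(requested_instances, cores_per_vm):
    # Azure ML reserves 20% extra capacity for upgrades:
    # ceil(1.2 * instances requested) * cores for the VM SKU
    return math.ceil(1.2 * requested_instances) * cores_per_vm

# 10 Standard_DS3_v2 instances (4 cores each) -> 48 cores of quota
print(required_quota_cores(10, 4))  # 48
```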
-<!-- In this tutorial, you'll request one instance of a Standard_DS2_v2 VM SKU (that comes with 2 cores) in your deployment; therefore, you should have a minimum quota for 4 cores (`2 instances*2 cores`) available. -->
-+
+Azure Machine Learning provides a [shared quota](how-to-manage-quotas.md#azure-machine-learning-shared-quota) pool from which all users can access quota to perform testing for a limited time. When you use the studio to deploy Llama models (from the model catalog) to a managed online endpoint, Azure Machine Learning allows you to access this shared quota for a short time.
+
+To deploy a _Llama-2-70b_ or _Llama-2-70b-chat_ model, however, you must have an [Enterprise Agreement subscription](/azure/cost-management-billing/manage/create-enterprise-subscription) before you can deploy using the shared quota. For more information on how to use the shared quota for online endpoint deployment, see [How to deploy foundation models using the studio](how-to-use-foundation-models.md#deploying-using-the-studio).
## Prepare your system
machine-learning How To Use Foundation Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-foundation-models.md
Since the scoring script and environment are automatically included with the fou
:::image type="content" source="./media/how-to-use-foundation-models/deploy-options.png" alt-text="Screenshot showing the deploy options on the foundation model card after user selects the deploy button.":::
+If you're deploying a Llama model from the model catalog but don't have enough quota available for the deployment, Azure Machine Learning allows you to use quota from a shared quota pool for a limited time. For _Llama-2-70b_ and _Llama-2-70b-chat_ model deployment, access to the shared quota is available only to customers with [Enterprise Agreement subscriptions](/azure/cost-management-billing/manage/create-enterprise-subscription). For more information on shared quota, see [Azure Machine Learning shared quota](how-to-manage-quotas.md#azure-machine-learning-shared-quota).
++ ### Deploying using code based samples To enable users to quickly get started with deployment and inferencing, we have published samples in the [Inference samples in the azureml-examples git repo](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/inference). The published samples include Python notebooks and CLI examples. Each model card also links to Inference samples for Real time and Batch inferencing.
machine-learning How To Customize Environment Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-customize-environment-runtime.md
RUN pip install -r requirements.txt
``` > [!NOTE]
-> This docker image should be built from prompt flow base image that is `mcr.microsoft.com/azureml/promptflow/promptflow-runtime:<newest_version>`. If possible use the [latest version of the base image](https://mcr.microsoft.com/v2/azureml/promptflow/promptflow-runtime/tags/list).
+> This docker image should be built from prompt flow base image that is `mcr.microsoft.com/azureml/promptflow/promptflow-runtime-stable:<newest_version>`. If possible use the [latest version of the base image](https://mcr.microsoft.com/v2/azureml/promptflow/promptflow-runtime/tags/list).
### Step 2: Create custom Azure Machine Learning environment
from azure.ai.ml.entities import CustomApplications, ImageSettings, EndpointsSet
ml_client = MLClient.from_config(credential=credential)
-image = ImageSettings(reference='mcr.microsoft.com/azureml/promptflow/promptflow-runtime:<newest_version>')
+image = ImageSettings(reference='mcr.microsoft.com/azureml/promptflow/promptflow-runtime-stable:<newest_version>')
endpoints = [EndpointsSettings(published=8081, target=8080)]
Follow [this document to add custom application](../how-to-create-compute-instan
## Next steps - [Develop a standard flow](how-to-develop-a-standard-flow.md)-- [Develop a chat flow](how-to-develop-a-chat-flow.md)
+- [Develop a chat flow](how-to-develop-a-chat-flow.md)
machine-learning How To End To End Llmops With Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-end-to-end-llmops-with-prompt-flow.md
Go to workspace portal, select **Prompt flow** -> **Runtime** -> **Add**, then f
Clone repo to your local machine. ```bash
- git clone https://github.com/<user-name>/llmops-pipeline
+ git clone https://github.com/<user-name>/llmops-gha-demo
``` ### Update workflow to connect to your Azure Machine Learning workspace
managed-instance-apache-cassandra Resilient Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/resilient-applications.md
+
+ Title: Building resilient applications
+
+description: Learn about best practices for high availability and disaster recovery for Azure Managed Instance for Apache Cassandra
+++ Last updated : 09/21/2023+
+keywords: azure high availability disaster recovery cassandra resiliency
++
+# Best practices for high availability and disaster recovery
+
+Azure Managed Instance for Apache Cassandra provides automated deployment and scaling operations for managed open-source Apache Cassandra datacenters. Apache Cassandra is a great choice for building highly resilient applications due to its distributed nature and masterless architecture - any node in the database can provide the exact same functionality as any other node - contributing to Cassandra's robustness and resilience. This article provides tips on how to optimize high availability and how to approach disaster recovery.
+
+## Availability zones
+
+Cassandra's masterless architecture brings fault tolerance from the ground up, and Azure Managed Instance for Apache Cassandra provides support for [availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones) in selected regions to enhance resiliency at the infrastructure level. Given a replication factor of 3, availability zone support ensures that each replica is in a different availability zone, thus preventing a zonal outage from impacting your database/application. We recommend enabling availability zones where possible.
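The replica placement described above can be pictured with a simplified sketch (illustrative only; actual placement is handled by Cassandra's zone-aware replica placement, not by application code):

```python
def place_replicas(replication_factor, zones):
    """Round-robin each replica into a distinct availability zone."""
    return [zones[i % len(zones)] for i in range(replication_factor)]

# Replication factor 3 across three zones -> one replica per zone,
# so a single zonal outage leaves two replicas available.
print(place_replicas(3, ["zone-1", "zone-2", "zone-3"]))
```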
+
+## Multi-region redundancy
+
+Cassandra's architecture, coupled with Azure availability zones support, gives you some level of fault tolerance and resiliency. However, it's important to consider the impact of regional outages for your applications. We highly recommend deploying [multi-region clusters](create-multi-region-cluster.md) to safeguard against region-level outages. Although such outages are rare, their potential impact is severe.
+
+For business continuity, it is not sufficient to only make the database multi-region. Other parts of your application also need to be deployed in the same manner, either by being distributed or with adequate mechanisms to fail over. If your users are spread across many geographic locations, a multi-region data center deployment for your database has the added benefit of reducing latency, since all nodes in all data centers across the cluster can then serve both reads and writes from the region that is closest to them. However, if the application is configured to be "active-active", it's important to consider how the [CAP theorem](https://cassandra.apache.org/doc/latest/cassandra/architecture/guarantees.html#what-is-cap) applies to the consistency of your data between replicas (nodes), and the trade-offs required to deliver high availability.
+
+In CAP theorem terms, Cassandra is by default an AP (Available Partition-tolerant) database system, with highly [tunable consistency](https://cassandra.apache.org/doc/4.1/cassandra/architecture/dynamo.html#tunable-consistency). For most use cases, we recommend using LOCAL_QUORUM for reads.
+
+- In active-passive for writes, there's a trade-off between reliability and performance: for reliability we recommend EACH_QUORUM, but for most users LOCAL_QUORUM or QUORUM is a good compromise. Note, however, that in the case of a regional outage, some writes might be lost with LOCAL_QUORUM.
+- In the case of an application being run in parallel, EACH_QUORUM writes are preferred for most cases to ensure consistency between the two data centers.
+- If your goal is to favor consistency (lower RPO) rather than latency or availability (lower RTO), this should be reflected in your consistency settings and replication factor. As a rule of thumb, the number of nodes required for a read plus the number of nodes required for a write should be greater than the replication factor. For example, if you have a replication factor of 3 and read at consistency level ONE (1 node), you should write at ALL (3 nodes), so that the total of 4 is greater than the replication factor of 3.
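The rule of thumb above can be expressed as a quick check (an illustrative sketch; the node counts implied by each consistency level depend on your topology):

```python
def is_strongly_consistent(read_nodes, write_nodes, replication_factor):
    """Rule of thumb: reads + writes must exceed the replication factor."""
    return read_nodes + write_nodes > replication_factor

# RF 3: reading from 1 node requires writing to all 3 (1 + 3 > 3)
print(is_strongly_consistent(1, 3, 3))   # True
# RF 3: reading and writing at one node each is not strongly consistent
print(is_strongly_consistent(1, 1, 3))   # False
```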
++
+## Replication
+
+We recommend auditing `keyspaces` and their replication settings from time to time to ensure the required replication between data centers has been configured. In the early stages of development, we recommend testing that everything works as expected by doing simple tests using `cqlsh`. For example, inserting a value while connected to one data center and reading it from the other.
+
+In particular, when setting up a second data center where an existing data center already has data, it's important to determine that all the data has been replicated and the system is ready. We recommend monitoring replication progress through our [DBA commands with `nodetool netstats`](dba-commands.md#how-to-run-a-nodetool-command). An alternate approach would be to count the rows in each table, but keep in mind that with big data sizes, due to the distributed nature of Cassandra, this can only give a rough estimate.
++
+## Balancing the cost of disaster recovery
+
+If your application is "active-passive", we still generally recommend that you deploy the same capacity in each region so that your application can fail over instantly to a "hot standby" data center in a secondary region. This ensures no performance degradation in the case of a regional outage. Most Cassandra [client drivers](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html) provide options to initiate application-level failover. By default, they assume a regional outage means that the application is also down, in which case failover should happen at the load balancer level.
+
+However, to reduce the cost of provisioning a second data center, you may prefer to deploy a smaller SKU, and fewer nodes, in your secondary region. When an outage occurs, scaling up is made easier in Azure Managed Instance for Apache Cassandra by [turnkey vertical and horizontal scaling](create-cluster-portal.md#scale-a-datacenter). While your applications fail over to your secondary region, you can manually [scale out](create-cluster-portal.md#horizontal-scale) and [scale up](create-cluster-portal.md#vertical-scale) the nodes in your secondary data center. In this case, your secondary data center acts as a lower-cost warm standby. Taking this approach would need to be balanced against the time required to restore your system to full capacity in the event of an outage. It's important to test and practice what happens when a region is lost.
+
+ > [!NOTE]
+ > Scaling up nodes is much faster than scaling out. Keep this in mind when considering the balance between vertical and horizontal scale, and the number of nodes to deploy in your cluster.
+
+## Backup schedules
+
+Backups are automatic in Azure Managed Instance for Apache Cassandra, but you can pick your own schedule for the daily backups. We recommend choosing times with less load. Though backups are configured to only consume idle CPU, they can in some circumstances trigger [compactions](https://cassandra.apache.org/doc/latest/cassandra/operating/compaction/index.html) in Cassandra, which can lead to an increase in CPU usage. Compactions can happen anytime with Cassandra, and depend on workload and chosen compaction strategy.
+
+ > [!IMPORTANT]
 > The intention of backups is purely to mitigate accidental data loss or data corruption. We do **not** recommend backups as a disaster recovery strategy. Backups are not geo-redundant, and even if they were, it can take a very long time to recover a database from backups. Therefore, we strongly recommend multi-region deployments, coupled with enabling availability zones where possible, to mitigate against disaster scenarios and to recover effectively from them. This is particularly important in the rare scenarios where a failed region can't be recovered, because without multi-region replication all data may be lost.
+
+ :::image type="content" source="./media/resilient-applications/backup.png" alt-text="Screenshot of backup schedule configuration page." lightbox="./media/resilient-applications/backup.png" border="true":::
+
+## Next steps
+
+In this article, we laid out some best practices for building resilient applications with Cassandra.
+
+> [!div class="nextstepaction"]
+> [Create a cluster using Azure Portal](create-cluster-portal.md)
mariadb Whats Happening To Mariadb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/whats-happening-to-mariadb.md
# What's happening to Azure Database for MariaDB? - Azure Database for MariaDB is on the retirement path, and **Azure Database for MariaDB is scheduled for retirement by September 19, 2025**. As part of this retirement, there's no extended support for creating new MariaDB server instances from the Azure portal beginning **December 19, 2023**. If you still need to create MariaDB instances to meet business continuity needs, you can use the [Azure CLI](/azure/mysql/single-server/quickstart-create-mysql-server-database-using-azure-cli) until **March 19, 2024**.
mysql How To Networking Private Link Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-networking-private-link-azure-cli.md
az network private-endpoint create \
--subnet mySubnet \ --private-connection-resource-id $(az resource show -g myResourcegroup -n mydemoserver --resource-type "Microsoft.DBforMySQL/flexibleServers" --query "id" -o tsv) \ --group-id mysqlServer \
- --connection-name myConnection
+ --connection-name myConnection \
+ --location location
``` ### Configure the Private DNS Zone
private-5g-core Azure Private 5G Core Release Notes 2308 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2308.md
+
+ Title: Azure Private 5G Core 2308 release notes
+description: Discover what's new in the Azure Private 5G Core 2308 release
++++ Last updated : 09/21/2023++
+# Azure Private 5G Core 2308 release notes
+
+The following release notes identify the new features, critical open issues, and resolved issues for the 2308 release of Azure Private 5G Core (AP5GC). The release notes are continuously updated, with critical issues requiring a workaround added as they're discovered. Before deploying this new version, review the information contained in these release notes.
+
+This article applies to the AP5GC 2308 release (2308.0-4). This release is compatible with the Azure Stack Edge Pro 1 GPU and Azure Stack Edge Pro 2 running the ASE 2303 and ASE 2309 releases and is supported by the 2023-06-01 and 2022-11-01 [Microsoft.MobileNetwork](/rest/api/mobilenetwork) API versions.
+
+For more details about compatibility, see [Packet core and Azure Stack Edge compatibility](azure-stack-edge-packet-core-compatibility.md).
+
+With this release, there's a new naming scheme and Packet Core versions are now called '2308.0-1' rather than 'PMN-2308'.
+
+> [!WARNING]
+> For this release, it's important that the packet core version is upgraded to the AP5GC 2308 release before upgrading to the ASE 2309 release. Upgrading to ASE 2309 before upgrading to Packet Core 2308.0-1 causes a total system outage. Recovery requires you to delete and re-create the AKS cluster on your ASE.
+
+## Support lifetime
+
+Packet core versions are supported until two subsequent versions have been released (unless otherwise noted), which is typically two months after the release date. You should plan to upgrade your packet core in this time frame to avoid losing support.
+
+### Currently supported packet core versions
+The following table shows the support status for different Packet Core releases.
+
+| Release | Support Status |
+||-|
+| AP5GC 2308 | Supported until AP5GC 2311 released |
+| AP5GC 2307 | Supported until AP5GC 2310 released |
+| AP5GC 2306 and earlier | Out of Support |
+
+## What's new
+
+### 10 DNs
+In this release, the number of supported data networks (DNs) increases from three to 10, including with layer 2 traffic separation. If more than six DNs are required, a shared switch for access and core traffic is needed.
+
+To add a data network to your packet core, see [Modify a packet core instance](modify-packet-core.md).
+
+### Default MTU values
+In this release, the default MTU values are changed as follows:
+- UE MTU: 1440 (was 1300)
+- Access MTU: 1500 (was 1500)
+- Data MTU: 1440 (was 1500)
+
+Customers upgrading to 2308 see a change in the MTU values on their packet core.
+
+When the UE MTU is set to any valid value (see API Spec), then the other MTUs are set to:
+- Access MTU: UE MTU + 60
+- Data MTU: UE MTU
+
+Rollbacks to Packet Core versions earlier than 2308 are not possible if the UE MTU field is changed following an upgrade.
+
+To change the UE MTU signaled by the packet core, see [Modify a packet core instance](modify-packet-core.md).
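Assuming the derivation described above, the other MTUs follow mechanically from the UE MTU (an illustrative sketch only):

```python
def derived_mtus(ue_mtu):
    """Derive the access and data MTUs from the UE MTU, per the rules above."""
    return {"ue": ue_mtu, "access": ue_mtu + 60, "data": ue_mtu}

# The 2308 defaults: UE MTU 1440 -> access MTU 1500, data MTU 1440
print(derived_mtus(1440))
```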
+
+### MTU Interop setting
+In this release the MTU Interop setting is deprecated and can't be set for Packet Core versions 2308 and above.
+
+<!-- Removed as no issues fixed in the AP5GC2308 release>
+## Issues fixed in the AP5GC 2308 release
+
+The following table provides a summary of issues fixed in this release.
+
+ |No. |Feature | Issue |
+ |--|--|--|
+ | 1 | | |
+-->
+
+## Known issues in the AP5GC 2308 release
+ |No. |Feature | Issue | Workaround/comments |
+ |--|--|--|--|
 | 1 | Packet Forwarding | A slight (0.01%) increase in packet drops is observed in the latest AP5GC release installed on ASE Platform Pro 2 with ASE-2309 for throughput higher than 3.0 Gbps. | None |
+ | 2 | Local distributed tracing | In Multi PDN session establishment/Release call flows with different DNs, the distributed tracing web GUI fails to display some of 4G NAS messages (Activate/deactivate Default EPS Bearer Context Request) and some S1AP messages (ERAB request, ERAB Release). | None |
+ | 3 | Local distributed tracing | When a web proxy is enabled on the Azure Stack Edge appliance that the packet core is running on and Azure Active Directory is used to authenticate access to AP5GC Local Dashboards, the traffic to Azure Active Directory doesn't transmit via the web proxy. If there's a firewall blocking traffic that does not go via the web proxy then enabling Azure Active Directory causes the packet core install to fail. | Disable Azure Active Directory and use password based authentication to authenticate access to AP5GC Local Dashboards instead. |
+ | 4 | Packet Forwarding | In scenarios of sustained high load (for example, continuous setup of 100's of TCP flows per second) in 4G setups, AP5GC may encounter an internal error, leading to a short period of service disruption resulting in some call failures. | In most cases, the system will recover on its own and be able to handle new requests after a few seconds' disruption. For existing connections that are dropped the UEs need to re-establish the connection. |
++
+The following table provides a summary of known issues carried over from the previous releases.
+
+ |No. |Feature | Issue | Workaround/comments |
+ |--|--|--|--|
+ | 1 | Local Dashboards | When a web proxy is enabled on the Azure Stack Edge appliance that the packet core is running on and Azure Active Directory is used to authenticate access to AP5GC Local Dashboards, the traffic to Azure Active Directory doesn't transmit via the web proxy. If there's a firewall blocking traffic that doesn't go via the web proxy then enabling Azure Active Directory causes the packet core install to fail. | Disable Azure Active Directory and use password based authentication to authenticate access to AP5GC Local Dashboards instead. |
+
+
+## Next steps
+
+- [Upgrade the packet core instance in a site - Azure portal](upgrade-packet-core-azure-portal.md)
+- [Upgrade the packet core instance in a site - ARM template](upgrade-packet-core-arm-template.md)
private-5g-core Azure Stack Edge Packet Core Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-stack-edge-packet-core-compatibility.md
Previously updated : 05/31/2023 Last updated : 09/07/2023 # Packet core and Azure Stack Edge (ASE) compatibility
The following table provides information on which versions of the ASE device are
| Packet core version | ASE Pro GPU compatible versions | ASE Pro 2 compatible versions | |--|--|--|
+| 2308 | 2303, 2309 | 2303, 2309 |
| 2307 | 2303 | 2303 | | 2306 | 2303 | 2303 | | 2305 | 2303 | 2303 |
private-5g-core Collect Required Information For A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-a-site.md
Last updated 02/07/2022
+zone_pivot_groups: ase-pro-version
# Collect the required information for a site
Collect all the values in the following table for the packet core instance that
Collect all the values in the following table to define the packet core instance's connection to the access network over the control plane and user plane interfaces. The field name displayed in the Azure portal will depend on the value you have chosen for **Technology type**, as described in [Collect packet core configuration values](#collect-packet-core-configuration-values). |Value |Field name in Azure portal | |||
- | The IP address for the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. You identified this address in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses). </br></br> This IP address must match the value you used when deploying the AKS-HCI cluster on your Azure Stack Edge Pro device. You did this as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices). |**N2 address (Signaling)** (for 5G) or **S1-MME address** (for 4G). |
- | The virtual network name on port 5 on your Azure Stack Edge Pro device corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. | **ASE N2 virtual subnet** (for 5G) or **ASE S1-MME virtual subnet** (for 4G). |
- | The virtual network name on port 5 on your Azure Stack Edge Pro device corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface. | **ASE N3 virtual subnet** (for 5G) or **ASE S1-U virtual subnet** (for 4G). |
+ | The IP address for the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. You identified this address in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-gpu#allocate-subnets-and-ip-addresses). </br></br> This IP address must match the value you used when deploying the AKS-HCI cluster on your Azure Stack Edge Pro device. You did this as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-gpu#order-and-set-up-your-azure-stack-edge-pro-devices). |**N2 address (Signaling)** (for 5G) or **S1-MME address** (for 4G). |
+ | The virtual network name on port 5 on your Azure Stack Edge Pro GPU corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. | **ASE N2 virtual subnet** (for 5G) or **ASE S1-MME virtual subnet** (for 4G). |
+ | The virtual network name on port 5 on your Azure Stack Edge Pro GPU corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface. | **ASE N3 virtual subnet** (for 5G) or **ASE S1-U virtual subnet** (for 4G). |
+ |Value |Field name in Azure portal |
+ |||
+ | The IP address for the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. You identified this address in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-2#allocate-subnets-and-ip-addresses). </br></br> This IP address must match the value you used when deploying the AKS-HCI cluster on your Azure Stack Edge Pro device. You did this as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-2#order-and-set-up-your-azure-stack-edge-pro-devices). |**N2 address (Signaling)** (for 5G) or **S1-MME address** (for 4G). |
+ | The virtual network name on port 3 on your Azure Stack Edge Pro 2 corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. | **ASE N2 virtual subnet** (for 5G) or **ASE S1-MME virtual subnet** (for 4G). |
+ | The virtual network name on port 3 on your Azure Stack Edge Pro 2 corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface. | **ASE N3 virtual subnet** (for 5G) or **ASE S1-U virtual subnet** (for 4G). |
## Collect data network values
You can configure up to ten data networks per site. During site creation, you'll
For each data network that you want to configure, collect all the values in the following table. These values define the packet core instance's connection to the data network over the user plane interface, so you need to collect them whether you're creating the data network or using an existing one.
+ |Value |Field name in Azure portal |
+ |||
+ | The name of the data network. This could be an existing data network or a new one you'll create during packet core configuration. |**Data network name**|
+ | The virtual network name on port 6 (or port 5 if you plan to have more than six data networks) on your Azure Stack Edge Pro GPU device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface. | **ASE N6 virtual subnet** (for 5G) or **ASE SGi virtual subnet** (for 4G). |
+ | The network address of the subnet from which dynamic IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support dynamic IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-gpu#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`192.0.2.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**|
+ | The network address of the subnet from which static IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support static IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-gpu#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`203.0.113.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
+ | The Domain Name System (DNS) server addresses to be provided to the UEs connected to this data network. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-gpu#allocate-subnets-and-ip-addresses). </br></br>This value may be an empty list if you don't want to configure a DNS server for the data network. In this case, UEs in this data network will be unable to resolve domain names. | **DNS Addresses** |
+ |Whether Network Address and Port Translation (NAPT) should be enabled for this data network. NAPT allows you to translate a large pool of private IP addresses for UEs to a small number of public IP addresses. The translation is performed at the point where traffic enters the data network, maximizing the utility of a limited supply of public IP addresses. </br></br>When NAPT is disabled, static routes to the UE IP pools via the appropriate user plane data IP address for the corresponding attached data network must be configured in the data network router. </br></br>If you want to use [UE-to-UE traffic](private-5g-core-overview.md#ue-to-ue-traffic) in this data network, keep NAPT disabled. |**NAPT**|
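As a sanity check before site creation, the example UE IP pool prefixes above, and the static routes the data network router needs when NAPT is disabled, can be verified with a short script. This is an illustrative sketch only (not part of any Azure tooling), using Python's standard `ipaddress` module; the user plane address `10.0.0.4` is a made-up placeholder for the N6/SGi data IP address.

```python
import ipaddress

# Example UE pool prefixes from the table above.
dynamic_pool = ipaddress.ip_network("192.0.2.0/24")
static_pool = ipaddress.ip_network("203.0.113.0/24")

# The dynamic and static UE pools must not overlap each other.
assert not dynamic_pool.overlaps(static_pool), "UE IP pools overlap"

# When NAPT is disabled, the data network router needs a static route to
# each UE pool via the user plane data IP address. 10.0.0.4 is a
# hypothetical placeholder for that address.
user_plane_ip = ipaddress.ip_address("10.0.0.4")
for pool in (dynamic_pool, static_pool):
    print(f"ip route add {pool} via {user_plane_ip}")
```

The printed `ip route` lines follow common Linux router syntax; adapt them to whatever router manages your data network.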
|Value |Field name in Azure portal |
|||
| The name of the data network. This could be an existing data network or a new one you'll create during packet core configuration. |**Data network name**|
- | The virtual network name on port 6 on your Azure Stack Edge Pro device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface. | **ASE N6 virtual subnet** (for 5G) or **ASE SGi virtual subnet** (for 4G). |
- | The network address of the subnet from which dynamic IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support dynamic IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`192.0.2.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**|
- | The network address of the subnet from which static IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support static IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`203.0.113.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
- | The Domain Name System (DNS) server addresses to be provided to the UEs connected to this data network. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses). </br></br>This value may be an empty list if you don't want to configure a DNS server for the data network. In this case, UEs in this data network will be unable to resolve domain names. | **DNS Addresses** |
+ | The virtual network name on port 4 (or port 3 if you plan to have more than six data networks) on your Azure Stack Edge Pro 2 device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface. | **ASE N6 virtual subnet** (for 5G) or **ASE SGi virtual subnet** (for 4G). |
+ | The network address of the subnet from which dynamic IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support dynamic IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-2#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`192.0.2.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**|
+ | The network address of the subnet from which static IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support static IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-2#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`203.0.113.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
+ | The Domain Name System (DNS) server addresses to be provided to the UEs connected to this data network. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-2#allocate-subnets-and-ip-addresses). </br></br>This value may be an empty list if you don't want to configure a DNS server for the data network. In this case, UEs in this data network will be unable to resolve domain names. | **DNS Addresses** |
|Whether Network Address and Port Translation (NAPT) should be enabled for this data network. NAPT allows you to translate a large pool of private IP addresses for UEs to a small number of public IP addresses. The translation is performed at the point where traffic enters the data network, maximizing the utility of a limited supply of public IP addresses. </br></br>When NAPT is disabled, static routes to the UE IP pools via the appropriate user plane data IP address for the corresponding attached data network must be configured in the data network router. </br></br>If you want to use [UE-to-UE traffic](private-5g-core-overview.md#ue-to-ue-traffic) in this data network, keep NAPT disabled. |**NAPT**|

## Collect values for diagnostics package gathering
If you want to provide a custom HTTPS certificate at site creation, follow the s
Use the information you've collected to create the site:
- - [Create a site - Azure portal](create-a-site.md)
- - [Create a site - ARM template](create-site-arm-template.md)
+- [Create a site - Azure portal](create-a-site.md)
+- [Create a site - ARM template](create-site-arm-template.md)
private-5g-core Commission Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/commission-cluster.md
You can input all the settings on this page before selecting **Apply** at the bo
> [!IMPORTANT]
> If you are using port 3 for data networks, we recommend that it is used for the lowest expected load.

1. Select **Add virtual network** and fill in the side panel:
- - **Virtual switch**: select **vswitch-port3** for N2, N3 and up to four DNs, and select **vswitch-port4** for up to six DNs.
- - **Name**: *N2*, *N3*, or *N6-DNX* (where *X* is the DN number 1-10).
- - **VLAN**: 0
- - **Subnet mask** and **Gateway**: Use the correct subnet mask and gateway for the IP address configured on the ASE port (even if the gateway is not set on the ASE port itself).
+ - **Virtual switch**: select **vswitch-port3** for N2, N3 and up to four DNs, and select **vswitch-port4** for up to six DNs.
+ - **Name**: *N2*, *N3*, or *N6-DNX* (where *X* is the DN number 1-10).
+ - **VLAN**: 0
+ - **Subnet mask** and **Gateway**: Use the correct subnet mask and gateway for the IP address configured on the ASE port (even if the gateway is not set on the ASE port itself).
     - For example, *255.255.255.0* and *10.232.44.1*
     - If the subnet does not have a default gateway, use another IP address in the subnet which will respond to ARP requests (such as one of the RAN IP addresses). If there's more than one gNB connected via a switch, choose one of the IP addresses for the gateway.
   - **DNS server** and **DNS suffix** should be left blank.
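The subnet mask and gateway pairing above can be checked before you enter it in the side panel. A minimal sketch, assuming the example values *255.255.255.0* and *10.232.44.1* from above (the subnet address `10.232.44.0` is inferred from those values), using Python's standard `ipaddress` module rather than anything ASE-specific:

```python
import ipaddress

# Example values from above: the ASE port's subnet (mask 255.255.255.0)
# and the gateway configured for the virtual network.
network = ipaddress.ip_network("10.232.44.0/255.255.255.0")
gateway = ipaddress.ip_address("10.232.44.1")

# The gateway (or an ARP-responding substitute such as a RAN IP address)
# must be a usable host address inside the subnet.
assert gateway in network.hosts(), "gateway is outside the subnet"
print(f"{gateway} is a valid gateway for {network}")
```

The same check applies when you substitute a RAN IP address for a missing default gateway, as described above.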
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
Allocate the following IP addresses for each data network in the site:
- Default gateway.
- One IP address for the user plane interface. For 5G, this interface is the N6 interface, whereas for 4G, it's the SGi interface.*

The following IP addresses must be used by all the data networks in the site:

- One IP address for all data networks on port 3 on the Azure Stack Edge Pro 2 device.
- One IP address for all data networks on port 4 on the Azure Stack Edge Pro 2 device.

:::zone-end

:::zone pivot="ase-pro-gpu"
-The following IP addresses must be used by all the data networks in the site:
- - One IP address for all data networks on port 5 on the Azure Stack Edge Pro GPU device.
- - One IP address for all data networks on port 6 on the Azure Stack Edge Pro GPU device.

:::zone-end

### VLANs
For each site you're deploying, do the following.
### Configure ports for local access

:::zone pivot="ase-pro-2"

The following tables contain the ports you need to open for Azure Private 5G Core local access. This includes local management access and control plane signaling.

You must set these up in addition to the [ports required for Azure Stack Edge (ASE)](/azure/databox-online/azure-stack-edge-pro-2-system-requirements#networking-port-requirements).
You must set these up in addition to the [ports required for Azure Stack Edge (A
| SCTP 38412 Inbound | Port 3 (Access network) | Control plane access signaling (N2 interface). </br>Only required for 5G deployments. |
| SCTP 36412 Inbound | Port 3 (Access network) | Control plane access signaling (S1-MME interface). </br>Only required for 4G deployments. |
| UDP 2152 In/Outbound | Port 3 (Access network) | Access network user plane data (N3 interface for 5G, S1-U for 4G). |
-| All IP traffic | Ports 3 and 4 (Data networks) | Data network user plane data (N6 interface for 5G, SGi for 4G). |
+| All IP traffic | Ports 3 and 4 (Data networks) | Data network user plane data (N6 interface for 5G, SGi for 4G). </br> Only required on port 3 if data networks are configured on that port. |
:::zone-end

:::zone pivot="ase-pro-gpu"
-The following tables contain the ports you need to open for Azure Private 5G Core local access. This includes local management access and control plane signaling.
+The following tables contain the ports you need to open for Azure Private 5G Core local access. This includes local management access and control plane signaling.
You must set these up in addition to the [ports required for Azure Stack Edge (ASE)](/azure/databox-online/azure-stack-edge-gpu-system-requirements#networking-port-requirements).
You must set these up in addition to the [ports required for Azure Stack Edge (A
| SCTP 38412 Inbound | Port 5 (Access network) | Control plane access signaling (N2 interface). </br>Only required for 5G deployments. |
| SCTP 36412 Inbound | Port 5 (Access network) | Control plane access signaling (S1-MME interface). </br>Only required for 4G deployments. |
| UDP 2152 In/Outbound | Port 5 (Access network) | Access network user plane data (N3 interface for 5G, S1-U for 4G). |
-| All IP traffic | Ports 5 and 6 (Data networks) | Data network user plane data (N6 interface for 5G, SGi for 4G). |
+| All IP traffic | Ports 5 and 6 (Data networks) | Data network user plane data (N6 interface for 5G, SGi for 4G). </br> Only required on port 5 if data networks are configured on that port. |
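When auditing firewall configuration against a checklist like the table above, it can help to keep the required ports as data and print them programmatically. A minimal sketch, assuming Python and values hand-copied from the table (this is illustration only, not an Azure-provided tool):

```python
# Signaling and user plane ports from the table above, as (protocol,
# port, direction, description) tuples for the GPU device's port 5.
REQUIRED_PORTS = [
    ("SCTP", 38412, "Inbound", "N2 control plane signaling (5G only)"),
    ("SCTP", 36412, "Inbound", "S1-MME control plane signaling (4G only)"),
    ("UDP", 2152, "In/Outbound", "N3/S1-U access network user plane data"),
]

# Print an aligned checklist to compare against the firewall's rule set.
for proto, port, direction, desc in REQUIRED_PORTS:
    print(f"{proto:>4} {port:>5} {direction:<12} {desc}")
```

A list like this could also feed a script that generates firewall rules, but the rule syntax depends on your firewall and is not shown here.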
:::zone-end

#### Port requirements for Azure Stack Edge
This command queries the custom location and will output an OID string. Save th
Do the following for each site you want to add to your private mobile network. Detailed instructions for how to carry out each step are included in the **Detailed instructions** column where applicable.

:::zone pivot="ase-pro-2"

| Step No. | Description | Detailed instructions |
|--|--|--|
| 1. | Complete the Azure Stack Edge Pro 2 deployment checklist.| [Deployment checklist for your Azure Stack Edge Pro 2 device](/azure/databox-online/azure-stack-edge-pro-2-deploy-checklist?pivots=single-node)|
| 2. | Order and prepare your Azure Stack Edge Pro 2 device. | [Tutorial: Prepare to deploy Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-prep.md) |
-| 3. | Rack and cable your Azure Stack Edge Pro device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 2 - management</br>- Port 3 - access network (and optionally, data networks)</br>- Port 4 - data networks| [Tutorial: Install Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-install?pivots=single-node.md) |
+| 3. | Rack and cable your Azure Stack Edge Pro 2 device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 2 - management</br>- Port 3 - access network (and optionally, data networks)</br>- Port 4 - data networks| [Tutorial: Install Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-install?pivots=single-node) |
| 4. | Connect to your Azure Stack Edge Pro 2 device using the local web UI. | [Tutorial: Connect to Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-connect?pivots=single-node) |
| 5. | Configure the network for your Azure Stack Edge Pro 2 device. </br> </br> **Note:** When an ASE is used in an Azure Private 5G Core service, Port 2 is used for management rather than data. The tutorial linked assumes a generic ASE that uses Port 2 for data.</br></br> In addition, you can optionally configure your Azure Stack Edge Pro 2 device to run behind a web proxy. </br></br> Verify the outbound connections from Azure Stack Edge Pro 2 device to the Azure Arc endpoints are opened. </br></br>**Do not** configure virtual switches, virtual networks or compute IPs. | [Tutorial: Configure network for Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy?pivots=single-node)</br></br>[(Optionally) Configure web proxy for Azure Stack Edge Pro](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy?pivots=single-node#configure-web-proxy)</br></br>[Azure Arc Network Requirements](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli%2Cazure-cloud)</br></br>[Azure Arc Agent Network Requirements](/azure/architecture/hybrid/arc-hybrid-kubernetes)|
| 6. | Configure a name, DNS name, and (optionally) time settings. </br></br>**Do not** configure an update. | [Tutorial: Configure the device settings for Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-set-up-device-update-time.md) |
Do the following for each site you want to add to your private mobile network. D
> You must ensure your Azure Stack Edge Pro 2 device is compatible with the Azure Private 5G Core version you plan to install. See [Packet core and Azure Stack Edge (ASE) compatibility](./azure-stack-edge-packet-core-compatibility.md). If you need to upgrade your Azure Stack Edge Pro 2 device, see [Update your Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-gpu-install-update.md?tabs=version-2106-and-later).

:::zone-end

:::zone pivot="ase-pro-gpu"

| Step No. | Description | Detailed instructions |
|--|--|--|
| 1. | Complete the Azure Stack Edge Pro GPU deployment checklist.| [Deployment checklist for your Azure Stack Edge Pro GPU device](/azure/databox-online/azure-stack-edge-gpu-deploy-checklist?pivots=single-node)|
| 2. | Order and prepare your Azure Stack Edge Pro GPU device. | [Tutorial: Prepare to deploy Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-prep.md) |
-| 3. | Rack and cable your Azure Stack Edge Pro device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 5 - access network (and optionally, data networks)</br>- Port 6 - data networks</br></br>Additionally, you must have a port connected to your management network. You can choose any port from 2 to 4. | [Tutorial: Install Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-install?pivots=single-node.md) |
-| 4. | Connect to your Azure Stack Edge Pro device using the local web UI. | [Tutorial: Connect to Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-connect?pivots=single-node.md) |
-| 5. | Configure the network for your Azure Stack Edge Pro device.</br> </br> **Note:** When an ASE is used in an Azure Private 5G Core service, Port 2 is used for management rather than data. The tutorial linked assumes a generic ASE that uses Port 2 for data.</br></br> In addition, you can optionally configure your Azure Stack Edge Pro device to run behind a web proxy. </br></br> Verify the outbound connections from Azure Stack Edge Pro device to the Azure Arc endpoints are opened. </br></br>**Do not** configure virtual switches, virtual networks or compute IPs. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy?pivots=single-node.md)</br></br>[(Optionally) Configure web proxy for Azure Stack Edge Pro](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy?pivots=single-node#configure-web-proxy)</br></br>[Azure Arc Network Requirements](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli%2Cazure-cloud)</br></br>[Azure Arc Agent Network Requirements](/azure/architecture/hybrid/arc-hybrid-kubernetes)|
+| 3. | Rack and cable your Azure Stack Edge Pro GPU device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 5 - access network (and optionally, data networks)</br>- Port 6 - data networks</br></br>Additionally, you must have a port connected to your management network. You can choose any port from 2 to 4. | [Tutorial: Install Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-install?pivots=single-node) |
+| 4. | Connect to your Azure Stack Edge Pro GPU device using the local web UI. | [Tutorial: Connect to Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-connect?pivots=single-node) |
+| 5. | Configure the network for your Azure Stack Edge Pro GPU device.</br> </br> **Note:** When an ASE is used in an Azure Private 5G Core service, Port 2 is used for management rather than data. The tutorial linked assumes a generic ASE that uses Port 2 for data.</br></br> In addition, you can optionally configure your Azure Stack Edge Pro GPU device to run behind a web proxy. </br></br> Verify the outbound connections from Azure Stack Edge Pro GPU device to the Azure Arc endpoints are opened. </br></br>**Do not** configure virtual switches, virtual networks or compute IPs. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy?pivots=single-node)</br></br>[(Optionally) Configure web proxy for Azure Stack Edge Pro](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy?pivots=single-node#configure-web-proxy)</br></br>[Azure Arc Network Requirements](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli%2Cazure-cloud)</br></br>[Azure Arc Agent Network Requirements](/azure/architecture/hybrid/arc-hybrid-kubernetes)|
| 6. | Configure a name, DNS name, and (optionally) time settings. </br></br>**Do not** configure an update. | [Tutorial: Configure the device settings for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time.md) |
| 7. | Configure certificates for your Azure Stack Edge Pro GPU device. After changing the certificates, you may have to reopen the local UI in a new browser window to prevent the old cached certificates from causing problems.| [Tutorial: Configure certificates for your Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-certificates?pivots=single-node) |
| 8. | Activate your Azure Stack Edge Pro GPU device. </br></br>**Do not** follow the section to *Deploy Workloads*. | [Tutorial: Activate Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-activate.md) |
Do the following for each site you want to add to your private mobile network. D
| 10. | Run the diagnostics tests for the Azure Stack Edge Pro GPU device in the local web UI, and verify they all pass. </br></br>You may see a warning about a disconnected, unused port. You should fix the issue if the warning relates to any of these ports:</br></br>- Port 5.</br>- Port 6.</br>- The port you chose to connect to the management network in Step 3.</br></br>For all other ports, you can ignore the warning. </br></br>If there are any errors, resolve them before continuing with the remaining steps. This includes any errors related to invalid gateways on unused ports. In this case, either delete the gateway IP address or set it to a valid gateway for the subnet. | [Run diagnostics, collect logs to troubleshoot Azure Stack Edge device issues](../databox-online/azure-stack-edge-gpu-troubleshoot.md) |

> [!IMPORTANT]
-> You must ensure your Azure Stack Edge Pro GPU device is compatible with the Azure Private 5G Core version you plan to install. See [Packet core and Azure Stack Edge (ASE) compatibility](./azure-stack-edge-packet-core-compatibility.md). If you need to upgrade your Azure Stack Edge Pro device, see [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md?tabs=version-2106-and-later).
+> You must ensure your Azure Stack Edge Pro GPU device is compatible with the Azure Private 5G Core version you plan to install. See [Packet core and Azure Stack Edge (ASE) compatibility](./azure-stack-edge-packet-core-compatibility.md). If you need to upgrade your Azure Stack Edge Pro GPU device, see [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md?tabs=version-2106-and-later).
:::zone-end
private-5g-core Create A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-a-site.md
Last updated 01/27/2022
-zone_pivot_groups: ap5gc-portal-powershell
+zone_pivot_groups: ase-pro-version
# Create a site using the Azure portal
In this step, you'll create the mobile network site resource representing the ph
> If a warning appears about an incompatibility between the selected packet core version and the current Azure Stack Edge version, you'll need to update ASE first. Select **Upgrade ASE** from the warning prompt and follow the instructions in [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md). Once you've finished updating your ASE, go back to the beginning of this step to create the site resource.

- Ensure **AKS-HCI** is selected in the **Platform** field.
-1. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) to fill out the fields in the **Access network** section.
+7. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) to fill out the fields in the **Access network** section.
+ > [!NOTE]
+ > **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site will support 5G UEs) or **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site will support 4G UEs) must match the corresponding virtual network names on port 5 on your Azure Stack Edge Pro GPU device.
+
+8. In the **Attached data networks** section, select **Attach data network**. Choose whether you want to use an existing data network or create a new one, then use the information you collected in [Collect data network values](collect-required-information-for-a-site.md?pivots=ase-pro-gpu#collect-data-network-values) to fill out the fields. Note the following:
+ - **ASE N6 virtual subnet** (if this site will support 5G UEs) or **ASE SGi virtual subnet** (if this site will support 4G UEs) must match the corresponding virtual network name on port 5 or 6 on your Azure Stack Edge Pro GPU device.
+ - If you decided not to configure a DNS server, clear the **Specify DNS addresses for UEs?** checkbox.
+ - If you decided to keep NAPT disabled, ensure you configure your data network router with static routes to the UE IP pools via the appropriate user plane data IP address for the corresponding attached data network.
+
+ :::image type="content" source="media/create-a-site/create-site-attach-data-network.png" alt-text="Screenshot of the Azure portal showing the Attach data network screen.":::
+
+ Once you've finished filling out the fields, select **Attach**.
+7. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) to fill out the fields in the **Access network** section.
> [!NOTE]
- > **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site will support 5G UEs) or **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site will support 4G UEs) must match the corresponding virtual network names on port 5 on your Azure Stack Edge Pro device.
+ > **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site will support 5G UEs) or **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site will support 4G UEs) must match the corresponding virtual network names on port 3 on your Azure Stack Edge Pro 2 device.
-1. In the **Attached data networks** section, select **Attach data network**. Choose whether you want to use an existing data network or create a new one, then use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields. Note the following:
- - **ASE N6 virtual subnet** (if this site will support 5G UEs) or **ASE SGi virtual subnet** (if this site will support 4G UEs) must match the corresponding virtual network name on port 6 on your Azure Stack Edge Pro device.
+8. In the **Attached data networks** section, select **Attach data network**. Choose whether you want to use an existing data network or create a new one, then use the information you collected in [Collect data network values](collect-required-information-for-a-site.md?pivots=ase-pro-2#collect-data-network-values) to fill out the fields. Note the following:
+ - **ASE N6 virtual subnet** (if this site will support 5G UEs) or **ASE SGi virtual subnet** (if this site will support 4G UEs) must match the corresponding virtual network name on port 3 or 4 on your Azure Stack Edge Pro 2 device.
- If you decided not to configure a DNS server, clear the **Specify DNS addresses for UEs?** checkbox.
- If you decided to keep NAPT disabled, ensure you configure your data network router with static routes to the UE IP pools via the appropriate user plane data IP address for the corresponding attached data network.

:::image type="content" source="media/create-a-site/create-site-attach-data-network.png" alt-text="Screenshot of the Azure portal showing the Attach data network screen.":::

Once you've finished filling out the fields, select **Attach**.
-1. Repeat the previous step for each additional data network you want to configure.
-1. If you decided you want to configure diagnostics packet collection or use a user assigned managed identity for HTTPS certificate for this site, select **Next : Identity >**.
+9. Repeat the previous step for each additional data network you want to configure.
+10. If you decided you want to configure diagnostics packet collection or use a user assigned managed identity for HTTPS certificate for this site, select **Next : Identity >**.
If you decided not to configure diagnostics packet collection or use a user assigned managed identity for HTTPS certificates for this site, you can skip this step.
1. Select **+ Add** to configure a user assigned managed identity.
1. In the **Select Managed Identity** side panel:
   - Select the **Subscription** from the dropdown.
   - Select the **Managed identity** from the dropdown.
-1. If you decided you want to provide a custom HTTPS certificate in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values), select **Next : Local access >**. If you decided not to provide a custom HTTPS certificate at this stage, you can skip this step.
+11. If you decided you want to provide a custom HTTPS certificate in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values), select **Next : Local access >**. If you decided not to provide a custom HTTPS certificate at this stage, you can skip this step.
1. Under **Provide custom HTTPS certificate?**, select **Yes**. 1. Use the information you collected in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values) to select a certificate.
-1. In the **Local access** section, set the fields as follows:
+12. In the **Local access** section, set the fields as follows:
:::image type="content" source="media/create-a-site/create-site-local-access-tab.png" alt-text="Screenshot of the Azure portal showing the Local access configuration tab for a site resource."::: - Under **Authentication type**, select the authentication method you decided to use in [Choose the authentication method for local monitoring tools](collect-required-information-for-a-site.md#choose-the-authentication-method-for-local-monitoring-tools). - Under **Provide custom HTTPS certificate?**, select **Yes** or **No** based on whether you decided to provide a custom HTTPS certificate in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values). If you selected **Yes**, use the information you collected in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values) to select a certificate.
-1. Select **Review + create**.
-1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
+13. Select **Review + create**.
+14. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
:::image type="content" source="media/create-a-site/create-site-validation.png" alt-text="Screenshot of the Azure portal showing successful validation of configuration values for a site resource."::: If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged with red dots. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.
-1. Once your configuration has been validated, you can select **Create** to create the site. The Azure portal will display the following confirmation screen when the site has been created.
+15. Once your configuration has been validated, you can select **Create** to create the site. The Azure portal will display the following confirmation screen when the site has been created.
:::image type="content" source="media/site-deployment-complete.png" alt-text="Screenshot of the Azure portal showing the confirmation of a successful deployment of a site.":::
-1. Select **Go to resource group**, and confirm that it contains the following new resources:
+16. Select **Go to resource group**, and confirm that it contains the following new resources:
- A **Mobile Network Site** resource representing the site as a whole. - A **Packet Core Control Plane** resource representing the control plane function of the packet core instance in the site.
If you decided not to configure diagnostics packet collection or use a user assi
:::image type="content" source="media/create-a-site/site-related-resources.png" alt-text="Screenshot of the Azure portal showing a resource group containing a site and its related resources." lightbox="media/create-a-site/site-related-resources.png":::
-1. If you want to assign additional packet cores to the site, for each new packet core resource see [Create additional Packet Core instances for a site using the Azure portal](create-additional-packet-core.md).
+17. If you want to assign additional packet cores to the site, for each new packet core resource see [Create additional Packet Core instances for a site using the Azure portal](create-additional-packet-core.md).
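If you prefer the command line, the resource-group check in the steps above can also be performed with the Azure CLI. This is an optional sketch; the resource group name is a placeholder for your own.

```shell
# List the site's resources and their types (replace the resource group name).
az resource list \
  --resource-group myResourceGroup \
  --query "[].{Name:name, Type:type}" \
  --output table
```

You should see entries such as the **Mobile Network Site** and **Packet Core Control Plane** resources described above.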
## Next steps
private-5g-core Create Additional Packet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-additional-packet-core.md
Last updated 03/21/2023
+zone_pivot_groups: ase-pro-version
# Create additional Packet Core instances for a site using the Azure portal
In this step, you'll create an additional packet core instance for a site in you
> If a warning appears about an incompatibility between the selected packet core version and the current Azure Stack Edge version, you'll need to update ASE first. Select **Upgrade ASE** from the warning prompt and follow the instructions in [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md). Once you've finished updating your ASE, go back to the beginning of this step to create the packet core resource. - Ensure **AKS-HCI** is selected in the **Platform** field.
-1. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) for the site to fill out the fields in the **Access network** section.
+9. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) for the site to fill out the fields in the **Access network** section.
> [!NOTE]
- > **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site supports 5G UEs) or **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site supports 4G UEs) must match the corresponding virtual network names on port 5 on your Azure Stack Edge Pro device.
+ > **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site supports 5G UEs) or **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site supports 4G UEs) must match the corresponding virtual network names on port 5 on your Azure Stack Edge Pro GPU device.
-1. In the **Attached data networks** section, select **Attach data network**. Select the existing data network you used for the site then use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields. Note the following:
+9. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) for the site to fill out the fields in the **Access network** section.
+ > [!NOTE]
+ > **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site supports 5G UEs) or **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site supports 4G UEs) must match the corresponding virtual network names on port 3 on your Azure Stack Edge Pro 2 device.
+
+10. In the **Attached data networks** section, select **Attach data network**. Select the existing data network you used for the site then use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields. Note the following:
- **ASE N6 virtual subnet** (if this site supports 5G UEs) or **ASE SGi virtual subnet** (if this site supports 4G UEs) must match the corresponding virtual network name on port 6 on your Azure Stack Edge Pro device. - If you decided not to configure a DNS server, clear the **Specify DNS addresses for UEs?** checkbox. - If you decided to keep NAPT disabled, ensure you configure your data network router with static routes to the UE IP pools via the appropriate user plane data IP address for the corresponding attached data network. Once you've finished filling out the fields, select **Attach**.
-1. Repeat the previous step for each additional data network configured on the site.
-1. If you decided to configure diagnostics packet collection or use a user assigned managed identity for HTTPS certificate for this site, select **Next : Identity >**.
+11. Repeat the previous step for each additional data network configured on the site.
+12. If you decided to configure diagnostics packet collection or use a user assigned managed identity for HTTPS certificate for this site, select **Next : Identity >**.
If you decided not to configure diagnostics packet collection or use a user assigned managed identity for HTTPS certificates for this site, you can skip this step. 1. Select **+ Add** to configure a user assigned managed identity. 1. In the **Select Managed Identity** side panel: - Select the **Subscription** from the dropdown. - Select the **Managed identity** from the dropdown.
-1. If you decided you want to provide a custom HTTPS certificate in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values), select **Next : Local access >**. If you decided not to provide a custom HTTPS certificate for monitoring this site, you can skip this step.
+13. If you decided you want to provide a custom HTTPS certificate in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values), select **Next : Local access >**. If you decided not to provide a custom HTTPS certificate for monitoring this site, you can skip this step.
1. Under **Provide custom HTTPS certificate?**, select **Yes**. 1. Use the information you collected in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values) to select a certificate.
-1. In the **Local access** section, set the fields as follows:
+14. In the **Local access** section, set the fields as follows:
- Under **Authentication type**, select the authentication method you decided to use in [Choose the authentication method for local monitoring tools](collect-required-information-for-a-site.md#choose-the-authentication-method-for-local-monitoring-tools). - Under **Provide custom HTTPS certificate?**, select **Yes** or **No** based on whether you decided to provide a custom HTTPS certificate in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values). If you selected **Yes**, use the information you collected in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values) to select a certificate.
-1. Select **Review + create**.
-1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
+15. Select **Review + create**.
+16. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged with red dots. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.
-1. Once your configuration has been validated, you can select **Create** to create the packet core instance. The Azure portal will display a confirmation screen when the packet core instance has been created.
+17. Once your configuration has been validated, you can select **Create** to create the packet core instance. The Azure portal will display a confirmation screen when the packet core instance has been created.
-1. Return to the **Site** overview, and confirm that it contains the new packet core instance.
+18. Return to the **Site** overview, and confirm that it contains the new packet core instance.
## Next steps
private-5g-core Create Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-site-arm-template.md
Last updated 03/16/2022
+zone_pivot_groups: ase-pro-version
# Create a site using an ARM template
If your environment meets the prerequisites and you're familiar with using ARM t
## Prerequisites -- Carry out the steps in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md) for your new site.
+- Carry out the steps in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-gpu) for your new site.
- Identify the names of the interfaces corresponding to ports 5 and 6 on your Azure Stack Edge Pro device.-- Collect all of the information in [Collect the required information for a site](collect-required-information-for-a-site.md).
+- Identify the names of the interfaces corresponding to ports 3 and 4 on your Azure Stack Edge Pro device.
+- Collect all of the information in [Collect the required information for a site](collect-required-information-for-a-site.md?pivots=ase-pro-2).
- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope. - If the new site will support 4G user equipment (UEs), you must have [created a network slice](create-manage-network-slices.md#create-a-network-slice) with slice/service type (SST) value of 1 and an empty slice differentiator (SD).
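To check the sign-in prerequisite above from the command line, you can inspect the active subscription with the Azure CLI. A minimal sketch (the query fields are just a convenience):

```shell
# Show the subscription the Azure CLI is currently signed in to.
az account show --query "{Name:name, Id:id}" --output table
```

Confirm the output matches the subscription you used to create your private mobile network.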
Four Azure resources are defined in the template.
[![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.mobilenetwork%2Fmobilenetwork-create-new-site%2Fazuredeploy.json) 2. Select or enter the following values, using the information you retrieved in [Prerequisites](#prerequisites). | Field | Value |
Four Azure resources are defined in the template.
| **Existing Data Network Name** | Enter the name of the data network. This value must match the name you used when creating the data network. | | **Site Name** | Enter a name for your site.| | **Azure Stack Edge Device** | Enter the resource ID of the Azure Stack Edge resource in the site. |
- | **Control Plane Access Interface Name** | Enter the virtual network name on port 5 on your Azure Stack Edge Pro device corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. |
+ | **Control Plane Access Interface Name** | Enter the virtual network name on port 5 on your Azure Stack Edge Pro GPU device corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. |
| **Control Plane Access Ip Address** | Enter the IP address for the control plane interface on the access network. |
- | **User Plane Access Interface Name** | Enter the virtual network name on port 5 on your Azure Stack Edge Pro device corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface. |
- | **User Plane Data Interface Name** | Enter the virtual network name on port 6 on your Azure Stack Edge Pro device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface. |
+ | **User Plane Access Interface Name** | Enter the virtual network name on port 5 on your Azure Stack Edge Pro GPU device corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface. |
+ | **User Plane Data Interface Name** | Enter the virtual network name on port 6 on your Azure Stack Edge Pro GPU device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface. |
|**User Equipment Address Pool Prefix** | Enter the network address of the subnet from which dynamic IP addresses must be allocated to UEs in CIDR notation. You can omit this if you don't want to support dynamic IP address allocation. | |**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to UEs in CIDR notation. You can omit this if you don't want to support static IP address allocation. | | **Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. |
Four Azure resources are defined in the template.
| **Dns Addresses** | Enter the DNS server addresses. You should only omit this if you don't need the UEs to perform DNS resolution, or if all UEs in the network will use their own locally configured DNS servers. | | **Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. |
+2. Select or enter the following values, using the information you retrieved in [Prerequisites](#prerequisites).
+
+ | Field | Value |
+ |--|--|
+ | **Subscription** | Select the Azure subscription you used to create your private mobile network. |
+ | **Resource group** | Select the resource group containing the mobile network resource representing your private mobile network. |
+ | **Region** | Select the region in which you deployed the private mobile network. |
+ | **Location** | Enter the [code name](region-code-names.md) of the region in which you deployed the private mobile network. |
+ | **Existing Mobile Network Name** | Enter the name of the mobile network resource representing your private mobile network. |
+ | **Existing Data Network Name** | Enter the name of the data network. This value must match the name you used when creating the data network. |
+ | **Site Name** | Enter a name for your site.|
+ | **Azure Stack Edge Device** | Enter the resource ID of the Azure Stack Edge resource in the site. |
+ | **Control Plane Access Interface Name** | Enter the virtual network name on port 3 on your Azure Stack Edge Pro 2 device corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. |
+ | **Control Plane Access Ip Address** | Enter the IP address for the control plane interface on the access network. |
+ | **User Plane Access Interface Name** | Enter the virtual network name on port 3 on your Azure Stack Edge Pro 2 device corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface. |
+ | **User Plane Data Interface Name** | Enter the virtual network name on port 4 on your Azure Stack Edge Pro 2 device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface. |
+ |**User Equipment Address Pool Prefix** | Enter the network address of the subnet from which dynamic IP addresses must be allocated to UEs in CIDR notation. You can omit this if you don't want to support dynamic IP address allocation. |
+ |**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to UEs in CIDR notation. You can omit this if you don't want to support static IP address allocation. |
+ | **Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. |
+ | **Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network. |
+ | **Dns Addresses** | Enter the DNS server addresses. You should only omit this if you don't need the UEs to perform DNS resolution, or if all UEs in the network will use their own locally configured DNS servers. |
+ | **Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. |
++ 3. Select **Review + create**. 4. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
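As an alternative to the portal's **Deploy to Azure** flow, the same quickstart template can be deployed from the Azure CLI. This is a hedged sketch: the deployment name, resource group name, and local parameters file (holding the values you collected in the table above) are all placeholders.

```shell
# Deploy the create-new-site quickstart template (names below are placeholders).
az deployment group create \
  --name createSite \
  --resource-group myResourceGroup \
  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.mobilenetwork/mobilenetwork-create-new-site/azuredeploy.json" \
  --parameters @azuredeploy.parameters.json
```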
private-5g-core Deploy Private Mobile Network With Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md
tags: azure-resource-manager
+zone_pivot_groups: ase-pro-version
Last updated 03/23/2022
The following Azure resources are defined in the template.
[![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.mobilenetwork%2Fmobilenetwork-create-full-5gc-deployment%2Fazuredeploy.json)
-1. Select or enter the following values, using the information you retrieved in [Prerequisites](#prerequisites).
-
+2. Select or enter the following values, using the information you retrieved in [Prerequisites](#prerequisites).
|Field |Value | |||
The following Azure resources are defined in the template.
| **Dns Addresses** | Enter the DNS server addresses. You should only omit this if you don't need the UEs to perform DNS resolution, or if all UEs in the network will use their own locally configured DNS servers. | |**Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site.|
-2. Select **Review + create**.
-3. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
+
+2. Select or enter the following values, using the information you retrieved in [Prerequisites](#prerequisites).
+
+ |Field |Value |
+ |||
+ |**Subscription** | Select the Azure subscription you want to use to create your private mobile network. |
+ |**Resource group** | Create a new resource group. |
+ |**Region** | Select the region in which you're deploying the private mobile network. |
+ |**Location** | Leave this field unchanged. |
+ |**Mobile Network Name** | Enter a name for the private mobile network. |
+ |**Mobile Country Code** | Enter the mobile country code for the private mobile network. |
+ |**Mobile Network Code** | Enter the mobile network code for the private mobile network. |
+ |**Site Name** | Enter a name for your site. |
+ |**Service Name** | Leave this field unchanged. |
+ |**Sim Policy Name** | Leave this field unchanged. |
+ |**Slice Name** | Leave this field unchanged. |
+ |**Sim Group Name** | If you want to provision SIMs, enter the name of the SIM group to which the SIMs will be added. Otherwise, leave this field blank. |
+ |**Sim Resources** | If you want to provision SIMs, paste in the contents of the JSON file containing your SIM information. Otherwise, leave this field unchanged. |
+ | **Azure Stack Edge Device** | Enter the resource ID of the Azure Stack Edge resource in the site. |
+ |**Control Plane Access Interface Name** | Enter the virtual network name on port 3 on your Azure Stack Edge Pro device corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. |
+  |**Control Plane Access Ip Address** | Enter the IP address for the control plane interface on the access network.<br> Note: Ensure that the N2 IP address specified here matches the N2 address configured on the ASE portal. |
+ |**User Plane Access Interface Name** | Enter the virtual network name on port 3 on your Azure Stack Edge Pro device corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface. |
+ |**User Plane Data Interface Name** | Enter the virtual network name on port 4 on your Azure Stack Edge Pro device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface. |
+ |**User Equipment Address Pool Prefix** | Enter the network address of the subnet from which dynamic IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support dynamic IP address allocation. |
+ |**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support static IP address allocation. |
+ |**Data Network Name** | Enter the name of the data network. |
+ |**Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. |
+ |**Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network.|
+ | **Dns Addresses** | Enter the DNS server addresses. You should only omit this if you don't need the UEs to perform DNS resolution, or if all UEs in the network will use their own locally configured DNS servers. |
+ |**Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site.|
++
+3. Select **Review + create**.
+4. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.
-4. Once your configuration has been validated, you can select **Create** to deploy the resources. The Azure portal will display a confirmation screen when the deployment is complete.
+5. Once your configuration has been validated, you can select **Create** to deploy the resources. The Azure portal will display a confirmation screen when the deployment is complete.
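The validation step above can also be performed from the Azure CLI before committing to a deployment. A sketch, assuming a placeholder resource group and a local parameters file containing the values from the table:

```shell
# Validate the full-deployment template without creating any resources.
az deployment group validate \
  --resource-group myResourceGroup \
  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.mobilenetwork/mobilenetwork-create-full-5gc-deployment/azuredeploy.json" \
  --parameters @azuredeploy.parameters.json
```

If validation fails, the command reports error details similar to those the portal surfaces on the flagged tabs.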
## Review deployed resources
private-5g-core Private Mobile Network Design Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/private-mobile-network-design-requirements.md
Last updated 03/30/2023
+zone_pivot_groups: ase-pro-version
# Private mobile network design requirements
This section outlines some decisions you should consider when designing your net
#### Design considerations
-When deployed on Azure Stack Edge (ASE), AP5GC uses physical port 5 for access signaling and data (5G N2 and N3 reference points/4G S1 and S1-U reference points) and port 6 for core data (5G N6/4G SGi reference points).
+When deployed on Azure Stack Edge Pro GPU (ASE), AP5GC uses physical port 5 for access signaling and data (5G N2 and N3 reference points/4G S1 and S1-U reference points) and port 6 for core data (5G N6/4G SGi reference points). If more than six data networks are configured, port 5 is also used for core data.
AP5GC supports deployments with or without layer 3 routers on ports 5 and 6. This is useful for avoiding extra hardware at smaller edge sites. - It is possible to connect ASE port 5 to RAN nodes directly (back-to-back) or via a layer 2 switch. When using this topology, you must configure the eNodeB/gNodeB address as the default gateway on the ASE network interface. - Similarly, it is possible to connect ASE port 6 to your core network via a layer 2 switch. When using this topology, you must set up an application or an arbitrary address on the subnet as gateway on the ASE side. - Alternatively, you can combine these approaches. For example, you could use a router on ASE port 6 with a flat layer 2 network on ASE port 5. If a layer 3 router is present in the local network, you must configure it to match the ASE's configuration.
+When deployed on Azure Stack Edge Pro 2 (ASE Pro 2), AP5GC uses physical port 3 for access signaling and data (5G N2 and N3 reference points/4G S1 and S1-U reference points) and port 4 for core data (5G N6/4G SGi reference points). If more than six data networks are configured, port 3 is also used for core data.
+
+AP5GC supports deployments with or without layer 3 routers on ports 3 and 4. This is useful for avoiding extra hardware at smaller edge sites.
+
+- It is possible to connect ASE port 3 to RAN nodes directly (back-to-back) or via a layer 2 switch. When using this topology, you must configure the eNodeB/gNodeB address as the default gateway on the ASE network interface.
+- Similarly, it is possible to connect ASE port 4 to your core network via a layer 2 switch. When using this topology, you must set up an application or an arbitrary address on the subnet as gateway on the ASE side.
+- Alternatively, you can combine these approaches. For example, you could use a router on ASE port 4 with a flat layer 2 network on ASE port 3. If a layer 3 router is present in the local network, you must configure it to match the ASE's configuration.
Unless your packet core has Network Address Translation (NAT) enabled, a local layer 3 network device must be configured with static routes to the UE IP pools via the appropriate N6 IP address for the corresponding attached data network.

#### Sample network topologies

There are multiple ways to set up your network for use with AP5GC. The exact setup varies depending on your needs and hardware. This section provides some sample network topologies on ASE Pro GPU hardware.

- Layer 3 network with N6 Network Address Translation (NAT)
There are multiple ways to set up your network for use with AP5GC. The exact set
- For example, you could configure separate VLANs for management, access and data traffic, or a separate VLAN for each attached data network. - VLANs must be configured on the local layer 2 or layer 3 network equipment. Multiple VLANs will be carried on a single link from ASE port 5 (access network) and/or 6 (core network), so you must configure each of those links as a VLAN trunk. :::image type="content" source="media/private-mobile-network-design-requirements/layer-3-network-with-vlans.png" alt-text="Diagram of layer 3 network topology with V L A N s." lightbox="media/private-mobile-network-design-requirements/layer-3-network-with-vlans.png":::+
+- Layer 3 network with 7-10 data networks
- If you want to deploy more than six VLAN-separated data networks, the additional (up to four) data networks must be deployed on ASE port 5. This requires one shared switch or router that carries both access and core traffic. VLAN tags can be assigned as required to N2, N3 and each of the N6 data networks.
+ - No more than six data networks can be configured on the same port.
+ - For optimal performance, the data networks with the highest expected load should be configured on port 6.
    :::image type="content" source="media/private-mobile-network-design-requirements/layer-3-network-with-additional-dns.png" alt-text="Diagram of layer 3 network topology with 10 data networks." lightbox="media/private-mobile-network-design-requirements/layer-3-network-with-additional-dns.png":::
+There are multiple ways to set up your network for use with AP5GC. The exact setup varies depending on your needs and hardware. This section provides some sample network topologies on ASE Pro 2 hardware.
+
+- Layer 3 network with N6 Network Address Translation (NAT)
+ This network topology has your ASE connected to a layer 2 device that provides connectivity to the mobile network core and access gateways (routers connecting your ASE to your data and access networks respectively). This topology supports up to six data networks. This solution is commonly used because it simplifies layer 3 routing.
+ :::image type="content" source="media/private-mobile-network-design-requirements/layer-3-network-with-n6-nat.png" alt-text="Diagram of a layer 3 network with N6 Network Address Translation (N A T)." lightbox="media/private-mobile-network-design-requirements/layer-3-network-with-n6-nat.png":::
+
+- Layer 3 network without Network Address Translation (NAT)
+  This network topology is a similar solution, but UE IP address ranges must be configured as static routes in the data network router with the N6 IP address as the next hop address. As with the previous solution, this topology supports up to six data networks.
+ :::image type="content" source="media/private-mobile-network-design-requirements/layer-3-network-without-n6-nat.png" alt-text="Diagram of a layer 3 network without Network Address Translation (N A T)." lightbox="media/private-mobile-network-design-requirements/layer-3-network-without-n6-nat.png":::
+
+- Flat layer 2 network
+ The packet core does not require layer 3 routers or any router-like functionality. An alternative topology could forgo the use of layer 3 gateway routers entirely and instead construct a layer 2 network in which the ASE is in the same subnet as the data and access networks. This network topology can be a cheaper alternative when you don't require layer 3 routing. This requires Network Address Port Translation (NAPT) to be enabled on the packet core.
+ :::image type="content" source="media/private-mobile-network-design-requirements/layer-2-network.png" alt-text="Diagram of a layer 2 network." lightbox="media/private-mobile-network-design-requirements/layer-2-network.png":::
+
+- Layer 3 network with multiple data networks
+ - AP5GC can support up to ten attached data networks, each with its own configuration for Domain Name System (DNS), UE IP address pools, N6 IP configuration, and NAT. The operator can provision UEs as subscribed in one or more data networks and apply data network-specific policy and quality of service (QoS) configuration.
+ - This topology requires that the N6 interface is split into one subnet for each data network or one subnet for all data networks. This option therefore requires careful planning and configuration to prevent overlapping data network IP ranges or UE IP ranges.
+
+ :::image type="content" source="media/private-mobile-network-design-requirements/layer-3-network-with-multiple-dns-azure-stack-edge-2.png" alt-text="Diagram of layer 3 network topology with multiple data networks." lightbox="media/private-mobile-network-design-requirements/layer-3-network-with-multiple-dns-azure-stack-edge-2.png":::
+
+- Layer 3 network with VLAN and physical access/core separation
+ - You can also separate ASE traffic into VLANs, whether or not you choose to add layer 3 gateways to your network. There are multiple benefits to segmenting traffic into separate VLANs, including more flexible network management and increased security.
+ - For example, you could configure separate VLANs for management, access and data traffic, or a separate VLAN for each attached data network.
+ - VLANs must be configured on the local layer 2 or layer 3 network equipment. Multiple VLANs will be carried on a single link from ASE port 3 (access network) and/or 4 (core network), so you must configure each of those links as a VLAN trunk.
+
+ :::image type="content" source="media/private-mobile-network-design-requirements/layer-3-network-with-vlans-azure-stack-edge-2.png" alt-text="Diagram of layer 3 network topology with V L A N s." lightbox="media/private-mobile-network-design-requirements/layer-3-network-with-vlans-azure-stack-edge-2.png":::
+
+- Layer 3 network with 7-10 data networks
+ - If you want to deploy more than six VLAN-separated data networks, the additional (up to four) data networks must be deployed on ASE port 3. This requires one shared switch or router that carries both access and core traffic. VLAN tags can be assigned as required to N2, N3 and each of the N6 data networks.
+ - No more than six data networks can be configured on the same port.
+ - For optimal performance, the data networks with the highest expected load should be configured on port 4.
+
+ :::image type="content" source="media/private-mobile-network-design-requirements/layer-3-network-with-additional-dns-azure-stack-edge-2.png" alt-text="Diagram of layer 3 network topology with 10 data networks." lightbox="media/private-mobile-network-design-requirements/layer-3-network-with-vlans-azure-stack-edge-2.png":::
### Subnets and IP addresses
private-5g-core Support Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/support-lifetime.md
Previously updated : 06/19/2023 Last updated : 09/21/2023
The following table shows the support status for different Packet Core releases.
| Release | Support Status |
|--|--|
-| AP5GC 2307 | Supported until AP5GC 2309 released |
-| AP5GC 2306 | Supported until AP5GC 2308 released |
-| AP5GC 2305 and earlier | Out of Support |
+| AP5GC 2308 | Supported until AP5GC 2311 released |
+| AP5GC 2307 | Supported until AP5GC 2310 released |
+| AP5GC 2306 and earlier | Out of Support |
private-5g-core Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/whats-new.md
Previously updated : 08/10/2023 Last updated : 09/21/2023

# What's new in Azure Private 5G Core?
To help you stay up to date with the latest developments, this article covers:
This page is updated regularly with the latest developments in Azure Private 5G Core.
+## September 2023
+### Packet core 2308
+
+**Type:** New release
+
+**Date available:** September 21, 2023
+
+The 2308 release for the Azure Private 5G Core packet core is now available. For more information, see [Azure Private 5G Core 2308 release notes](azure-private-5g-core-release-notes-2308.md).
+
+### 10 DNs
+
+**Type:** New feature
+
+**Date available:** September 07, 2023
+
+In this release, the number of supported data networks (DNs) increases from three to ten, including with layer 2 traffic separation. If more than six DNs are required, a shared switch for access and core traffic is needed.
+
+### Default MTU values
+
+**Type:** New feature
+
+**Date available:** September 07, 2023
+
+In this release, the default MTU values are changed as follows:
+- UE MTU: 1440 (was 1300)
+- Access MTU: 1500 (unchanged)
+- Data MTU: 1440 (was 1500)
+
+Customers upgrading to 2308 see a change in the MTU values on their packet core.
+
+When the UE MTU is set to any valid value (see the API specification), the other MTUs are set to:
+- Access MTU: UE MTU + 60
+- Data MTU: UE MTU
+
+Rollbacks to Packet Core versions earlier than 2308 are not possible if the UE MTU field is changed following an upgrade.
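The MTU relationships above can be expressed as a small helper. This is a sketch of the documented rule only (access MTU = UE MTU + 60, data MTU = UE MTU); the interpretation that the extra 60 bytes leave headroom for tunnel encapsulation on the access network is an assumption, not taken from this page.

```python
def derived_mtus(ue_mtu: int) -> dict:
    """Derive access and data MTUs from the UE MTU per the 2308 rule.

    Access MTU is UE MTU + 60 (assumed to leave headroom for tunnel
    encapsulation on the access network); data MTU matches the UE MTU.
    """
    return {"ue": ue_mtu, "access": ue_mtu + 60, "data": ue_mtu}

# The 2308 default UE MTU of 1440 yields the documented defaults.
print(derived_mtus(1440))  # -> {'ue': 1440, 'access': 1500, 'data': 1440}
```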
+
+### MTU Interop setting
+
+**Type:** New feature
+
+**Date available:** September 07, 2023
+
+In this release, the MTU Interop setting is deprecated and cannot be set for Packet Core versions 2308 and above.
+## July 2023

### Packet core 2307
This page is updated regularly with the latest developments in Azure Private 5G
The 2307 release for the Azure Private 5G Core packet core is now available. For more information, see [Azure Private 5G Core 2307 release notes](azure-private-5g-core-release-notes-2307.md).
+### UE usage tracking
+
+**Type:** New feature
+
+**Date available:** July 31, 2023
+
+The UE usage tracking messages in Azure Event Hubs are now encoded in AVRO file container format, which enables you to consume these events via Power BI or Azure Stream Analytics (ASA). If you want to enable this feature for your deployment, contact your support representative.
+
+### Unknown User cause code mapping in 4G deployments
+
+**Type:** New feature
+
+**Date available:** July 31, 2023
+
+In this release, the 4G NAS EMM cause code for "unknown user" (subscriber not provisioned on AP5GC) changes to "no-suitable-cells-in-ta-15" by default. This provides better interworking in scenarios where a single PLMN is used for multiple, independent mobile networks.
### 2023-06-01 API

**Type:** New release
reliability Availability Zones Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-overview.md
Title: What are Azure regions and availability zones?
-description: Learn about regions and availability zones and how they work to help you achieve reliability
+ Title: What are Azure availability zones?
+description: Learn about availability zones and how they work to help you achieve reliability
Previously updated : 10/25/2022 Last updated : 09/20/2023
-# What are Azure regions and availability zones?
+# What are availability zones?
-Azure regions and availability zones are designed to help you achieve reliability for your business-critical workloads. Azure maintains multiple geographies. These discrete demarcations define disaster recovery and data residency boundaries across one or multiple Azure regions. Maintaining many regions ensures customers are supported across the world.
+Many Azure regions provide *availability zones*, which are separated groups of datacenters within a region. Availability zones are close enough to have low-latency connections to other availability zones. They're connected by a high-performance network with a round-trip latency of less than 2ms. However, availability zones are far enough apart to reduce the likelihood that more than one will be affected by local outages or weather. Availability zones have independent power, cooling, and networking infrastructure. They're designed so that if one zone experiences an outage, then regional services, capacity, and high availability are supported by the remaining zones. They help your data stay synchronized and accessible when things go wrong.
-## Regions
+Datacenter locations are selected by using rigorous vulnerability risk assessment criteria. This process identifies all significant datacenter-specific risks and considers shared risks between availability zones.
+
+The following diagram shows several example Azure regions. Regions 1 and 2 support availability zones.
-Each Azure region features datacenters deployed within a latency-defined perimeter. They're connected through a dedicated regional low-latency network. This design ensures that Azure services within any region offer the best possible performance and security.
To see which regions support availability zones, see [Azure regions with availability zone support](availability-zones-service-support.md#azure-regions-with-availability-zone-support).
-## Availability zones
+## Zonal and zone-redundant services
-Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved because of redundancy and logical isolation of Azure services. To ensure resiliency, a minimum of three separate availability zones are present in all availability zone-enabled regions.
+When you deploy into an Azure region that contains availability zones, you can use multiple availability zones together. By using multiple availability zones, you can keep separate copies of your application and data within separate physical datacenters in a large metropolitan area.
-Azure availability zones are connected by a high-performance network with a round-trip latency of less than 2ms. They help your data stay synchronized and accessible when things go wrong. Each zone is composed of one or more datacenters equipped with independent power, cooling, and networking infrastructure. Availability zones are designed so that if one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones.
+There are two ways that Azure services use availability zones:
-![Image showing physically separate availability zone locations within an Azure region.](media/availability-zones.png)
+- **Zonal** resources are pinned to a specific availability zone. You can combine multiple zonal deployments across different zones to meet high reliability requirements. You're responsible for managing data replication and distributing requests across zones. If an outage occurs in a single availability zone, you're responsible for failover to another availability zone.
-Datacenter locations are selected by using rigorous vulnerability risk assessment criteria. This process identifies all significant datacenter-specific risks and considers shared risks between availability zones.
+- **Zone-redundant** resources are spread across multiple availability zones. Microsoft manages spreading requests across zones and the replication of data across zones. If an outage occurs in a single availability zone, Microsoft manages failover automatically.
+
+Azure services support one or both of these approaches. Platform as a service (PaaS) services typically support zone-redundant deployments. Infrastructure as a service (IaaS) services typically support zonal deployments. For more information about how Azure services work with availability zones, see [Azure regions with availability zone support](availability-zones-service-support.md#azure-regions-with-availability-zone-support).
+
+For information on service-specific reliability support using availability zones, as well as recommended disaster recovery guidance, see [Reliability guidance overview](./reliability-guidance-overview.md).
++
+## Physical and logical availability zones
+
+Each datacenter is assigned to a physical zone. Physical zones are mapped to logical zones in your Azure subscription, and different subscriptions might have a different mapping order. Azure subscriptions are automatically assigned their mapping at the time the subscription is created.
+
+To understand the mapping between logical and physical zones for your subscription, use the [List Locations Azure Resource Manager API](/rest/api/resources/subscriptions/list-locations). You can use the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/what-is-azure-powershell) to retrieve the information from the API.
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az rest --method get --uri '/subscriptions/{subscriptionId}/locations?api-version=2022-12-01' --query 'value'
+```
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$subscriptionId = (Get-AzContext).Subscription.ID
+$response = Invoke-AzRestMethod -Method GET -Path "/subscriptions/$subscriptionId/locations?api-version=2022-12-01"
+$locations = ($response.Content | ConvertFrom-Json).value
+```
++
+## Availability zones and Azure updates
+
+Microsoft aims to deploy updates to Azure services to a single availability zone at a time. This approach reduces the impact that updates might have on an active workload, because the workload can continue to run in other zones while the update is in process. You need to run your workload across multiple zones to take advantage of this benefit. For more information about how Azure deploys updates, see [Advancing safe deployment practices](https://azure.microsoft.com/blog/advancing-safe-deployment-practices/).
++
+## Paired and unpaired regions
+
+Many regions also have a [*paired region*](./cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies). Paired regions support certain types of multi-region deployment approaches. Some newer regions have [multiple availability zones and don't have a paired region](./cross-region-replication-azure.md#regions-with-availability-zones-and-no-region-pair). You can still deploy multi-region solutions into these regions, but the approaches you use might be different.
-With availability zones, you can design and operate applications and databases that automatically transition between zones without interruption. Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple datacenter infrastructures.
+## Shared responsibility model
-Each data center is assigned to a physical zone. Physical zones are mapped to logical zones in your Azure subscription. Azure subscriptions are automatically assigned this mapping at the time a subscription is created. You can use the dedicated ARM API called: [checkZonePeers](/rest/api/resources/subscriptions/check-zone-peers) to compare zone mapping for resilient solutions that span across multiple subscriptions.
+The [shared responsibility model](./overview.md#shared-responsibility) describes how responsibilities are divided between the cloud provider (Microsoft) and you. Depending on the type of services you use, you might take on more or less responsibility for operating the service.
-You can design resilient solutions by using Azure services that use availability zones. Co-locate your compute, storage, networking, and data resources across an availability zone, and replicate this arrangement in other availability zones.
+Microsoft provides availability zones and regions to give you flexibility in how you design your solution to meet your requirements. When you use managed services, Microsoft takes on more of the management responsibilities for your resources, which might even include data replication, failover, failback, and other tasks related to operating a distributed system.
-Azure Services that support availability zones are designed to provide the right level of resiliency and flexibility for their resources. The resources can be configured in two ways. They can be either zone redundant, with automatic replication across zones, or zonal (zone aligned to a specific zone). You can combine these approaches across different resources.
+## Availability zone architectural guidance
-Some organizations require high availability of availability zones and protection from large-scale phenomena and regional disasters. Azure regions are designed to offer protection against localized disasters with availability zones and protection from regional or large geography disasters with disaster recovery, by making use of another region. To learn more about business continuity, disaster recovery, and cross-region replication, see [Cross-region replication in Azure](cross-region-replication-azure.md).
+To achieve more reliable workloads:
-![Image showing availability zones that protect against localized disasters and regional or large geography disasters by using another region.](media/availability-zones-region-geography.png)
+- Production workloads should be configured to use availability zones if the region they are in supports availability zones.
+- For mission-critical workloads, you should consider a solution that is *both* multi-region and multi-zone.
-To see which services support availability zones, see [Azure regions with availability zone support](availability-zones-service-support.md#azure-regions-with-availability-zone-support).
+For more detailed information on how to use regions and availability zones in a solution architecture, see [Recommendations for using availability zones and regions](/azure/well-architected/resiliency/regions-availability-zones).
## Next steps
-> [!div class="nextstepaction"]
-> [Azure services and regions with availability zones](availability-zones-service-support.md)
+- [Azure services and regions with availability zones](availability-zones-service-support.md)
-> [!div class="nextstepaction"]
-> [Availability zone migration guidance](availability-zones-migration-overview.md)
+- [Availability zone migration guidance](availability-zones-migration-overview.md)
-> [!div class="nextstepaction"]
-> [Availability of service by category](availability-service-by-category.md)
+- [Availability of service by category](availability-service-by-category.md)
-> [!div class="nextstepaction"]
-> [Microsoft commitment to expand Azure availability zones to more regions](https://azure.microsoft.com/blog/our-commitment-to-expand-azure-availability-zones-to-more-regions/)
+- [Microsoft commitment to expand Azure availability zones to more regions](https://azure.microsoft.com/blog/our-commitment-to-expand-azure-availability-zones-to-more-regions/)
-> [!div class="nextstepaction"]
-> [Build solutions for high availability using availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability)
+- [Build solutions for high availability using availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability)
reliability Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/glossary.md
To better understand regions and availability zones in Azure, it helps to understand key terms or concepts.
-| Term or concept | Description |
-| | |
-| region | A set of datacenters deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network. |
-| geography | An area of the world that contains at least one Azure region. Geographies define a discrete market that preserves data-residency and compliance boundaries. Geographies allow customers with specific data-residency and compliance needs to keep their data and applications close. Geographies are fault tolerant to withstand complete region failure through their connection to our dedicated high-capacity networking infrastructure. |
-| availability zone | Unique physical locations within a region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. |
-| recommended region | A region that provides the broadest range of service capabilities and is designed to support availability zones now, or in the future. These regions are designated in the Azure portal as **Recommended**. |
-| alternate (other) region | A region that extends Azure's footprint within a data-residency boundary where a recommended region also exists. Alternate regions help to optimize latency and provide a second region for disaster recovery needs. They aren't designed to support availability zones, although Azure conducts regular assessment of these regions to determine if they should become recommended regions. These regions are designated in the Azure portal as **Other**. |
-| cross-region replication (formerly paired region) | A reliability strategy and implementation that combines high availability of availability zones with protection from region-wide incidents to meet both disaster recovery and business continuity needs. |
-| foundational service | A core Azure service that's available in all regions when the region is generally available. |
-| mainstream service | An Azure service that's available in all recommended regions within 90 days of the region general availability or demand-driven availability in alternate regions. |
-| strategic service | An Azure service that's demand-driven availability across regions backed by customized/specialized hardware. |
-| regional service | An Azure service that's deployed regionally and enables the customer to specify the region into which the service will be deployed. For a complete list, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all). |
-| non-regional service | An Azure service for which there's no dependency on a specific Azure region. Non-regional services are deployed to two or more regions. If there's a regional failure, the instance of the service in another region continues servicing customers. For a complete list, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all). |
-| zonal service | An Azure service that supports availability zones, and that enables a resource to be deployed to a specific, self-selected availability zone to achieve more stringent latency or performance requirements. |
-| zone-redundant service | An Azure service that supports availability zones, and that enables resources to be replicated or distributed across zones automatically. |
-| always-available service | An Azure service that supports availability zones, and that enables resources to be always available across all Azure geographies as well as resilient to zone-wide and region-wide outages. |
+
+| Term | Definition |
+|-|-|
+| Region | A geographic perimeter that contains a set of datacenters. |
+| Datacenter | A facility that contains servers, networking equipment, and other hardware to support Azure resources and workloads. |
+| Availability zone | [A separated group of datacenters within a region.][availability-zones-overview] Each availability zone is independent of the others, with its own power, cooling, and networking infrastructure. [Many regions support availability zones.][azure-regions-with-availability-zone-support] |
+| Paired regions |A relationship between two Azure regions. [Some Azure regions][azure-region-pairs] are connected to another defined region to enable specific types of multi-region solutions. [Newer Azure regions aren't paired.][regions-with-availability-zones-and-no-region-pair] |
+| Region architecture | The specific configuration of the Azure region, including the number of availability zones and whether the region is paired with another region. |
+| Locally redundant deployment | A deployment model in which a resource is deployed into a single region without reference to an availability zone. In a region that supports availability zones, the resource might be deployed in any of the region's availability zones. |
+| Zonal (pinned) deployment | A deployment model in which a resource is deployed into a specific availability zone. |
+| Zone-redundant deployment | A deployment model in which a resource is deployed across multiple availability zones. Microsoft manages data synchronization, traffic distribution, and failover if a zone experiences an outage. |
+| Multi-region deployment| A deployment model in which resources are deployed into multiple Azure regions. |
+| Asynchronous replication | A data replication approach in which data is written and committed to one location. At a later time, the changes are replicated to another location. |
+| Synchronous replication | A data replication approach in which data is written and committed to multiple locations. Each location must acknowledge completion of the write operation before the overall write operation is considered complete. |
+| Active-active | An architecture in which multiple instances of a solution actively process requests at the same time. |
+| Active-passive | An architecture in which one instance of a solution is designated as the *primary* and processes traffic, and one or more *secondary* instances are deployed to serve traffic if the primary is unavailable. |
reliability Migrate Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-sql-managed-instance.md
>[!IMPORTANT]
>Zone redundancy for SQL Managed Instance is currently in Preview. To learn which regions support SQL Managed Instance zone redundancy, see [Services support by region](availability-zones-service-support.md).
-SQL Managed Instance offers a zone redundant configuration that uses [Azure availability zones](availability-zones-overview.md#availability-zones) to replicate your instances across multiple physical locations within an Azure region. With zone redundancy enabled, your Business Critical managed instances become resilient to a larger set of failures, such as catastrophic datacenter outages, without any changes to application logic. For more information on the availability model for SQL Database, see [Business Critical service tier zone redundant availability section in the Azure SQL documentation](/azure/azure-sql/database/high-availability-sla?view=azuresql&tabs=azure-powershell&preserve-view=true#premium-and-business-critical-service-tier-zone-redundant-availability).
+SQL Managed Instance offers a zone redundant configuration that uses [Azure availability zones](availability-zones-overview.md#zonal-and-zone-redundant-services) to replicate your instances across multiple physical locations within an Azure region. With zone redundancy enabled, your Business Critical managed instances become resilient to a larger set of failures, such as catastrophic datacenter outages, without any changes to application logic. For more information on the availability model for SQL Database, see [Business Critical service tier zone redundant availability section in the Azure SQL documentation](/azure/azure-sql/database/high-availability-sla?view=azuresql&tabs=azure-powershell&preserve-view=true#premium-and-business-critical-service-tier-zone-redundant-availability).
This guide describes how to migrate SQL Managed Instances that use Business Critical service tier from non-availability zone support to availability zone support. Once the zone redundant option is enabled, Azure SQL Managed Instance automatically reconfigures the instance.
reliability Reliability Azure Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-azure-container-apps.md
This article describes reliability support in Azure Container Apps, and covers b
[!INCLUDE [next step](includes/reliability-availability-zone-description-include.md)]
-Azure Container Apps uses [availability zones](availability-zones-overview.md#availability-zones) in regions where they're available to provide high-availability protection for your applications and data from data center failures.
+Azure Container Apps uses [availability zones](availability-zones-overview.md#zonal-and-zone-redundant-services) in regions where they're available to provide high-availability protection for your applications and data from data center failures.
By enabling Container Apps' zone redundancy feature, replicas are automatically distributed across the zones in the region. Traffic is load balanced among the replicas. If a zone outage occurs, traffic is automatically routed to the replicas in the remaining zones.
By enabling Container Apps' zone redundancy feature, replicas are automatically
Azure Container Apps offers the same reliability support regardless of your plan type.
-Azure Container Apps uses [availability zones](availability-zones-overview.md#availability-zones) in regions where they're available. For a list of regions that support availability zones, see [Availability zone service and regional support](availability-zones-service-support.md).
+Azure Container Apps uses [availability zones](availability-zones-overview.md#zonal-and-zone-redundant-services) in regions where they're available. For a list of regions that support availability zones, see [Availability zone service and regional support](availability-zones-service-support.md).
### SLA improvements
reliability Reliability Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machine-scale-sets.md
Last updated 06/12/2023
This article contains [specific reliability recommendations](#reliability-recommendations) and information on [availability zones support](#availability-zone-support) for Virtual Machine Scale Sets.

>[!NOTE]
->Virtual Machine Scale Sets can only be deployed into one region. If you want to deploy VMs across multiple regions, see [Virtual Machines-Disaster recovery: cross-region failover](./reliability-virtual-machines.md#disaster-recovery-and-business-continuity).
+>Virtual Machine Scale Sets can only be deployed into one region. If you want to deploy VMs across multiple regions, see [Virtual Machines-Disaster recovery: cross-region failover](./reliability-virtual-machines.md#cross-region-disaster-recovery-and-business-continuity).
For an architectural overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).

## Reliability recommendations
-This section contains recommendations for achieving resiliency and availability for your Azure Virtual Machine Scale Sets.
[!INCLUDE [Reliability recommendations](includes/reliability-recommendations-include.md)]
reliability Reliability Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machines.md
Last updated 07/18/2023
# Reliability in Virtual Machines
-This article contains [specific reliability recommendations for Virtual Machines](#reliability-recommendations), as well as detailed information on VM regional resiliency with [availability zones](#availability-zone-support) and [disaster recovery and business continuity](#disaster-recovery-and-business-continuity).
+This article contains [specific reliability recommendations for Virtual Machines](#reliability-recommendations), as well as detailed information on VM regional resiliency with [availability zones](#availability-zone-support) and [cross-region disaster recovery and business continuity](#cross-region-disaster-recovery-and-business-continuity).
For an architectural overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).

## Reliability recommendations
-This section contains recommendations for achieving resiliency and availability for your Azure Virtual Machines.
[!INCLUDE [Reliability recommendations](includes/reliability-recommendations-include.md)]
Before you upgrade your next set of nodes in another zone, you should perform th
To learn how to migrate a VM to availability zone support, see [Migrate Virtual Machines and Virtual Machine Scale Sets to availability zone support](./migrate-vm.md).
-## Disaster recovery and business continuity
+## Cross-region disaster recovery and business continuity
In the case of a region-wide disaster, Azure can provide protection from regional or large geography disasters with disaster recovery by making use of another region. For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md).
For deploying virtual machines, you can use [flexible orchestration](../virtual-
## Next steps

> [!div class="nextstepaction"]
-> [Resiliency in Azure](/azure/reliability/availability-zones-overview)
+> [Reliability in Azure](/azure/reliability/availability-zones-overview)
search Resource Demo Sites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-demo-sites.md
Microsoft built and hosts the following demos.
| [Chat with your data](https://entgptsearch.azurewebsites.net/) | An Azure web app that uses ChatGPT in Azure OpenAI with fictitious health plan data in a search index. | [https://github.com/Azure-Samples/azure-search-openai-demo/](https://github.com/Azure-Samples/azure-search-openai-demo/) |
| [NYC Jobs demo](https://azjobsdemo.azurewebsites.net/) | An ASP.NET app with facets, filters, details, geo-search (map controls). | [https://github.com/Azure-Samples/search-dotnet-asp-net-mvc-jobs](https://github.com/Azure-Samples/search-dotnet-asp-net-mvc-jobs) |
| [JFK files demo](https://jfk-demo-2019.azurewebsites.net/#/) | An ASP.NET web app built on a public data set, transformed with custom and predefined skills to extract searchable content from scanned document (JPEG) files. [Learn more...](https://www.microsoft.com/ai/ai-lab-jfk-files) | [https://github.com/Microsoft/AzureSearch_JFK_Files](https://github.com/Microsoft/AzureSearch_JFK_Files) |
-| [Semantic search for retail](https://brave-meadow-0f59c9b1e.1.azurestaticapps.net/) | Web app for a fictitious online retailer, "Terra" | Not available |
+| [Semantic ranking for retail](https://brave-meadow-0f59c9b1e.1.azurestaticapps.net/) | Web app for a fictitious online retailer, "Terra" | Not available |
| [Wolters Kluwer demo search app](https://wolterskluwereap.azurewebsites.net/) | Financial files demo that uses custom skills and forms recognition to make fictitious business documents searchable. | Not available |
search Retrieval Augmented Generation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/retrieval-augmented-generation-overview.md
In a non-RAG pattern, queries make a round trip from a search client. The query
In a RAG pattern, queries and responses are coordinated between the search engine and the LLM. A user's question or query is forwarded to both the search engine and to the LLM as a prompt. The search results come back from the search engine and are redirected to an LLM. The response that makes it back to the user is generative AI, either a summation or answer from the LLM.
-There's no query type in Cognitive Search - not even semantic search or vector search - that composes new answers. Only the LLM provides generative AI. Here are the capabilities in Cognitive Search that are used to formulate queries:
+There's no query type in Cognitive Search - not even semantic or vector search - that composes new answers. Only the LLM provides generative AI. Here are the capabilities in Cognitive Search that are used to formulate queries:
| Query feature | Purpose | Why use it |
|--|--|--|
| [Simple or full Lucene syntax](search-query-create.md) | Query execution over text and non-vector numeric content | Full text search is best for exact matches, rather than similar matches. Full text search queries are ranked using the [BM25 algorithm](index-similarity-and-scoring.md) and support relevance tuning through scoring profiles. It also supports filters and facets. |
| [Filters](search-filters.md) and [facets](search-faceted-navigation.md) | Applies to text or numeric (non-vector) fields only. Reduces the search surface area based on inclusion or exclusion criteria. | Adds precision to your queries. |
-| [Semantic search](semantic-how-to-query-request.md) | Re-ranks a BM25 result set using semantic models. Produces short-form captions and answers that are useful as LLM inputs. | Easier than scoring profiles, and depending on your content, a more reliable technique for relevance tuning. |
+| [Semantic ranking](semantic-how-to-query-request.md) | Re-ranks a BM25 result set using semantic models. Produces short-form captions and answers that are useful as LLM inputs. | Easier than scoring profiles, and depending on your content, a more reliable technique for relevance tuning. |
| [Vector search](vector-search-how-to-query.md) | Query execution over vector fields for similarity search, where the query string is one or more vectors. | Vectors can represent all types of content, in any language. |
| [Hybrid search](vector-search-ranking.md#hybrid-search) | Combines any or all of the above query techniques. Vector and non-vector queries execute in parallel and are returned in a unified result set. | The most significant gains in precision and recall are through hybrid queries. |
if len(history) > 0:
else:
    search = user_input
-# Alternatively simply use search_client.search(q, top=3) if not using semantic search
+# Alternatively simply use search_client.search(q, top=3) if not using semantic ranking
print("Searching:", search)
print("-")
filter = "category ne '{}'".format(exclude_category.replace("'", "''")) if exclude_category else None
search Search Api Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-preview.md
Preview features that transition to general availability are removed from this l
| [**MySQL indexer data source**](search-howto-index-mysql.md) | Indexer data source | Index content and metadata from Azure MySQL data sources.| [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) is required so that support can be enabled for your subscription on the backend. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview, [.NET SDK 11.2.1](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourcetype.mysql), and Azure portal. |
| [**Azure Cosmos DB indexer: Azure Cosmos DB for MongoDB, Azure Cosmos DB for Apache Gremlin**](search-howto-index-cosmosdb.md) | Indexer data source | For Azure Cosmos DB, SQL API is generally available, but Azure Cosmos DB for MongoDB and Azure Cosmos DB for Apache Gremlin are in preview. | For MongoDB and Gremlin, [sign up first](https://aka.ms/azure-cognitive-search/indexer-preview) so that support can be enabled for your subscription on the backend. MongoDB data sources can be configured in the portal. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview. |
| [**Native blob soft delete**](search-howto-index-changed-deleted-blobs.md) | Indexer data source | The Azure Blob Storage indexer in Azure Cognitive Search recognizes blobs that are in a soft deleted state, and removes the corresponding search document during indexing. | Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview. |
-| [**Semantic search**](semantic-search-overview.md) | Relevance (scoring) | Semantic ranking of results, captions, and answers. | Configure semantic search using [Search Documents](/rest/api/searchservice/preview-api/search-documents), API versions 2021-04-30-Preview or 2020-06-30-Preview, and Search Explorer (portal). |
+| [**Semantic search**](semantic-search-overview.md) | Relevance (scoring) | Semantic ranking of results, captions, and answers. | Configure semantic ranking using [Search Documents](/rest/api/searchservice/preview-api/search-documents), API versions 2021-04-30-Preview or 2020-06-30-Preview, and Search Explorer (portal). |
| [**speller**](cognitive-search-aml-skill.md) | Query | Optional spelling correction on query term inputs for simple, full, and semantic queries. | [Search Preview REST API](/rest/api/searchservice/preview-api/search-documents), API versions 2021-04-30-Preview or 2020-06-30-Preview, and Search Explorer (portal). |
| [**Normalizers**](search-normalizers.md) | Query | Normalizers provide simple text preprocessing: consistent casing, accent removal, and ASCII folding, without invoking the full text analysis chain.| Use [Search Documents](/rest/api/searchservice/preview-api/search-documents), API versions 2021-04-30-Preview or 2020-06-30-Preview.|
| [**featuresMode parameter**](/rest/api/searchservice/preview-api/search-documents#query-parameters) | Relevance (scoring) | Relevance score expansion to include details: per field similarity score, per field term frequency, and per field number of unique tokens matched. You can consume these data points in [custom scoring solutions](https://github.com/Azure-Samples/search-ranking-tutorial). | Add this query parameter using [Search Documents](/rest/api/searchservice/preview-api/search-documents), API versions 2021-04-30-Preview, 2020-06-30-Preview, or 2019-05-06-Preview. |
search Search Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-rest.md
The Management REST API is available in stable and preview versions. Be sure to
> * [Create or update a service](#create-or-update-a-service)
> * [Enable Azure role-based access control for data plane](#enable-rbac)
> * [(preview) Enforce a customer-managed key policy](#enforce-cmk)
-> * [(preview) Disable semantic search](#disable-semantic-search)
+> * [(preview) Disable semantic ranking](#disable-semantic-search)
> * [(preview) Disable workloads that push data to external resources](#disable-external-access)

All of the Management REST APIs have examples. If a task isn't covered in this article, see the [API reference](/rest/api/searchmanagement/) instead.
PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegrou
<a name="disable-semantic-search"></a>
-## (preview) Disable semantic search
+## (preview) Disable semantic ranking
Although [semantic search isn't enabled](semantic-how-to-enable-disable.md) by default, you could lock down the feature at the service level.
search Search Pagination Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-pagination-page-layout.md
Count won't be affected by routine maintenance or other workloads on the search
## Paging results
-By default, the search engine returns up to the first 50 matches. The top 50 are determined by search score, assuming the query is full text search or semantic search. Otherwise, the top 50 are an arbitrary order for exact match queries (where uniform "@searchScore=1.0" indicates arbitrary ranking).
+By default, the search engine returns up to the first 50 matches. The top 50 are determined by search score, assuming the query is full text search or semantic. Otherwise, the top 50 are an arbitrary order for exact match queries (where uniform "@searchScore=1.0" indicates arbitrary ranking).
To control the paging of all documents returned in a result set, add `$top` and `$skip` parameters to the GET query request, or `top` and `skip` to the POST query request. The following list explains the logic.
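The `top`/`skip` logic amounts to a window over the ranked result list; a minimal sketch in plain Python (illustrative only, with hypothetical document names, not the server-side implementation):

```python
# Sketch of top/skip paging semantics: skip past earlier pages,
# then take up to `top` matches from the ranked results.
def page(results, top=50, skip=0):
    """Return one page of an already-ranked result list."""
    return results[skip:skip + top]

ranked = [f"doc{i}" for i in range(1, 8)]  # 7 matches, ranked by search score

print(page(ranked, top=3, skip=0))  # first page of 3
print(page(ranked, top=3, skip=3))  # second page
print(page(ranked, top=3, skip=6))  # last, partial page
```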
search Search Query Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-overview.md
POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/
Parameters used during query execution include:
-+ **`queryType`** sets the parser: `simple`, `full`. The [default simple query parser](search-query-simple-examples.md) is optimal for full text search. The [full Lucene query parser](search-query-lucene-examples.md) is for advanced query constructs like regular expressions, proximity search, fuzzy and wildcard search. This parameter can also be set to `semantic` for [semantic search](semantic-search-overview.md) for advanced semantic modeling on the query response.
++ **`queryType`** sets the parser: `simple`, `full`. The [default simple query parser](search-query-simple-examples.md) is optimal for full text search. The [full Lucene query parser](search-query-lucene-examples.md) is for advanced query constructs like regular expressions, proximity search, fuzzy and wildcard search. This parameter can also be set to `semantic` for [semantic ranking](semantic-search-overview.md) for advanced semantic modeling on the query response.
+ **`searchMode`** specifies whether matches are based on "all" criteria (favors precision) or "any" criteria (favors recall) in the expression. The default is "any".
search Search Sku Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-sku-manage-costs.md
When you create or use Search resources, you're charged for the following meters
+ The charge is applied per the number of search units (SU) allocated to the service. Search units are [units of capacity](search-capacity-planning.md). Total SU is the product of replicas and partitions (R x P = SU) used by your service.
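The SU formula in the point above (R x P = SU) is simple multiplication; a minimal sketch in plain Python (illustrative only, not billing code — the replica/partition counts are hypothetical):

```python
# Sketch of the search unit (SU) capacity formula: SU = replicas x partitions.
def search_units(replicas: int, partitions: int) -> int:
    """Total search units billed for a service configuration."""
    return replicas * partitions

# A service scaled to 3 replicas and 4 partitions consumes 12 SUs.
print(search_units(3, 4))
```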
-Billing is based on capacity (SUs) and the costs of running premium features, such as [AI enrichment](cognitive-search-concept-intro.md), [Semantic search](semantic-search-overview.md), and [Private endpoints](service-create-private-endpoint.md). Meters associated with premium features are listed in the following table.
+Billing is based on capacity (SUs) and the costs of running premium features, such as [AI enrichment](cognitive-search-concept-intro.md), [Semantic ranking](semantic-search-overview.md), and [Private endpoints](service-create-private-endpoint.md). Meters associated with premium features are listed in the following table.
| Meter | Unit |
|--|--|
| Image extraction (AI enrichment) <sup>1, 2</sup> | Per 1000 images. See the [pricing page](https://azure.microsoft.com/pricing/details/search/#pricing). |
| Custom Entity Lookup skill (AI enrichment) <sup>1</sup> | Per 1000 text records. See the [pricing page](https://azure.microsoft.com/pricing/details/search/#pricing). |
| [Built-in skills](cognitive-search-predefined-skills.md) (AI enrichment) <sup>1</sup> | Number of transactions, billed at the same rate as if you had performed the task by calling Azure AI services directly. You can process 20 documents per indexer per day for free. Larger or more frequent workloads require a multi-resource Azure AI services key. |
-| [Semantic search](semantic-search-overview.md) <sup>1</sup> | Number of queries of "queryType=semantic", billed at a progressive rate. See the [pricing page](https://azure.microsoft.com/pricing/details/search/#pricing). |
+| [Semantic ranking](semantic-search-overview.md) <sup>1</sup> | Number of queries of "queryType=semantic", billed at a progressive rate. See the [pricing page](https://azure.microsoft.com/pricing/details/search/#pricing). |
| [Shared private link](search-indexer-howto-access-private.md) <sup>1</sup> | [Billed for bandwidth](https://azure.microsoft.com/pricing/details/private-link/) as long as the shared private link exists and is used. |

<sup>1</sup> Applies only if you use or enable the feature.
search Search Sku Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-sku-tier.md
Most features are available on all tiers, including the free tier. In a few case
| [IP firewall access](service-configure-firewall.md) | Not available on the Free tier. |
| [Private endpoint (integration with Azure Private Link)](service-create-private-endpoint.md) | For inbound connections to a search service, not available on the Free tier. For outbound connections by indexers to other Azure resources, not available on Free or S3 HD. For indexers that use skillsets, not available on Free, Basic, S1, or S3 HD.|
| [Availability Zones](search-reliability.md) | Not available on the Free or Basic tier. |
-| [Semantic search (preview)](semantic-search-overview.md) | Not available on the Free tier. |
+| [Semantic ranking (preview)](semantic-search-overview.md) | Not available on the Free tier. |
Resource-intensive features might not work well unless you give them sufficient capacity. For example, [AI enrichment](cognitive-search-concept-intro.md) has long-running skills that time out on a Free service unless the dataset is small.
search Semantic How To Query Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-query-request.md
Title: Configure semantic search
+ Title: Configure semantic ranking
-description: Set a semantic query type to attach the deep learning models of semantic search.
+description: Set a semantic query type to attach the deep learning models of semantic ranking.
search Speller How To Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/speller-how-to-add.md
Last updated 03/28/2023
You can improve recall by spell-correcting individual search query terms before they reach the search engine. The **speller** parameter is supported for all query types: [simple](query-simple-syntax.md), [full](query-lucene-syntax.md), and the [semantic](semantic-how-to-query-request.md) option currently in public preview.
-Speller was released in tandem with the [semantic search preview](semantic-search-overview.md) and shares the "queryLanguage" parameter, but is otherwise an independent feature with its own prerequisites. There's no sign-up or extra charges for using this feature.
+Speller was released in tandem with [semantic ranking](semantic-search-overview.md) and shares the "queryLanguage" parameter, but is otherwise an independent feature with its own prerequisites. There's no sign-up or extra charges for using this feature.
## Prerequisites
POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/
}
```
-## Spell correction with semantic search
+## Spell correction with semantic ranking
This query, with typos in every term except one, undergoes spelling corrections to return relevant results. To learn more, see [Configure semantic ranking](semantic-how-to-query-request.md).
While content in a search index can be composed in multiple languages, the query
+ [Create a basic query](search-query-create.md)
+ [Use full Lucene query syntax](query-Lucene-syntax.md)
+ [Use simple query syntax](query-simple-syntax.md)
-+ [Semantic search overview](semantic-search-overview.md)
++ [Semantic ranking](semantic-search-overview.md)
search Tutorial Optimize Indexing Push Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-optimize-indexing-push-api.md
The following services and tools are required for this tutorial.
## Download files
-Source code for this tutorial is in the [optimzize-data-indexing/v11](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/optimize-data-indexing/v11) folder in the [Azure-Samples/azure-search-dotnet-samples](https://github.com/Azure-Samples/azure-search-dotnet-samples) GitHub repository.
+Source code for this tutorial is in the [optimize-data-indexing/v11](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/optimize-data-indexing/v11) folder in the [Azure-Samples/azure-search-dotnet-samples](https://github.com/Azure-Samples/azure-search-dotnet-samples) GitHub repository.
## Key considerations
search Tutorial Python Search Query Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-search-query-integration.md
Previously updated : 07/18/2023 Last updated : 09/21/2023 ms.devlang: python
search Vector Search How To Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-query.md
In Azure Cognitive Search, if you added vector fields to a search index, this ar
> [!div class="checklist"]
> + [Query vector fields](#query-syntax-for-vector-search).
+> + [Filter and query vector fields](#filter-and-vector-queries)
> + [Combine vector, full text search, and semantic search in a hybrid query](#query-syntax-for-hybrid-search).
> + [Query multiple vector fields at once](#query-syntax-for-vector-query-over-multiple-fields).
> + [Run multiple vector queries in parallel](#query-syntax-for-multiple-vector-queries).
In this vector query, which is shortened for brevity, the "value" contains the v
In the following example, the vector is a representation of this query string: `"what Azure services support full text search"`. The query request targets the "contentVector" field. The actual vector has 1536 embeddings. It's trimmed in this example for readability.

```http
-POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}}
+POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-07-01-Preview
Content-Type: application/json
api-key: {{admin-api-key}}
{
Here's a modified example so that you can see the basic structure of a response
+## Filter and vector queries
+
+A query request can include a vector query and a [filter expression](search-filters.md). Filters apply to text and numeric fields, and are useful for including or excluding search documents based on filter criteria. Although a vector field isn't filterable itself, you can attribute a text or numeric field in the same index as "filterable".
+
+In contrast with full text search, a filter in a pure vector query is effectively processed as a post-query operation. The set of `"k"` nearest neighbors is retrieved, and then combined with the set of filtered results. As such, the value of `"k"` predetermines the surface over which the filter is applied. For `"k": 10`, the filter is applied to the 10 most similar documents. For `"k": 100`, the filter iterates over 100 documents (assuming the index contains 100 documents that are sufficiently similar to the query).
+
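The post-filter sequence described above can be sketched in plain Python (illustrative only, with hypothetical documents and a stand-in similarity score, not the Azure Cognitive Search implementation): retrieve the `k` nearest neighbors first, then apply the filter to that fixed set, so fewer than `k` documents may survive.

```python
# Sketch of post-filtering in a pure vector query. The filter only ever
# sees the k nearest neighbors, so k caps how many documents it can return.

def vector_search_with_filter(docs, query_scorer, k, predicate):
    # 1. k-nearest-neighbor retrieval by similarity score (higher = closer)
    nearest = sorted(docs, key=query_scorer, reverse=True)[:k]
    # 2. filter applied AFTER retrieval, over those k documents only
    return [d for d in nearest if predicate(d)]

docs = [
    {"id": 1, "category": "Databases", "score": 0.91},
    {"id": 2, "category": "Compute",   "score": 0.88},
    {"id": 3, "category": "Databases", "score": 0.40},
]

hits = vector_search_with_filter(
    docs,
    query_scorer=lambda d: d["score"],   # stand-in for cosine similarity
    k=2,
    predicate=lambda d: d["category"] == "Databases",
)
# With k=2, doc 3 is never retrieved, so only doc 1 survives the filter;
# raising k to 3 would let doc 3 through as well.
print([d["id"] for d in hits])
```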
+Here's an example of filter expressions combined with a vector query:
+
+```http
+POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-07-01-Preview
+Content-Type: application/json
+api-key: {{admin-api-key}}
+{
+ "vectors": [
+ {
+ "value": [
+ -0.009154141,
+ 0.018708462,
+ . . .
+ -0.02178128,
+ -0.00086512347
+ ],
+ "fields": "contentVector",
+ "k": 10
+ }
+ ],
+ "select": "title, content, category",
+ "filter": "category eq 'Databases'"
+}
+```
+
+> [!TIP]
+> If you don't have source fields with text or numeric values, check for document metadata, such as LastModified or CreatedBy properties, that might be useful in a filter.
+
## Query syntax for hybrid search

A hybrid query combines full text search and vector search, where the `"search"` parameter takes a query string and `"vectors.value"` takes the vector query. The search engine runs full text and vector queries in parallel. All matches are evaluated for relevance using Reciprocal Rank Fusion (RRF) and a single result set is returned in the response.
-Hybrid queries are useful because they add support for filters, orderby, and [semantic search](semantic-how-to-query-request.md) For example, in addition to the vector query, you could filter by location or search over product names or titles, scenarios for which similarity search isn't a good fit.
+Hybrid queries are useful because they add support for filters, orderby, and [semantic search](semantic-how-to-query-request.md). For example, in addition to the vector query, you could search over people or product names or titles, scenarios for which similarity search isn't a good fit.
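The Reciprocal Rank Fusion step mentioned above merges the parallel rankings by summing `1/(K + rank)` per document across lists. Here's a minimal sketch with hypothetical document IDs; the constant `K = 60` comes from the original RRF paper and is an assumption here, not a documented service value:

```python
# Sketch of Reciprocal Rank Fusion (RRF) over two parallel result lists.
def rrf(rankings, K=60):
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            # each list contributes 1/(K + rank) for every document it ranks
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (K + rank)
    # unified result set, best fused score first
    return sorted(scores, key=scores.get, reverse=True)

text_results   = ["docA", "docB", "docC"]   # full text (BM25) ranking
vector_results = ["docB", "docD", "docA"]   # vector similarity ranking

# docB ranks well in both lists, so it wins the fused ranking.
print(rrf([text_results, vector_results]))
```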
The following example is from the [Postman collection of REST APIs](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python) that demonstrates query configurations. It shows a complete request that includes vector search, full text search with filters, and semantic search with captions and answers. Semantic search is an optional premium feature. It's not required for vector search or hybrid search. For content that includes rich descriptive text *and* vectors, it's possible to benefit from all of the search modalities in one request.

```http
-POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}}
+POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-07-01-Preview
Content-Type: application/json
api-key: {{admin-api-key}}
{
api-key: {{admin-api-key}}
You can set the "vectors.fields" property to multiple vector fields. For example, the Postman collection has vector fields named "titleVector" and "contentVector". A single vector query executes over both the "titleVector" and "contentVector" fields, which must have the same embedding space since they share the same query vector.

```http
-POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}}
+POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-07-01-Preview
Content-Type: application/json
api-key: {{admin-api-key}}
{
search Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md
Previously updated : 08/10/2023 Last updated : 09/21/2023 # Vector search within Azure Cognitive Search
On the indexing side, prepare source documents that contain embeddings. Cognitiv
On the query side, in your client application, collect the query input. Add a step that converts the input into a vector, and then send the vector query to your index on Cognitive Search for a similarity search. Cognitive Search returns documents with the requested `k` nearest neighbors (kNN) in the results.
-You can index vector data as fields in documents alongside alphanumeric content. Vector queries can be issued singly or in combination with other query types, including term queries (hybrid search) and filters and semantic re-ranking in the same search request.
+You can index vector data as fields in documents alongside alphanumeric content. Vector queries can be issued singly or in combination with filters and other query types, including term queries (hybrid search) and semantic ranking in the same search request.
## Limitations
Scenarios for vector search include:
+ **Multi-lingual search**. Use a multi-lingual embeddings model to represent your document in multiple languages in a single vector space to find documents regardless of the language they are in.
-+ **Hybrid search**. Vector search is implemented at the field level, which means you can build queries that include vector fields and searchable text fields. The queries execute in parallel and the results are merged into a single response. Optionally, add [semantic search (preview)](semantic-search-overview.md) for even more accuracy with L2 reranking using the same language models that power Bing.
++ **Hybrid search**. Vector search is implemented at the field level, which means you can build queries that include both vector fields and searchable text fields. The queries execute in parallel and the results are merged into a single response. Optionally, add [semantic search (preview)](semantic-search-overview.md) for even more accuracy with L2 reranking using the same language models that power Bing.
-+ **Filtered vector search**. A query request can include a vector query and a [filter expression](search-filters.md). Filters apply to text and numeric fields, and are useful for including or excluding search documents based on filter criteria. Although a vector field isn't filterable itself, you can set up a filterable text or numeric field. The search engine processes the filter first, reducing the surface area of the search corpus before running the vector query.
++ **Filtered vector search**. A query request can include a vector query and a [filter expression](search-filters.md). Filters apply to text and numeric fields, and are useful for including or excluding search documents based on filter criteria. Although a vector field isn't filterable itself, you can set up a filterable text or numeric field. The search engine processes the filter after the vector query executes, trimming search results from the query response.
+ **Vector database**. Use Cognitive Search as a vector store to serve as long-term memory or an external knowledge base for Large Language Models (LLMs), or other applications. For example, you can use Azure Cognitive Search as a [*vector index* in an Azure Machine Learning prompt flow](/azure/machine-learning/concept-vector-stores) for Retrieval Augmented Generation (RAG) applications.
service-fabric Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/release-notes.md
The following resources are also available:
- <a href="/azure/service-fabric/service-fabric-versions" target="blank">Supported Versions</a>
- <a href="https://azure.microsoft.com/resources/samples/?service=service-fabric&sort=0" target="blank">Code Samples</a>
+## Service Fabric 10.0
+
+We're excited to announce that the 10.0 release of the Service Fabric runtime has started rolling out to the various Azure regions along with tooling and SDK updates. The updates for .NET SDK, Java SDK, and Service Fabric runtimes can be downloaded from the links provided in Release Notes. The SDK, NuGet packages, and Maven repositories are available in all regions within 7-10 days.
+
+### Key announcements
+- Enhance Container image pruning.
+- Balancing of a cluster per node type.
+- Expose health check phase and timer for application and cluster upgrade.
+- Support ESE.dll version compatibility in the replica building process.
+- Enable Lease probes.
+- Extend the FabricClient constructor to include "SecurityCredentials" without "HostEndpoints".
+- Security audit of cluster management endpoint settings.
+
+### Service Fabric 10.0 releases
+| Release date | Release | More info |
+|--|--|--|
+| September 09, 2023 | Azure Service Fabric 10.0 Release | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_10.md) |
+
## Service Fabric 9.1

We're excited to announce that the 9.1 release of the Service Fabric runtime has started rolling out to the various Azure regions along with tooling and SDK updates. The updates for .NET SDK, Java SDK, and Service Fabric runtimes can be downloaded from the links provided in Release Notes. The SDK, NuGet packages, and Maven repositories are available in all regions within 7-10 days.
storage Blob Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md
Previously updated : 07/24/2023 Last updated : 09/22/2023
Each inventory rule generates a set of files in the specified inventory destinat
Each inventory run for a rule generates the following files:

-- **Inventory file:** An inventory run for a rule generates multiple CSV or Apache Parquet formatted files. Each such file contains matched objects and their metadata.
+- **Inventory file:** An inventory run for a rule generates a CSV or Apache Parquet formatted file. Each such file contains matched objects and their metadata.
> [!IMPORTANT]
- > Until September 8, 2023, runs can produce a singe inventory file in cases where the matched object count is small. After September 8, 2023, all runs will produce multiple files regardless of the matched object count. To learn more, see [Multiple inventory file output FAQ](storage-blob-faq.yml#multiple-inventory-file-output).
+ > Starting in October 2023, inventory runs will produce multiple files if the object count is large. To learn more, see [Multiple inventory file output FAQ](storage-blob-faq.yml#multiple-inventory-file-output).
Reports in the Apache Parquet format present dates in the following format: `timestamp_millis` [number of milliseconds since 1970-01-01 00:00:00 UTC]. For a CSV formatted file, the first row is always the schema row. The following image shows an inventory CSV file opened in Microsoft Excel.
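To illustrate the `timestamp_millis` representation, a hypothetical value from a report can be converted to a readable UTC date with standard Java time APIs (a minimal sketch; the value below is an assumed example, not taken from a real inventory file):

```java
import java.time.Instant;

public class ParquetTimestamp {
    public static void main(String[] args) {
        // Hypothetical timestamp_millis value from an inventory report:
        // milliseconds since 1970-01-01 00:00:00 UTC.
        long timestampMillis = 1_695_340_800_000L;

        // Instant.ofEpochMilli interprets the value on the UTC timeline.
        Instant instant = Instant.ofEpochMilli(timestampMillis);
        System.out.println(instant); // 2023-09-22T00:00:00Z
    }
}
```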
storage Immutable Storage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-storage-overview.md
Previously updated : 09/19/2022 Last updated : 09/20/2023
All blob access tiers support immutable storage. You can change the access tier
### Redundancy configurations
-All redundancy configurations support immutable storage. For geo-redundant configurations, customer-managed failover is not supported. For more information about redundancy configurations, see [Azure Storage redundancy](../common/storage-redundancy.md).
+All redundancy configurations support immutable storage. For more information about redundancy configurations, see [Azure Storage redundancy](../common/storage-redundancy.md).
### Hierarchical namespace support
storage Network File System Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-known-issues.md
Last updated 08/18/2023 -
# Known issues with Network File System (NFS) 3.0 protocol support for Azure Blob Storage
The following NFS 3.0 features aren't yet supported.
## NFS 3.0 clients
-Windows client for NFS is not yet supported
+Windows client for NFS is not yet supported.
## Blob Storage features
Files and directories that you create in an NFS share always inherit the group I
- [Network File System (NFS) 3.0 protocol support for Azure Blob Storage](network-file-system-protocol-support.md) - [Mount Blob storage by using the Network File System (NFS) 3.0 protocol](network-file-system-protocol-support-how-to.md)+
storage Point In Time Restore Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/point-in-time-restore-overview.md
Previously updated : 05/31/2023 Last updated : 09/21/2023
Point-in-time restore for block blobs has the following limitations and known is
- If an immutability policy is configured, then a restore operation can be initiated, but any blobs that are protected by the immutability policy won't be modified. A restore operation in this case won't result in the restoration of a consistent state to the date and time given. - A block that has been uploaded via [Put Block](/rest/api/storageservices/put-block) or [Put Block from URL](/rest/api/storageservices/put-block-from-url), but not committed via [Put Block List](/rest/api/storageservices/put-block-list), isn't part of a blob and so isn't restored as part of a restore operation. - If a blob with an active lease is included in the range to restore, and if the current version of the leased blob is different from the previous version at the timestamp provided for PITR, the restore operation fails atomically. We recommend breaking any active leases before initiating the restore operation.-- Performing a customer-managed failover on a storage account resets the earliest possible restore point for the storage account. For more details, see [Point-in-time restore](../common/storage-disaster-recovery-guidance.md#point-in-time-restore).
+- Performing a customer-managed failover on a storage account resets the earliest possible restore point for the storage account. For more details, see [Point-in-time restore](../common/storage-disaster-recovery-guidance.md#point-in-time-restore-inconsistencies).
- Snapshots aren't created or deleted as part of a restore operation. Only the base blob is restored to its previous state. - Point-in-time restore isn't supported for hierarchical namespaces or operations via Azure Data Lake Storage Gen2. - Point-in-time restore isn't supported when the storage account's **AllowedCopyScope** property is set to restrict copy scope to the same Azure AD tenant or virtual network. For more information, see [About Permitted scope for copy operations (preview)](../common/security-restrict-copy-operations.md?toc=/azure/storage/blobs/toc.json&tabs=portal#about-permitted-scope-for-copy-operations-preview).
storage Storage Blob Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-change-feed.md
description: Learn about change feed logs in Azure Blob Storage and how to use t
Previously updated : 05/30/2023 Last updated : 09/06/2023
storage Storage Blob Download Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-java.md
You can configure values in [ParallelTransferOptions](/java/api/com.azure.storag
- `blockSize`: The maximum block size to transfer for each request. You can set this value by using the [setBlockSizeLong](/java/api/com.azure.storage.common.paralleltransferoptions#com-azure-storage-common-paralleltransferoptions-setblocksizelong(java-lang-long)) method. - `maxConcurrency`: The maximum number of parallel requests issued at any given time as a part of a single parallel transfer. You can set this value by using the [setMaxConcurrency](/java/api/com.azure.storage.common.paralleltransferoptions#com-azure-storage-common-paralleltransferoptions-setmaxconcurrency(java-lang-integer)) method.
-Add the following `import` directive to your file to use `ParallelTransferOptions`:
+Add the following `import` directive to your file to use `ParallelTransferOptions` for a download:
```java import com.azure.storage.common.*;
The following code example shows how to set values for `ParallelTransferOptions`
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobDownload.java" id="Snippet_DownloadBlobWithTransferOptions":::
+To learn more about tuning data transfer options, see [Performance tuning for uploads and downloads with Java](storage-blobs-tune-upload-download-java.md).
+
+## Resources
+
+To learn more about how to download blobs using the Azure Blob Storage client library for Java, see the following resources.
storage Storage Blob Upload Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-java.md
You can define client library configuration options when uploading a blob. These
### Specify data transfer options on upload
-You can configure values in [ParallelTransferOptions](/java/api/com.azure.storage.blob.models.paralleltransferoptions) to improve performance for data transfer operations. The following table lists the methods you can use to set these options, along with a description:
+You can configure values in [ParallelTransferOptions](/java/api/com.azure.storage.blob.models.paralleltransferoptions) to improve performance for data transfer operations. The following values can be tuned for uploads based on the needs of your app:
-| Method | Description |
-| | |
-| [`setBlockSizeLong(Long blockSize)`](/java/api/com.azure.storage.blob.models.paralleltransferoptions#com-azure-storage-blob-models-paralleltransferoptions-setblocksizelong(java-lang-long)) | Sets the block size to transfer for each request. For uploads, the parameter `blockSize` is the size of each block that's staged. This value also determines the number of requests that need to be made. If `blockSize` is large, the upload makes fewer network calls, but each individual call sends more data. |
-| [`setMaxConcurrency(Integer maxConcurrency)`](/java/api/com.azure.storage.blob.models.paralleltransferoptions#com-azure-storage-blob-models-paralleltransferoptions-setmaxconcurrency(java-lang-integer)) | The parameter `maxConcurrency` is the maximum number of parallel requests that are issued at any given time as a part of a single parallel transfer. This value applies per API call. |
-| [`setMaxSingleUploadSizeLong(Long maxSingleUploadSize)`](/java/api/com.azure.storage.blob.models.paralleltransferoptions#com-azure-storage-blob-models-paralleltransferoptions-setmaxsingleuploadsizelong(java-lang-long)) | If the size of the data is less than or equal to this value, it's uploaded in a single put rather than broken up into chunks. If the data is uploaded in a single shot, the block size is ignored. |
+- `blockSize`: The maximum block size to transfer for each request. You can set this value by using the [setBlockSizeLong](/java/api/com.azure.storage.blob.models.paralleltransferoptions#com-azure-storage-blob-models-paralleltransferoptions-setblocksizelong(java-lang-long)) method.
+- `maxSingleUploadSize`: If the size of the data is less than or equal to this value, it's uploaded in a single put rather than broken up into chunks. If the data is uploaded in a single shot, the block size is ignored. You can set this value by using the [setMaxSingleUploadSizeLong](/java/api/com.azure.storage.blob.models.paralleltransferoptions#com-azure-storage-blob-models-paralleltransferoptions-setmaxsingleuploadsizelong(java-lang-long)) method.
+- `maxConcurrency`: The maximum number of parallel requests issued at any given time as a part of a single parallel transfer. You can set this value by using the [setMaxConcurrency](/java/api/com.azure.storage.blob.models.paralleltransferoptions#com-azure-storage-blob-models-paralleltransferoptions-setmaxconcurrency(java-lang-integer)) method.
+
+Make sure you have the following `import` directive to use `ParallelTransferOptions` for an upload:
+
+```java
+import com.azure.storage.blob.models.*;
+```
The following code example shows how to set values for [ParallelTransferOptions](/java/api/com.azure.storage.blob.models.paralleltransferoptions) and include the options as part of a [BlobUploadFromFileOptions](/java/api/com.azure.storage.blob.options.blobuploadfromfileoptions) instance. The values provided in this sample aren't intended to be a recommendation. To properly tune these values, you need to consider the specific needs of your app. :::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobUpload.java" id="Snippet_UploadBlobWithTransferOptions":::
+To learn more about tuning data transfer options, see [Performance tuning for uploads and downloads with Java](storage-blobs-tune-upload-download-java.md).
+
+### Upload a block blob with index tags
+
+Blob index tags categorize data in your storage account using key-value tag attributes. These tags are automatically indexed and exposed as a searchable multi-dimensional index to easily find data.
storage Storage Blobs Tune Upload Download Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download-java.md
+
+ Title: Performance tuning for uploads and downloads with Azure Storage client library for Java
+
+description: Learn how to tune your uploads and downloads for better performance with Azure Storage client library for Java.
+++++ Last updated : 09/22/2023
+ms.devlang: java
+++
+# Performance tuning for uploads and downloads with Java
+
+When an application transfers data using the Azure Storage client library for Java, there are several factors that can affect speed, memory usage, and even the success or failure of the request. To maximize performance and reliability for data transfers, it's important to be proactive in configuring client library transfer options based on the environment your app runs in.
+
+This article walks through several considerations for tuning data transfer options. When properly tuned, the client library can efficiently distribute data across multiple requests, which can result in improved operation speed, memory usage, and network stability.
+
+## Performance tuning for uploads
+
+Properly tuning data transfer options is key to reliable performance for uploads. Storage transfers are partitioned into several subtransfers based on the values of these arguments. The maximum supported transfer size varies by operation and service version, so be sure to check the documentation to determine the limits. For more information on transfer size limits for Blob storage, see [Scale targets for Blob storage](scalability-targets.md#scale-targets-for-blob-storage).
+
+### Set transfer options for uploads
+
+You can configure the values in [ParallelTransferOptions](/java/api/com.azure.storage.blob.models.paralleltransferoptions) to improve performance for data transfer operations. The following values can be tuned for uploads based on the needs of your app:
+
+- [maxSingleUploadSize](#maxsingleuploadsize): The maximum blob size in bytes for a single request upload.
+- [blockSize](#blocksize): The maximum block size to transfer for each request.
+- [maxConcurrency](#maxconcurrency): The maximum number of parallel requests issued at any given time as a part of a single parallel transfer.
+
+> [!NOTE]
+> The client libraries use defaults for each data transfer option if you don't provide one. These defaults are typically performant in a data center environment, but not likely to be suitable for home consumer environments. Poorly tuned data transfer options can result in excessively long operations and even request timeouts. It's best to be proactive in testing these values and tuning them based on the needs of your application and environment.
+
+#### maxSingleUploadSize
+
+The `maxSingleUploadSize` value is the maximum blob size in bytes for a single request upload. This value can be set using the following method:
+
+- [`setMaxSingleUploadSizeLong(Long maxSingleUploadSize)`](/java/api/com.azure.storage.blob.models.paralleltransferoptions#com-azure-storage-blob-models-paralleltransferoptions-setmaxsingleuploadsizelong(java-lang-long))
+
+If the size of the data is less than or equal to `maxSingleUploadSize`, the blob is uploaded with a single [Put Blob](/rest/api/storageservices/put-blob) request. If the blob size is greater than `maxSingleUploadSize`, or if the blob size is unknown, the blob is uploaded in chunks using a series of [Put Block](/rest/api/storageservices/put-block) calls followed by [Put Block List](/rest/api/storageservices/put-block-list).
+
+It's important to note that the value you specify for `blockSize` *does not* limit the value that you define for `maxSingleUploadSize`. The `maxSingleUploadSize` argument defines a separate size limitation for a request to perform the entire operation at once, with no subtransfers. It's often the case that you want `maxSingleUploadSize` to be *at least* as large as the value you define for `blockSize`, if not larger. Depending on the size of the data transfer, this approach can be more performant, as the transfer is completed with a single request and avoids the overhead of multiple requests.
+
+If you're unsure of what value is best for your situation, a safe option is to set `maxSingleUploadSize` to the same value used for `blockSize`.
+
+#### blockSize
+
+The `blockSize` value is the maximum length of a transfer in bytes when uploading a block blob in chunks. This value can be set using the following method:
+
+- [`setBlockSizeLong(Long blockSize)`](/java/api/com.azure.storage.blob.models.paralleltransferoptions#com-azure-storage-blob-models-paralleltransferoptions-setblocksizelong(java-lang-long))
+
+As mentioned earlier, the `blockSize` value *does not* limit `maxSingleUploadSize`, which can be larger than `blockSize`.
+
+To keep data moving efficiently, the client libraries may not always reach the `blockSize` value for every transfer. Depending on the operation, the maximum supported value for transfer size can vary. For more information on transfer size limits for Blob storage, see the chart in [Scale targets for Blob storage](scalability-targets.md#scale-targets-for-blob-storage).
+
+#### maxConcurrency
+
+The `maxConcurrency` value is the maximum number of parallel requests issued at any given time as a part of a single parallel transfer. This value can be set using the following method:
+
+- [`setMaxConcurrency(Integer maxConcurrency)`](/java/api/com.azure.storage.blob.models.paralleltransferoptions#com-azure-storage-blob-models-paralleltransferoptions-setmaxconcurrency(java-lang-integer))
+
+#### Code example
+
+Make sure you have the following `import` directive to use `ParallelTransferOptions` for an upload:
+
+```java
+import com.azure.storage.blob.models.*;
+```
+
+The following code example shows how to set values for [ParallelTransferOptions](/java/api/com.azure.storage.blob.models.paralleltransferoptions) and include the options as part of a [BlobUploadFromFileOptions](/java/api/com.azure.storage.blob.options.blobuploadfromfileoptions) instance. If you're not uploading from a file, you can set similar options using [BlobParallelUploadOptions](/java/api/com.azure.storage.blob.options.blobparalleluploadoptions). The values provided in this sample aren't intended to be a recommendation. To properly tune these values, you need to consider the specific needs of your app.
+
+```java
+ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions()
+ .setBlockSizeLong((long) (4 * 1024 * 1024)) // 4 MiB block size
+ .setMaxConcurrency(2)
+ .setMaxSingleUploadSizeLong((long) 8 * 1024 * 1024); // 8 MiB max size for single request upload
+
+BlobUploadFromFileOptions options = new BlobUploadFromFileOptions("<localFilePath>");
+options.setParallelTransferOptions(parallelTransferOptions);
+
+Response<BlockBlobItem> blockBlob = blobClient.uploadFromFileWithResponse(options, null, null);
+```
+
+In this example, we set the maximum number of parallel transfer workers to 2 using the `setMaxConcurrency` method. We also set `maxSingleUploadSize` to 8 MiB using the `setMaxSingleUploadSizeLong` method. If the blob size is smaller than 8 MiB, only a single request is necessary to complete the upload operation. If the blob size is larger than 8 MiB, the blob is uploaded in chunks with a maximum chunk size of 4 MiB, which we set using the `setBlockSizeLong` method.
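To make the chunking decision concrete, the following plain-Java sketch (not SDK code) estimates how a hypothetical 10 MiB blob would be handled with the options above, assuming chunks are cut at `blockSize` boundaries:

```java
public class ChunkEstimate {
    public static void main(String[] args) {
        long blobSize = 10L * 1024 * 1024;           // hypothetical 10 MiB blob
        long maxSingleUploadSize = 8L * 1024 * 1024; // 8 MiB single-request limit
        long blockSize = 4L * 1024 * 1024;           // 4 MiB block size

        // The blob is larger than maxSingleUploadSize, so it's uploaded in chunks.
        boolean singleShot = blobSize <= maxSingleUploadSize;

        // Ceiling division: how many Put Block calls the chunked upload needs.
        long blockCount = (blobSize + blockSize - 1) / blockSize;

        System.out.println(singleShot); // false
        System.out.println(blockCount); // 3
    }
}
```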
+
+### Performance considerations for uploads
+
+During an upload, the Storage client libraries split a given upload stream into multiple subuploads based on the configuration options defined by `ParallelTransferOptions`. Each subupload has its own dedicated call to the REST operation. For a `BlobClient` object, this operation is [Put Block](/rest/api/storageservices/put-block). The Storage client library manages these REST operations in parallel (depending on transfer options) to complete the full upload.
+
+> [!NOTE]
+> Block blobs have a maximum block count of 50,000 blocks. The maximum size of your block blob, then, is 50,000 times `blockSize`.
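The arithmetic behind this note is simple to check. With the 4 MiB block size from the earlier example, the ceiling works out as follows (a sketch, not SDK code):

```java
public class MaxBlobSize {
    public static void main(String[] args) {
        long maxBlockCount = 50_000L;      // block blob limit
        long blockSize = 4L * 1024 * 1024; // 4 MiB, from the earlier example

        // Largest blob this block size can produce: 50,000 blocks x 4 MiB each.
        long maxBlobSizeBytes = maxBlockCount * blockSize;
        long maxBlobSizeGiB = maxBlobSizeBytes / (1024L * 1024 * 1024);

        System.out.println(maxBlobSizeGiB); // 195 (GiB, truncated)
    }
}
```

A larger `blockSize` raises this ceiling proportionally, which is one reason to increase block size for very large blobs.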
+
+#### Buffering during uploads
+
+The Storage REST layer doesn't support picking up a REST upload operation where you left off; individual transfers are either completed or lost. To ensure resiliency for stream uploads, the Storage client libraries buffer data for each individual REST call before starting the upload. In addition to network speed limitations, this buffering behavior is a reason to consider a smaller value for `blockSize`, even when uploading in sequence. Decreasing the value of `blockSize` decreases the maximum amount of data that is buffered on each request and each retry of a failed request. If you're experiencing frequent timeouts during data transfers of a certain size, reducing the value of `blockSize` reduces the buffering time, and may result in better performance.
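As a rough illustration of this tradeoff, if we assume the library buffers up to one full block per in-flight request (a simplification, not an SDK guarantee), the worst-case buffer footprint scales with `blockSize` times `maxConcurrency`:

```java
public class BufferEstimate {
    public static void main(String[] args) {
        long blockSize = 4L * 1024 * 1024; // 4 MiB, from the earlier example
        int maxConcurrency = 2;

        // Simplified worst case: one fully buffered block per concurrent request.
        long approxBufferBytes = blockSize * maxConcurrency;

        System.out.println(approxBufferBytes / (1024 * 1024)); // 8 (MiB)
    }
}
```

Under this assumption, halving `blockSize` halves the peak buffer usage, which also shrinks the amount of data re-sent on each retry.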
+
+## Performance tuning for downloads
+
+Properly tuning data transfer options is key to reliable performance for downloads. Storage transfers are partitioned into several subtransfers based on the values defined in `ParallelTransferOptions`.
+
+### Set transfer options for downloads
+
+The following values can be tuned for downloads based on the needs of your app:
+
+- `blockSize`: The maximum block size to transfer for each request. You can set this value by using the [setBlockSizeLong](/java/api/com.azure.storage.common.paralleltransferoptions#com-azure-storage-common-paralleltransferoptions-setblocksizelong(java-lang-long)) method.
+- `maxConcurrency`: The maximum number of parallel requests issued at any given time as a part of a single parallel transfer. You can set this value by using the [setMaxConcurrency](/java/api/com.azure.storage.common.paralleltransferoptions#com-azure-storage-common-paralleltransferoptions-setmaxconcurrency(java-lang-integer)) method.
+
+#### Code example
+
+Make sure you have the following `import` directive to use `ParallelTransferOptions` for a download:
+
+```java
+import com.azure.storage.common.*;
+```
+
+The following code example shows how to set values for [ParallelTransferOptions](/java/api/com.azure.storage.common.paralleltransferoptions) and include the options as part of a [BlobDownloadToFileOptions](/java/api/com.azure.storage.blob.options.blobdownloadtofileoptions) instance.
+
+```java
+ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions()
+ .setBlockSizeLong((long) (4 * 1024 * 1024)) // 4 MiB block size
+ .setMaxConcurrency(2);
+
+BlobDownloadToFileOptions options = new BlobDownloadToFileOptions("<localFilePath>");
+options.setParallelTransferOptions(parallelTransferOptions);
+
+blobClient.downloadToFileWithResponse(options, null, null);
+```
+
+### Performance considerations for downloads
+
+During a download, the Storage client libraries split a given download request into multiple subdownloads based on the configuration options defined by `ParallelTransferOptions`. Each subdownload has its own dedicated call to the REST operation. Depending on transfer options, the client libraries manage these REST operations in parallel to complete the full download.
+
+## Next steps
+
+- To understand more about factors that can influence performance for Azure Storage operations, see [Latency in Blob storage](storage-blobs-latency.md).
+- To see a list of design considerations to optimize performance for apps using Blob storage, see [Performance and scalability checklist for Blob storage](storage-performance-checklist.md).
storage Storage Blobs Tune Upload Download Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download-python.md
Title: Performance tuning for uploads and downloads with Azure Storage client library for Python - Azure Storage
+ Title: Performance tuning for uploads and downloads with Azure Storage client library for Python
+ description: Learn how to tune your uploads and downloads for better performance with Azure Storage client library for Python.
ms.devlang: python
-# Performance tuning for uploads and downloads with the Azure Storage client library for Python
+# Performance tuning for uploads and downloads with Python
When an application transfers data using the Azure Storage client library for Python, there are several factors that can affect speed, memory usage, and even the success or failure of the request. To maximize performance and reliability for data transfers, it's important to be proactive in configuring client library transfer options based on the environment your app runs in.
storage Storage Blobs Tune Upload Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download.md
Title: Performance tuning for uploads and downloads with Azure Storage client library for .NET - Azure Storage
+ Title: Performance tuning for uploads and downloads with Azure Storage client library for .NET
+ description: Learn how to tune your uploads and downloads for better performance with Azure Storage client library for .NET.
ms.devlang: csharp
-# Performance tuning for uploads and downloads with the Azure Storage client library for .NET
+# Performance tuning for uploads and downloads with .NET
When an application transfers data using the Azure Storage client library for .NET, there are several factors that can affect speed, memory usage, and even the success or failure of the request. To maximize performance and reliability for data transfers, it's important to be proactive in configuring client library transfer options based on the environment your app runs in.
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
The following table describes whether a feature is supported in a standard gener
| [Last access time tracking for lifecycle management](lifecycle-management-overview.md#move-data-based-on-last-accessed-time) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; | | [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Logging in Azure Monitor](./monitor-blob-storage.md) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; |
+| [Logging in Azure Monitor](./monitor-blob-storage.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Object replication for block blobs](object-replication-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | | [Point-in-time restore for block blobs](point-in-time-restore-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
The following table describes whether a feature is supported in a premium block
| [Last access time tracking for lifecycle management](lifecycle-management-overview.md#move-data-based-on-last-accessed-time) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; | | [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
-| [Logging in Azure Monitor](./monitor-blob-storage.md) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; |
+| [Logging in Azure Monitor](./monitor-blob-storage.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; | | [Object replication for block blobs](object-replication-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | | [Point-in-time restore for block blobs](point-in-time-restore-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
storage Redundancy Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-migration.md
Previously updated : 09/20/2023 Last updated : 09/21/2023
Converting your storage account to zone-redundancy (ZRS, GZRS or RA-GZRS) is not
### Failover and failback
-After an account failover to the secondary region, it's possible to initiate a failback from the new primary back to the new secondary with PowerShell or Azure CLI (version 2.30.0 or later). For more information, see [use caution when failing back to the original primary](storage-disaster-recovery-guidance.md#use-caution-when-failing-back-to-the-original-primary).
+After an account failover to the secondary region, it's possible to initiate a failback from the new primary back to the new secondary with PowerShell or Azure CLI (version 2.30.0 or later). For more information, see [How customer-managed storage account failover works](storage-failover-customer-managed-unplanned.md).
-If you performed an [account failover](storage-disaster-recovery-guidance.md) for your GRS or RA-GRS account, the account is locally redundant (LRS) in the new primary region after the failover. Conversion to ZRS or GZRS for an LRS account resulting from a failover is not supported. This is true even in the case of so-called failback operations. For example, if you perform an account failover from RA-GRS to LRS in the secondary region, and then configure it again as RA-GRS, it will be LRS in the new secondary region (the original primary). If you then perform another account failover to failback to the original primary region, it will be LRS again in the original primary. In this case, you can't perform a conversion to ZRS, GZRS or RA-GZRS in the primary region. Instead, you'll need to perform a manual migration to add zone-redundancy.
+If you performed an account failover for your GRS or RA-GRS account, the account is locally redundant (LRS) in the new primary region after the failover. Conversion to ZRS or GZRS for an LRS account resulting from a failover is not supported. This is true even in the case of so-called failback operations. For example, if you perform an account failover from RA-GRS to LRS in the secondary region, and then configure it again as RA-GRS, it will be LRS in the new secondary region (the original primary). If you then perform another account failover to failback to the original primary region, it will be LRS again in the original primary. In this case, you can't perform a conversion to ZRS, GZRS or RA-GZRS in the primary region. Instead, you'll need to perform a manual migration to add zone-redundancy.
## Downtime requirements
storage Storage Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-disaster-recovery-guidance.md
Title: Disaster recovery and storage account failover
+ Title: Azure storage disaster recovery planning and failover
-description: Azure Storage supports account failover for geo-redundant storage accounts. With account failover, you can initiate the failover process for your storage account if the primary endpoint becomes unavailable.
+description: Azure Storage supports account failover for geo-redundant storage accounts. Create a disaster recovery plan for your storage accounts if the endpoints in the primary region become unavailable.
Previously updated : 07/31/2023 Last updated : 09/22/2023
-# Disaster recovery and storage account failover
+# Azure storage disaster recovery planning and failover
-Microsoft strives to ensure that Azure services are always available. However, unplanned service outages may occur. If your application requires resiliency, Microsoft recommends using geo-redundant storage, so that your data is copied to a second region. Additionally, customers should have a disaster recovery plan in place for handling a regional service outage. An important part of a disaster recovery plan is preparing to fail over to the secondary endpoint in the event that the primary endpoint becomes unavailable.
+Microsoft strives to ensure that Azure services are always available. However, unplanned service outages may occur. Key components of a good disaster recovery plan include strategies for:
-Azure Storage supports account failover for geo-redundant storage accounts. With account failover, you can initiate the failover process for your storage account if the primary endpoint becomes unavailable. The failover updates the secondary endpoint to become the primary endpoint for your storage account. Once the failover is complete, clients can begin writing to the new primary endpoint.
+- [Data protection](../blobs/data-protection-overview.md)
+- [Backup and restore](../../backup/index.yml)
+- [Data redundancy](storage-redundancy.md)
+- [Failover](#plan-for-storage-account-failover)
+- [Designing applications for high availability](#design-for-high-availability)
-This article describes the concepts and process involved with an account failover and discusses how to prepare your storage account for recovery with the least amount of customer impact. To learn how to initiate an account failover in the Azure portal or PowerShell, see [Initiate an account failover](storage-initiate-account-failover.md).
+This article focuses on failover for globally redundant storage accounts (GRS, GZRS, and RA-GZRS), and how to design your applications to be highly available if there's an outage and subsequent failover.
## Choose the right redundancy option
-Azure Storage maintains multiple copies of your storage account to ensure durability and high availability. Which redundancy option you choose for your account depends on the degree of resiliency you need. For protection against regional outages, configure your account for geo-redundant storage, with or without the option of read access from the secondary region:
+Azure Storage maintains multiple copies of your storage account to ensure durability and high availability. Which redundancy option you choose for your account depends on the degree of resiliency you need for your applications.
-**Geo-redundant storage (GRS) or geo-zone-redundant storage (GZRS)** copies your data asynchronously in two geographic regions that are at least hundreds of miles apart. If the primary region suffers an outage, then the secondary region serves as a redundant source for your data. You can initiate a failover to transform the secondary endpoint into the primary endpoint.
+With locally redundant storage (LRS), three copies of your storage account are automatically stored and replicated within a single datacenter. With zone-redundant storage (ZRS), a copy is stored and replicated in each of three separate availability zones within the same region. For more information about availability zones, see [Azure availability zones](../../availability-zones/az-overview.md).
-**Read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS)** provides geo-redundant storage with the additional benefit of read access to the secondary endpoint. If an outage occurs in the primary endpoint, applications configured for read access to the secondary and designed for high availability can continue to read from the secondary endpoint. Microsoft recommends RA-GZRS for maximum availability and durability for your applications.
+Recovery of a single copy of a storage account occurs automatically with LRS and ZRS.
-For more information about redundancy in Azure Storage, see [Azure Storage redundancy](storage-redundancy.md).
-
-> [!WARNING]
-> Geo-redundant storage carries a risk of data loss. Data is copied to the secondary region asynchronously, meaning there is a delay between when data written to the primary region is written to the secondary region. In the event of an outage, write operations to the primary endpoint that have not yet been copied to the secondary endpoint will be lost.
+### Globally redundant storage and failover
-## Design for high availability
-
-It's important to design your application for high availability from the start. Refer to these Azure resources for guidance in designing your application and planning for disaster recovery:
+With globally redundant storage (GRS, GZRS, and RA-GZRS), Azure copies your data asynchronously to a secondary geographic region at least hundreds of miles away. This allows you to recover your data if there's an outage in the primary region. A feature that distinguishes globally redundant storage from LRS and ZRS is the ability to fail over to the secondary region if there's an outage in the primary region. The process of failing over updates the DNS entries for your storage account service endpoints such that the endpoints for the secondary region become the new primary endpoints for your storage account. Once the failover is complete, clients can begin writing to the new primary endpoints.
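The endpoint update described above can be illustrated with a toy model (hypothetical names and values, not how Azure DNS is actually implemented): the public endpoint name stays the same, but it resolves to the former secondary region after the failover.

```python
# Toy model of the DNS update during failover: the storage endpoint's
# public name is unchanged, but after failover it resolves to the former
# secondary region, which becomes the new primary.
dns = {"mystorageacct.blob.core.windows.net": "primary-region"}

def fail_over(dns_table, endpoint):
    """Repoint the endpoint at the secondary region. Clients keep using
    the same URL before and after the failover."""
    dns_table[endpoint] = "secondary-region"

fail_over(dns, "mystorageacct.blob.core.windows.net")
```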
-- [Designing resilient applications for Azure](/azure/architecture/framework/resiliency/app-design): An overview of the key concepts for architecting highly available applications in Azure.
-- [Resiliency checklist](/azure/architecture/checklist/resiliency-per-service): A checklist for verifying that your application implements the best design practices for high availability.
-- [Use geo-redundancy to design highly available applications](geo-redundant-design.md): Design guidance for building applications to take advantage of geo-redundant storage.
-- [Tutorial: Build a highly available application with Blob storage](../blobs/storage-create-geo-redundant-storage.md): A tutorial that shows how to build a highly available application that automatically switches between endpoints as failures and recoveries are simulated.
+RA-GRS and RA-GZRS redundancy configurations provide geo-redundant storage with the added benefit of read access to the secondary endpoint if there is an outage in the primary region. If an outage occurs in the primary endpoint, applications configured for read access to the secondary region and designed for high availability can continue to read from the secondary endpoint. Microsoft recommends RA-GZRS for maximum availability and durability of your storage accounts.
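A minimal sketch of that read-fallback pattern, assuming a hypothetical account name and fetch helper; the secondary read endpoint appends `-secondary` to the account name:

```python
# Read-fallback sketch for an RA-GRS/RA-GZRS account. The account name,
# blob path, and fetch helper are hypothetical; the "-secondary" suffix
# follows the documented secondary-endpoint naming convention.
PRIMARY = "https://mystorageacct.blob.core.windows.net"
SECONDARY = "https://mystorageacct-secondary.blob.core.windows.net"

def read_blob(path, fetch):
    """Try the primary endpoint first; if the primary region is down,
    fall back to the read-only secondary endpoint."""
    try:
        return fetch(f"{PRIMARY}/{path}")
    except ConnectionError:
        # Primary region outage: RA-GRS/RA-GZRS still allows reads here.
        return fetch(f"{SECONDARY}/{path}")
```

Applications designed this way can keep serving reads during a primary-region outage, before any failover is initiated.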
-Additionally, keep in mind these best practices for maintaining high availability for your Azure Storage data:
-
-- **Disks:** Use [Azure Backup](https://azure.microsoft.com/services/backup/) to back up the VM disks used by your Azure virtual machines. Also consider using [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) to protect your VMs in the event of a regional disaster.
-- **Block blobs:** Turn on [soft delete](../blobs/soft-delete-blob-overview.md) to protect against object-level deletions and overwrites, or copy block blobs to another storage account in a different region using [AzCopy](./storage-use-azcopy-v10.md), [Azure PowerShell](/powershell/module/az.storage/), or the [Azure Data Movement library](storage-use-data-movement-library.md).
-- **Files:** Use [Azure Backup](../../backup/azure-file-share-backup-overview.md) to back up your file shares. Also enable [soft delete](../files/storage-files-prevent-file-share-deletion.md) to protect against accidental file share deletions. For geo-redundancy when GRS is not available, use [AzCopy](./storage-use-azcopy-v10.md) or [Azure PowerShell](/powershell/module/az.storage/) to copy your files to another storage account in a different region.
-- **Tables:** Use [AzCopy](./storage-use-azcopy-v10.md) to export table data to another storage account in a different region.
-
-## Track outages
-
-Customers may subscribe to the [Azure Service Health Dashboard](https://azure.microsoft.com/status/) to track the health and status of Azure Storage and other Azure services.
+For more information about redundancy in Azure Storage, see [Azure Storage redundancy](storage-redundancy.md).
-Microsoft also recommends that you design your application to prepare for the possibility of write failures. Your application should expose write failures in a way that alerts you to the possibility of an outage in the primary region.
+## Plan for storage account failover
-## Understand the account failover process
+Azure Storage accounts support two types of failover:
-Customer-managed account failover enables you to fail your entire storage account over to the secondary region if the primary becomes unavailable for any reason. When you force a failover to the secondary region, clients can begin writing data to the secondary endpoint after the failover is complete. The failover typically takes about an hour.
+- [**Customer-managed failover**](#customer-managed-failover) - Customers can manage storage account failover if there's an unexpected service outage.
+- [**Microsoft-managed failover**](#microsoft-managed-failover) - Potentially initiated by Microsoft only in the case of a severe disaster in the primary region. <sup>1,2</sup>
-### How an account failover works
+<sup>1</sup> Microsoft-managed failover can't be initiated for individual storage accounts, subscriptions, or tenants. For more details, see [Microsoft-managed failover](#microsoft-managed-failover). <br/>
+<sup>2</sup> Your disaster recovery plan should be based on customer-managed failover. **Do not** rely on Microsoft-managed failover, which would only be used in extreme circumstances. <br/>
-Under normal circumstances, a client writes data to an Azure Storage account in the primary region, and that data is copied asynchronously to the secondary region. The following image shows the scenario when the primary region is available:
+Each type of failover has a unique set of use cases, corresponding expectations for data loss, and support for accounts with a hierarchical namespace enabled (Azure Data Lake Storage Gen2). The following table summarizes those aspects of each type of failover:
-![Clients write data to the storage account in the primary region](media/storage-disaster-recovery-guidance/primary-available.png)
+| Type | Failover Scope | Use case | Expected data loss | HNS supported |
+|--|--|--|--|--|
+| Customer-managed | Storage account | The storage service endpoints for the primary region become unavailable, but the secondary region is available. <br></br> You received an Azure Advisory in which Microsoft advises you to perform a failover operation of storage accounts potentially affected by an outage. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes *(In preview)*](#azure-data-lake-storage-gen2) |
+| Microsoft-managed | Entire region, datacenter, or scale unit | The primary region becomes completely unavailable due to a significant disaster, but the secondary region is available. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes](#azure-data-lake-storage-gen2) |
-If the primary endpoint becomes unavailable for any reason, the client is no longer able to write to the storage account. The following image shows the scenario where the primary has become unavailable, but no recovery has happened yet:
+### Customer-managed failover
-![The primary is unavailable, so clients cannot write data](media/storage-disaster-recovery-guidance/primary-unavailable-before-failover.png)
+If the data endpoints for the storage services in your storage account become unavailable in the primary region, you can fail over to the secondary region. After the failover is complete, the secondary region becomes the new primary and users can proceed to access data in the new primary region.
-The customer initiates the account failover to the secondary endpoint. The failover process updates the DNS entry provided by Azure Storage so that the secondary endpoint becomes the new primary endpoint for your storage account, as shown in the following image:
+To fully understand the impact that customer-managed account failover would have on your users and applications, it is helpful to know what happens during every step of the failover and failback process. For details about how the process works, see [How customer-managed storage account failover works](storage-failover-customer-managed-unplanned.md).
-![Customer initiates account failover to secondary endpoint](media/storage-disaster-recovery-guidance/failover-to-secondary.png)
+### Microsoft-managed failover
-Write access is restored for geo-redundant accounts once the DNS entry has been updated and requests are being directed to the new primary endpoint. Existing storage service endpoints for blobs, tables, queues, and files remain the same after the failover.
+In extreme circumstances where the original primary region is deemed unrecoverable within a reasonable amount of time due to a major disaster, Microsoft **may** initiate a regional failover. In this case, no action on your part is required. Until the Microsoft-managed failover has completed, you won't have write access to your storage account. Your applications can read from the secondary region if your storage account is configured for RA-GRS or RA-GZRS.
> [!IMPORTANT]
-> After the failover is complete, the storage account is configured to be locally redundant in the new primary endpoint. To resume replication to the new secondary, configure the account for geo-redundancy again.
+> Your disaster recovery plan should be based on customer-managed failover. **Do not** rely on Microsoft-managed failover, which might only be used in extreme circumstances.
>
-> Keep in mind that converting a locally redundant storage account to use geo-redundancy incurs both cost and time. For more information, see [Important implications of account failover](storage-initiate-account-failover.md#important-implications-of-account-failover).
+> A Microsoft-managed failover would be initiated for an entire physical unit, such as a region, datacenter, or scale unit. It can't be initiated for individual storage accounts, subscriptions, or tenants. For the ability to selectively fail over your individual storage accounts, use [customer-managed account failover](#customer-managed-failover).
-### Anticipate data loss
+### Anticipate data loss and inconsistencies
> [!CAUTION]
-> An account failover usually involves some data loss. It's important to understand the implications of account failover before initiating one.
+> Storage account failover usually involves some data loss, and potentially file and data inconsistencies. In your disaster recovery plan, it's important to consider the impact that an account failover would have on your data before initiating one.
+
+Because data is written asynchronously from the primary region to the secondary region, there's always a delay before a write to the primary region is copied to the secondary. If the primary region becomes unavailable, the most recent writes may not yet have been copied to the secondary.
+
+When a failover occurs, all data in the primary region is lost as the secondary region becomes the new primary. All data already copied to the secondary is maintained when the failover happens. However, any data written to the primary that hasn't also been copied to the secondary region is lost permanently.
-Because data is written asynchronously from the primary region to the secondary region, there is always a delay before a write to the primary region is copied to the secondary region. If the primary region becomes unavailable, the most recent writes may not yet have been copied to the secondary region.
+The new primary region is configured to be locally redundant (LRS) after the failover.
-When you force a failover, all data in the primary region is lost as the secondary region becomes the new primary region. The new primary region is configured to be locally redundant after the failover.
+You also might experience file or data inconsistencies if your storage accounts have one or more of the following enabled:
-All data already copied to the secondary is maintained when the failover happens. However, any data written to the primary that has not also been copied to the secondary is lost permanently.
+- [Hierarchical namespace (Azure Data Lake Storage Gen2)](#file-consistency-for-azure-data-lake-storage-gen2)
+- [Change feed](#change-feed-and-blob-data-inconsistencies)
+- [Point-in-time restore for block blobs](#point-in-time-restore-inconsistencies)
#### Last sync time
-The **Last Sync Time** property indicates the most recent time that data from the primary region is guaranteed to have been written to the secondary region. For accounts that have a hierarchical namespace, the same **Last Sync Time** property also applies to the metadata managed by the hierarchical namespace, including ACLs. All data and metadata written prior to the last sync time is available on the secondary, while data and metadata written after the last sync time may not have been written to the secondary, and may be lost. Use this property in the event of an outage to estimate the amount of data loss you may incur by initiating an account failover.
+The **Last Sync Time** property indicates the most recent time that data from the primary region is guaranteed to have been written to the secondary region. For accounts that have a hierarchical namespace, the same **Last Sync Time** property also applies to the metadata managed by the hierarchical namespace, including ACLs. All data and metadata written prior to the last sync time is available on the secondary, while data and metadata written after the last sync time may not have been written to the secondary, and may be lost. Use this property if there's an outage to estimate the amount of data loss you may incur by initiating an account failover.
-As a best practice, design your application so that you can use the last sync time to evaluate expected data loss. For example, if you are logging all write operations, then you can compare the time of your last write operations to the last sync time to determine which writes have not been synced to the secondary.
+As a best practice, design your application so that you can use the last sync time to evaluate expected data loss. For example, if you're logging all write operations, then you can compare the time of your last write operations to the last sync time to determine which writes haven't been synced to the secondary.
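As a sketch of that practice (the log structure and timestamps below are illustrative, not a real Azure Storage log format), you can compare logged write times against the **Last Sync Time** to list the writes at risk:

```python
from datetime import datetime, timezone

def writes_at_risk(write_log, last_sync_time):
    """Writes logged after Last Sync Time may not have reached the
    secondary region yet and could be lost if you initiate a failover."""
    return [entry for entry in write_log if entry["time"] > last_sync_time]

# Illustrative log: b.txt was written after the last sync, so it's at risk.
last_sync = datetime(2023, 9, 22, 12, 0, tzinfo=timezone.utc)
write_log = [
    {"blob": "a.txt", "time": datetime(2023, 9, 22, 11, 58, tzinfo=timezone.utc)},
    {"blob": "b.txt", "time": datetime(2023, 9, 22, 12, 3, tzinfo=timezone.utc)},
]
```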
For more information about checking the **Last Sync Time** property, see [Check the Last Sync Time property for a storage account](last-sync-time-get.md).

#### File consistency for Azure Data Lake Storage Gen2
-Replication for storage accounts with a hierarchical namespace enabled (Azure Data Lake Storage Gen2) occurs at the file level. This means that if an outage in the primary region occurs, it is possible that only some of the files in a container or directory might have successfully replicated to the secondary region. Consistency for all files in a container or directory is not guaranteed. Take this into account when creating your disaster recovery plan.
+Replication for storage accounts with a [hierarchical namespace enabled (Azure Data Lake Storage Gen2)](../blobs/data-lake-storage-introduction.md) occurs at the file level. This means if an outage in the primary region occurs, it is possible that only some of the files in a container or directory might have successfully replicated to the secondary region. Consistency for all files in a container or directory after a storage account failover is not guaranteed.
-### Use caution when failing back to the original primary
+#### Change feed and blob data inconsistencies
-After you fail over from the primary to the secondary region, your storage account is configured to be locally redundant in the new primary region. You can then configure the account in the new primary region for geo-redundancy. When the account is configured for geo-redundancy after a failover, the new primary region immediately begins copying data to the new secondary region, which was the primary before the original failover. However, it may take some time before existing data in the new primary is fully copied to the new secondary.
+Storage account failover of geo-redundant storage accounts with [change feed](../blobs/storage-blob-change-feed.md) enabled may result in inconsistencies between the change feed logs and the blob data and/or metadata. Such inconsistencies can result from the asynchronous nature of both updates to the change logs and the replication of blob data from the primary to the secondary region. The only situation in which inconsistencies would not be expected is when all of the current log records have been successfully flushed to the log files, and all of the storage data has been successfully replicated from the primary to the secondary region.
-After the storage account is reconfigured for geo-redundancy, it's possible to initiate a failback from the new primary to the new secondary. In this case, the original primary region prior to the failover becomes the primary region again, and is configured to be either locally redundant or zone-redundant, depending on whether the original primary configuration was GRS/RA-GRS or GZRS/RA-GZRS. All data in the post-failover primary region (the original secondary) is lost during the failback. If most of the data in the storage account has not been copied to the new secondary before you fail back, you could suffer a major data loss.
+For information about how change feed works, see [How the change feed works](../blobs/storage-blob-change-feed.md#how-the-change-feed-works).
+
+Keep in mind that other storage account features require the change feed to be enabled, such as [operational backup of Azure Blob Storage](../../backup/blob-backup-support-matrix.md#limitations), [Object replication](../blobs/object-replication-overview.md), and [Point-in-time restore for block blobs](../blobs/point-in-time-restore-overview.md).
-To avoid a major data loss, check the value of the **Last Sync Time** property before failing back. Compare the last sync time to the last times that data was written to the new primary to evaluate expected data loss.
+#### Point-in-time restore inconsistencies
+
+Customer-managed failover is supported for general-purpose v2 standard tier storage accounts that include block blobs. However, performing a customer-managed failover on a storage account resets the earliest possible restore point for the account. Data for [Point-in-time restore for block blobs](../blobs/point-in-time-restore-overview.md) is only consistent up to the failover completion time. As a result, you can only restore block blobs to a point in time no earlier than the failover completion time. You can check the failover completion time on the **Redundancy** tab of your storage account in the Azure portal.
+
+For example, suppose you have set the retention period to 30 days. If more than 30 days have elapsed since the failover, then you can restore to any point within that 30 days. However, if fewer than 30 days have elapsed since the failover, then you can't restore to a point prior to the failover, regardless of the retention period. For example, if it's been 10 days since the failover, then the earliest possible restore point is 10 days in the past, not 30 days in the past.
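The earliest-restore-point rule in the example above can be sketched as follows (the dates are illustrative):

```python
from datetime import datetime, timedelta, timezone

def earliest_restore_point(now, failover_completed, retention_days):
    """Point-in-time restore can't cross a failover: the earliest restore
    point is the later of (now - retention period) and the failover
    completion time."""
    return max(now - timedelta(days=retention_days), failover_completed)

now = datetime(2023, 9, 22, tzinfo=timezone.utc)
# With a 30-day retention period and a failover that completed 10 days ago,
# the earliest possible restore point is the failover completion time.
```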
-After a failback operation, you can configure the new primary region to be geo-redundant again. If the original primary was configured for LRS, you can configure it to be GRS or RA-GRS. If the original primary was configured for ZRS, you can configure it to be GZRS or RA-GZRS. For additional options, see [Change how a storage account is replicated](redundancy-migration.md).
+### The time and cost of failing over
-## Initiate an account failover
+The time it takes for failover to complete after being initiated can vary, although it typically takes less than one hour.
-You can initiate an account failover from the Azure portal, PowerShell, Azure CLI, or the Azure Storage resource provider API. For more information on how to initiate a failover, see [Initiate an account failover](storage-initiate-account-failover.md).
+A storage account loses its geo-redundancy after a customer-managed failover (and failback). Your storage account is automatically converted to locally redundant storage (LRS) in the new primary region during a failover, and the storage account in the original primary region is deleted.
+You can re-enable geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS) for the account, but converting from LRS to GRS or RA-GRS incurs an additional cost. The cost is due to the network egress charges to re-replicate the data to the new secondary region. Also, all archived blobs need to be rehydrated to an online tier before the account can be configured for geo-redundancy, which also incurs a cost. For more information about pricing, see:
-## Supported storage account types
+- [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/)
+- [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/)
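As a rough way to reason about the egress portion of that cost (the per-GB rate below is a placeholder for illustration, not a published price; see the pricing pages above for real numbers):

```python
def regeo_egress_cost(data_gb, rate_per_gb):
    """Rough egress cost of re-replicating an LRS account to GRS/RA-GRS:
    every GB crosses regions once. The rate is a placeholder, not a
    published price."""
    return data_gb * rate_per_gb

# e.g. 10 TiB (10240 GB) at a hypothetical $0.02/GB is roughly $205.
```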
-All geo-redundant offerings support [Microsoft-managed failover](#microsoft-managed-failover) in the event of a disaster in the primary region. In addition, some account types support customer-managed account failover, as shown in the following table:
+After you re-enable GRS for your storage account, Microsoft begins replicating the data in your account to the new secondary region. Replication time depends on many factors, which include:
+
+- The number and size of the objects in the storage account. Replicating many small objects can take longer than replicating fewer and larger objects.
+- The available resources for background replication, such as CPU, memory, disk, and WAN capacity. Live traffic takes priority over geo replication.
+- If your storage account contains blobs, the number of snapshots per blob.
+- If your storage account contains tables, the [data partitioning strategy](/rest/api/storageservices/designing-a-scalable-partitioning-strategy-for-azure-table-storage). The replication process can't scale beyond the number of partition keys that you use.
+
+### Supported storage account types
+
+All geo-redundant offerings support Microsoft-managed failover. In addition, some account types support customer-managed account failover, as shown in the following table:
| Type of failover | GRS/RA-GRS | GZRS/RA-GZRS |
|--|--|--|
| **Customer-managed failover** | General-purpose v2 accounts</br> General-purpose v1 accounts</br> Legacy Blob Storage accounts | General-purpose v2 accounts |
| **Microsoft-managed failover** | All account types | General-purpose v2 accounts |
+#### Classic storage accounts
+ > [!IMPORTANT]
+> Customer-managed account failover is only supported for storage accounts deployed using the Azure Resource Manager (ARM) deployment model. The Azure Service Manager (ASM) deployment model, also known as *classic*, isn't supported. To make classic storage accounts eligible for customer-managed account failover, they must first be [migrated to the ARM model](classic-account-migration-overview.md). Your storage account must be accessible to perform the upgrade, so the primary region can't currently be in a failed state.
>
-> **Classic storage accounts**
->
-> Customer-managed account failover is only supported for storage accounts deployed using the Azure Resource Manager (ARM) deployment model. The Azure Service Manager (ASM) deployment model, also known as *classic*, is not supported. To make classic storage accounts eligible for customer-managed account failover, they must first be [migrated to the ARM model](classic-account-migration-overview.md). Your storage account must be accessible to perform the upgrade, so the primary region cannot currently be in a failed state.
->
-> In the event of a disaster that affects the primary region, Microsoft will manage the failover for classic storage accounts. For more information, see [Microsoft-managed failover](storage-disaster-recovery-guidance.md#microsoft-managed-failover).
->
-> **Azure Data Lake Storage Gen2**
->
+> If there's a disaster that affects the primary region, Microsoft will manage the failover for classic storage accounts. For more information, see [Microsoft-managed failover](#microsoft-managed-failover).
+
+#### Azure Data Lake Storage Gen2
+
+> [!IMPORTANT]
> Customer-managed account failover for accounts that have a hierarchical namespace (Azure Data Lake Storage Gen2) is currently in PREVIEW and only supported in the following regions:
>
> - (Asia Pacific) Central India
> To opt in to the preview, see [Set up preview features in Azure subscription](../../azure-resource-manager/management/preview-features.md) and specify `AllowHNSAccountFailover` as the feature name.
>
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> If there's a significant disaster that affects the primary region, Microsoft will manage the failover for accounts with a hierarchical namespace. For more information, see [Microsoft-managed failover](#microsoft-managed-failover).
+
+### Unsupported features and services
+
+The following features and services aren't supported for account failover:
+
+- Azure File Sync doesn't support storage account failover. Storage accounts containing Azure file shares being used as cloud endpoints in Azure File Sync shouldn't be failed over. Doing so will cause sync to stop working and may also cause unexpected data loss in the case of newly tiered files.
+- A storage account containing premium block blobs can't be failed over. Storage accounts that support premium block blobs don't currently support geo-redundancy.
+- Customer-managed failover isn't supported for either the source or the destination account in an [object replication policy](../blobs/object-replication-overview.md).
+- To fail over an account with SSH File Transfer Protocol (SFTP) enabled, you must first [disable SFTP for the account](../blobs/secure-file-transfer-protocol-support-how-to.md#disable-sftp-support). If you want to resume using SFTP after the failover is complete, [re-enable it](../blobs/secure-file-transfer-protocol-support-how-to.md#enable-sftp-support).
+- Network File System (NFS) 3.0 (NFSv3) isn't supported for storage account failover. You can't create a storage account configured for global-redundancy with NFSv3 enabled.
-## Additional considerations
+### Failover is not for account migration
-Review the additional considerations described in this section to understand how your applications and services may be affected when you force a failover.
+Storage account failover shouldn't be used as part of your data migration strategy. Failover is a temporary solution to a service outage. For information about how to migrate your storage accounts, see [Azure Storage migration overview](storage-migration-overview.md).
-### Storage account containing archived blobs
+### Storage accounts containing archived blobs
-Storage accounts containing archived blobs support account failover. After failover is complete, all archived blobs need to be rehydrated to an online tier before the account can be configured for geo-redundancy.
+Storage accounts containing archived blobs support account failover. However, after a [customer-managed failover](#customer-managed-failover) is complete, all archived blobs need to be rehydrated to an online tier before the account can be configured for geo-redundancy.
### Storage resource provider
Because the Azure Storage resource provider does not fail over, the [Location](/
### Azure virtual machines
-Azure virtual machines (VMs) do not fail over as part of an account failover. If the primary region becomes unavailable, and you fail over to the secondary region, then you will need to recreate any VMs after the failover. Also, there is a potential data loss associated with the account failover. Microsoft recommends the following [high availability](../../virtual-machines/availability.md) and [disaster recovery](../../virtual-machines/backup-recovery.md) guidance specific to virtual machines in Azure.
+Azure virtual machines (VMs) don't fail over as part of an account failover. If the primary region becomes unavailable, and you fail over to the secondary region, then you will need to recreate any VMs after the failover. Also, there's a potential data loss associated with the account failover. Microsoft recommends following the [high availability](../../virtual-machines/availability.md) and [disaster recovery](../../virtual-machines/backup-recovery.md) guidance specific to virtual machines in Azure.
+
+Keep in mind that any data stored in a temporary disk is lost when the VM is shut down.
### Azure unmanaged disks

As a best practice, Microsoft recommends converting unmanaged disks to managed disks. However, if you need to fail over an account that contains unmanaged disks attached to Azure VMs, you will need to shut down the VM before initiating the failover.
-Unmanaged disks are stored as page blobs in Azure Storage. When a VM is running in Azure, any unmanaged disks attached to the VM are leased. An account failover cannot proceed when there is a lease on a blob. To perform the failover, follow these steps:
+Unmanaged disks are stored as page blobs in Azure Storage. When a VM is running in Azure, any unmanaged disks attached to the VM are leased. An account failover can't proceed when there's a lease on a blob. To perform the failover, follow these steps:
1. Before you begin, note the names of any unmanaged disks, their logical unit numbers (LUN), and the VM to which they are attached. Doing so will make it easier to reattach the disks after the failover.
2. Shut down the VM.
3. Delete the VM, but retain the VHD files for the unmanaged disks. Note the time at which you deleted the VM.
-4. Wait until the **Last Sync Time** has updated, and is later than the time at which you deleted the VM. This step is important, because if the secondary endpoint has not been fully updated with the VHD files when the failover occurs, then the VM may not function properly in the new primary region.
+4. Wait until the **Last Sync Time** has updated, and is later than the time at which you deleted the VM. This step is important, because if the secondary endpoint hasn't been fully updated with the VHD files when the failover occurs, then the VM may not function properly in the new primary region.
5. Initiate the account failover.
6. Wait until the account failover is complete and the secondary region has become the new primary region.
7. Create a VM in the new primary region and reattach the VHDs.
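The check in step 4 can be sketched as a comparison of two timestamps. This is a minimal Python sketch, assuming you've already retrieved the account's **Last Sync Time** (for example, from the geo-replication stats shown in the portal) and noted when you deleted the VM:

```python
from datetime import datetime, timezone

def safe_to_fail_over(last_sync_time: datetime, vm_deleted_at: datetime) -> bool:
    """Step 4: only proceed once Last Sync Time is later than the VM deletion
    time, so the secondary already holds the final state of the VHD files."""
    return last_sync_time > vm_deleted_at

deleted = datetime(2023, 9, 22, 10, 0, tzinfo=timezone.utc)
print(safe_to_fail_over(datetime(2023, 9, 22, 9, 45, tzinfo=timezone.utc), deleted))   # False: still syncing
print(safe_to_fail_over(datetime(2023, 9, 22, 10, 20, tzinfo=timezone.utc), deleted))  # True: safe to proceed
```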
Unmanaged disks are stored as page blobs in Azure Storage. When a VM is running
Keep in mind that any data stored in a temporary disk is lost when the VM is shut down.
-### Change feed and blob data inconsistencies
-
-Storage account failover of geo-redundant storage accounts with [the change feed](../blobs/storage-blob-change-feed.md) enabled may result in inconsistencies between the change feed logs and the blob data and/or metadata. Such inconsistencies can result from the asynchronous nature of both updates to the change logs and the replication of blob data from the primary to the secondary region. The only situation in which inconsistencies would not be expected is when all of the current log records have been successfully flushed to the log files and all of the storage data has been successfully replicated from the primary to the secondary region.
-
-For more information about how to determine potential data loss during storage account failover due to asynchronous replication, see [Anticipate data loss](#anticipate-data-loss). For information about how change feed works see [How the change feed works](../blobs/storage-blob-change-feed.md#how-the-change-feed-works).
-
-Keep in mind that other storage account features require the change feed to be enabled such as [operational backup of Azure Blob Storage](../../backup/blob-backup-support-matrix.md#limitations), [Object replication](../blobs/object-replication-overview.md) and [Point-in-time restore for block blobs](../blobs/point-in-time-restore-overview.md).
-
-### Point-in-time restore
-
-Customer-managed failover is supported for general-purpose v2 standard tier storage accounts that include block blobs. However, performing a customer-managed failover on a storage account resets the earliest possible restore point for the account. Data for [Point-in-time restore for block blobs](../blobs/point-in-time-restore-overview.md) is only consistent up to the failover completion time. As a result, you can only restore block blobs to a point in time no earlier than the failover completion time. You can check the failover completion time in the redundancy tab of your storage account in the Azure Portal.
-
-For example, suppose you have set the retention period to 30 days. If more than 30 days have elapsed since the failover, then you can restore to any point within that 30 days. However, if fewer than 30 days have elapsed since the failover, then you can't restore to a point prior to the failover, regardless of the retention period. For example, if it's been 10 days since the failover, then the earliest possible restore point is 10 days in the past, not 30 days in the past.
+### Copying data as an alternative to failover
-## Unsupported features and services
+If your storage account is configured for read access to the secondary region, then you can design your application to read from the secondary endpoint. If you prefer not to fail over if there's an outage in the primary region, you can use tools such as [AzCopy](./storage-use-azcopy-v10.md) or [Azure PowerShell](/powershell/module/az.storage/) to copy data from your storage account in the secondary region to another storage account in an unaffected region. You can then point your applications to that storage account for both read and write availability.
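The copy path described above can be sketched as follows. This is a hedged Python sketch with hypothetical account and container names; it only builds the secondary read endpoint (the documented `-secondary` suffix for RA-GRS/RA-GZRS accounts) and the shape of an AzCopy invocation, since real authorization (SAS tokens or `azcopy login`) would be required to run it:

```python
# Hypothetical names for illustration only.
source_account = "contosodata"
target_account = "contosobackup"
container = "app-data"

# RA-GRS/RA-GZRS accounts expose a read-only secondary endpoint with a
# "-secondary" suffix on the account name.
secondary_endpoint = f"https://{source_account}-secondary.blob.core.windows.net/{container}"
target_endpoint = f"https://{target_account}.blob.core.windows.net/{container}"

# An AzCopy invocation of roughly this shape copies the container between
# accounts; in practice each URL would carry a SAS token.
azcopy_command = f"azcopy copy '{secondary_endpoint}' '{target_endpoint}' --recursive"
print(azcopy_command)
```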
-The following features and services are not supported for account failover:
+## Design for high availability
-- Azure File Sync does not support storage account failover. Storage accounts containing Azure file shares being used as cloud endpoints in Azure File Sync should not be failed over. Doing so will cause sync to stop working and may also cause unexpected data loss in the case of newly tiered files.
-- A storage account containing premium block blobs cannot be failed over. Storage accounts that support premium block blobs do not currently support geo-redundancy.
-- A storage account containing any [WORM immutability policy](../blobs/immutable-storage-overview.md) enabled containers cannot be failed over. Unlocked/locked time-based retention or legal hold policies prevent failover in order to maintain compliance.
-- Customer-managed failover isn't supported for either the source or the destination account in an [object replication policy](../blobs/object-replication-overview.md).
-- To failover an account with SSH File Transfer Protocol (SFTP) enabled, you must first [disable SFTP for the account](../blobs/secure-file-transfer-protocol-support-how-to.md#disable-sftp-support). If you want to resume using SFTP after the failover is complete, simply [re-enable it](../blobs/secure-file-transfer-protocol-support-how-to.md#enable-sftp-support).
-- Network File System (NFS) 3.0 (NFSv3) is not supported for storage account failover. You cannot create a storage account configured for global-redundancy with NFSv3 enabled.
+It's important to design your application for high availability from the start. Refer to these Azure resources for guidance in designing your application and planning for disaster recovery:
-## Copying data as an alternative to failover
+- [Designing resilient applications for Azure](/azure/architecture/framework/resiliency/app-design): An overview of the key concepts for architecting highly available applications in Azure.
+- [Resiliency checklist](/azure/architecture/checklist/resiliency-per-service): A checklist for verifying that your application implements the best design practices for high availability.
+- [Use geo-redundancy to design highly available applications](geo-redundant-design.md): Design guidance for building applications to take advantage of geo-redundant storage.
+- [Tutorial: Build a highly available application with Blob storage](../blobs/storage-create-geo-redundant-storage.md): A tutorial that shows how to build a highly available application that automatically switches between endpoints as failures and recoveries are simulated.
-If your storage account is configured for read access to the secondary, then you can design your application to read from the secondary endpoint. If you prefer not to fail over in the event of an outage in the primary region, you can use tools such as [AzCopy](./storage-use-azcopy-v10.md), [Azure PowerShell](/powershell/module/az.storage/), or the [Azure Data Movement library](../common/storage-use-data-movement-library.md) to copy data from your storage account in the secondary region to another storage account in an unaffected region. You can then point your applications to that storage account for both read and write availability.
+Keep in mind these best practices for maintaining high availability for your Azure Storage data:
-> [!CAUTION]
-> An account failover should not be used as part of your data migration strategy.
+- **Disks:** Use [Azure Backup](https://azure.microsoft.com/services/backup/) to back up the VM disks used by your Azure virtual machines. Also consider using [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) to protect your VMs if there's a regional disaster.
+- **Block blobs:** Turn on [soft delete](../blobs/soft-delete-blob-overview.md) to protect against object-level deletions and overwrites, or copy block blobs to another storage account in a different region using [AzCopy](./storage-use-azcopy-v10.md), [Azure PowerShell](/powershell/module/az.storage/), or the [Azure Data Movement library](storage-use-data-movement-library.md).
+- **Files:** Use [Azure Backup](../../backup/azure-file-share-backup-overview.md) to back up your file shares. Also enable [soft delete](../files/storage-files-prevent-file-share-deletion.md) to protect against accidental file share deletions. For geo-redundancy when GRS isn't available, use [AzCopy](./storage-use-azcopy-v10.md) or [Azure PowerShell](/powershell/module/az.storage/) to copy your files to another storage account in a different region.
+- **Tables:** use [AzCopy](./storage-use-azcopy-v10.md) to export table data to another storage account in a different region.
-## Microsoft-managed failover
+## Track outages
-In extreme circumstances where the original primary region is deemed unrecoverable within a reasonable amount of time due to a major disaster, Microsoft may initiate a regional failover. In this case, no action on your part is required. Until the Microsoft-managed failover has completed, you won't have write access to your storage account. Your applications can read from the secondary region if your storage account is configured for RA-GRS or RA-GZRS.
+Customers may subscribe to the [Azure Service Health Dashboard](https://azure.microsoft.com/status/) to track the health and status of Azure Storage and other Azure services.
-> [!NOTE]
-> A Microsoft-managed failover would be initiated for an entire physical unit, such as a region, datacenter or scale unit. It cannot be initiated for individual storage accounts, subscriptions, or tenants. For the ability to selectively failover your individual storage accounts, use customer-managed account failover described previously in this article.
+Microsoft also recommends that you design your application to prepare for the possibility of write failures. Your application should expose write failures in a way that alerts you to the possibility of an outage in the primary region.
## See also

- [Use geo-redundancy to design highly available applications](geo-redundant-design.md)
-- [Initiate an account failover](storage-initiate-account-failover.md)
-- [Check the Last Sync Time property for a storage account](last-sync-time-get.md)
- [Tutorial: Build a highly available application with Blob storage](../blobs/storage-create-geo-redundant-storage.md)
+- [Azure Storage redundancy](storage-redundancy.md)
+- [How customer-managed storage account failover works](storage-failover-customer-managed-unplanned.md)
storage Storage Failover Customer Managed Unplanned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-failover-customer-managed-unplanned.md
+
+ Title: How Azure Storage account customer-managed failover works
+
+description: Azure Storage supports account failover for geo-redundant storage accounts to recover from a service endpoint outage. Learn what happens to your storage account and storage services during a customer-managed failover to the secondary region if the primary endpoint becomes unavailable.
+ Last updated : 09/22/2023
+# How customer-managed storage account failover works
+
+Customer-managed failover of Azure Storage accounts enables you to fail over your entire geo-redundant storage account to the secondary region if the storage service endpoints for the primary region become unavailable. During failover, the original secondary region becomes the new primary and all storage service endpoints for blobs, tables, queues and files are redirected to the new primary region. After the storage service endpoint outage has been resolved, you can perform another failover operation to *fail back* to the original primary region.
+
+This article describes what happens during a customer-managed storage account failover and failback at every stage of the process.
++
+## Redundancy management during failover and failback
+
+> [!TIP]
+> To understand the various redundancy states during the storage account failover and failback process in detail, see [Azure Storage redundancy](storage-redundancy.md) for definitions of each.
+
+When a storage account is configured for GRS or RA-GRS redundancy, data is replicated three times locally within both the primary and secondary regions (LRS). When a storage account is configured for GZRS or RA-GZRS replication, data is zone-redundant within the primary region (ZRS) and replicated three times locally within the secondary region (LRS). If the account is configured for read access (RA), you will be able to read data from the secondary region as long as the storage service endpoints to that region are available.
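The healthy-state layout described in the paragraph above can be summarized as a small mapping. A minimal Python sketch, restating only what the text says about local replication in each region and read access (RA) to the secondary:

```python
# Per-region local replication scheme while geo-redundancy is healthy, and
# whether the secondary endpoints are readable (the RA variants).
LAYOUT = {
    "GRS":     {"primary": "LRS", "secondary": "LRS", "read_secondary": False},
    "RA-GRS":  {"primary": "LRS", "secondary": "LRS", "read_secondary": True},
    "GZRS":    {"primary": "ZRS", "secondary": "LRS", "read_secondary": False},
    "RA-GZRS": {"primary": "ZRS", "secondary": "LRS", "read_secondary": True},
}

# The secondary region is always locally redundant, regardless of configuration.
assert all(cfg["secondary"] == "LRS" for cfg in LAYOUT.values())
```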
+
+During the customer-managed failover process, the DNS entries for the storage service endpoints are changed such that those for the secondary region become the new primary endpoints for your storage account. After failover, the copy of your storage account in the original primary region is deleted and your storage account continues to be replicated three times locally within the original secondary region (the new primary). At that point, your storage account becomes locally redundant (LRS).
+
+The original and current redundancy configurations are stored in the properties of the storage account, allowing you to eventually return to your original configuration when you fail back.
+
+To regain geo-redundancy after a failover, you will need to reconfigure your account as GRS. (GZRS is not an option post-failover since the new primary will be LRS after the failover). After the account is reconfigured for geo-redundancy, Azure immediately begins copying data from the new primary region to the new secondary. If you configure your storage account for read access (RA) to the secondary region, that access will be available but it may take some time for replication from the primary to make the secondary current.
+
+> [!WARNING]
+> After your account is reconfigured for geo-redundancy, it may take a significant amount of time before existing data in the new primary region is fully copied to the new secondary.
+>
+> **To avoid a major data loss**, check the value of the [**Last Sync Time**](last-sync-time-get.md) property before failing back. Compare the last sync time to the last times that data was written to the new primary to evaluate potential data loss.
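The comparison this warning describes can be sketched in a few lines. A minimal Python sketch: writes newer than **Last Sync Time** haven't replicated to the secondary, so the gap between the two timestamps is the window of writes at risk:

```python
from datetime import datetime, timedelta, timezone

def at_risk_interval(last_sync_time: datetime, last_write_time: datetime) -> timedelta:
    """Writes newer than Last Sync Time haven't reached the secondary and
    would be lost if you failed over (or back) now."""
    if last_write_time <= last_sync_time:
        return timedelta(0)  # everything written has replicated
    return last_write_time - last_sync_time

last_sync = datetime(2023, 9, 22, 11, 0, tzinfo=timezone.utc)
last_write = datetime(2023, 9, 22, 11, 12, tzinfo=timezone.utc)
print(at_risk_interval(last_sync, last_write))  # 0:12:00 of writes at risk
```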
+
+The failback process is essentially the same as the failover process, except that Azure restores the replication configuration to its original state before the failover (the replication configuration, not the data). So, if your storage account was originally configured as GZRS, the primary region after failback becomes ZRS.
+
+After failback, you can configure your storage account to be geo-redundant again. If the original primary region was configured for LRS, you can configure it to be GRS or RA-GRS. If the original primary was configured as ZRS, you can configure it to be GZRS or RA-GZRS. For additional options, see [Change how a storage account is replicated](redundancy-migration.md).
+
+## How to initiate a failover
+
+To learn how to initiate a failover, see [Initiate a storage account failover](storage-initiate-account-failover.md).
+
+> [!CAUTION]
+> Storage account failover usually involves some data loss, and potentially file and data inconsistencies. It's important to understand the impact that an account failover would have on your data before initiating one.
+>
+> For details about potential data loss and inconsistencies, see [Anticipate data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies).
+
+## The failover and failback process
+
+This section summarizes the failover process for a customer-managed failover.
+
+### Failover transition summary
+
+After a customer-managed failover:
+
+- The secondary region becomes the new primary
+- The copy of the data in the original primary region is deleted
+- The storage account is converted to LRS
+- Geo-redundancy is lost
+
+This table summarizes the resulting redundancy configuration at every stage of a customer-managed failover and failback:
+
+| Original <br> configuration | After <br> failover | After re-enabling <br> geo redundancy | After <br> failback | After re-enabling <br> geo redundancy |
+|---|---|---|---|---|
+| GRS | LRS | GRS <sup>1</sup> | LRS |GRS <sup>1</sup> |
+| GZRS | LRS | GRS <sup>1</sup> | ZRS |GZRS <sup>1</sup> |
+
+<sup>1</sup> Geo-redundancy is lost during a customer-managed failover and must be manually reconfigured.<br>
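The table above can be expressed as a small state machine. A minimal Python sketch that walks one full failover/failback cycle (re-enabling geo-redundancy is always a manual step, and the new primary is LRS after failover, so only GRS is available at that point):

```python
FAILOVER  = {"GRS": "LRS", "GZRS": "LRS"}   # after failover the account is always LRS
RE_ENABLE = {"LRS": "GRS", "ZRS": "GZRS"}   # geo option available for each local state
FAILBACK  = {"GRS": "LRS", "GZRS": "ZRS"}   # original config -> local state after failback

def states(original: str) -> list[str]:
    """Redundancy states through failover, re-enable, failback, re-enable."""
    after_failover = FAILOVER[original]
    after_geo1 = RE_ENABLE[after_failover]   # new primary is LRS, so this is GRS
    after_failback = FAILBACK[original]
    after_geo2 = RE_ENABLE[after_failback]
    return [after_failover, after_geo1, after_failback, after_geo2]

print(states("GRS"))   # ['LRS', 'GRS', 'LRS', 'GRS']
print(states("GZRS"))  # ['LRS', 'GRS', 'ZRS', 'GZRS']
```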
+
+### Failover transition details
+
+The following diagrams show what happens during customer-managed failover and failback of a storage account that is configured for geo-redundancy. The transition details for GZRS and RA-GZRS are slightly different from GRS and RA-GRS.
+
+## [GRS/RA-GRS](#tab/grs-ra-grs)
+
+### Normal operation (GRS/RA-GRS)
+
+Under normal circumstances, a client writes data to a storage account in the primary region via storage service endpoints (1). The data is then copied asynchronously from the primary region to the secondary region (2). The following image shows the normal state of a storage account configured as GRS when the primary endpoints are available:
++
+### The storage service endpoints become unavailable in the primary region (GRS/RA-GRS)
+
+If the primary storage service endpoints become unavailable for any reason (1), the client is no longer able to write to the storage account. Depending on the underlying cause of the outage, replication to the secondary region may no longer be functioning (2), so [some data loss should be expected](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies). The following image shows the scenario where the primary endpoints have become unavailable, but no recovery has occurred yet:
++
+### The failover process (GRS/RA-GRS)
+
+To restore write access to your data, you can [initiate a failover](storage-initiate-account-failover.md). The storage service endpoint URIs for blobs, tables, queues, and files remain the same, but their DNS entries are changed to point to the secondary region (1) as shown in this image:
++
+Customer-managed failover typically takes about an hour.
+
+After the failover is complete, the original secondary becomes the new primary (1) and the copy of the storage account in the original primary is deleted (2). The storage account is configured as LRS in the new primary region and is no longer geo-redundant. Users can resume writing data to the storage account (3) as shown in this image:
++
+To resume replication to a new secondary region, reconfigure the account for geo-redundancy.
+
+> [!IMPORTANT]
+> Keep in mind that converting a locally redundant storage account to use geo-redundancy incurs both cost and time. For more information, see [The time and cost of failing over](storage-disaster-recovery-guidance.md#the-time-and-cost-of-failing-over).
+
+After re-configuring the account as GRS, Azure begins copying your data asynchronously to the new secondary region (1) as shown in this image:
++
+Read access to the new secondary region will not become available again until the issue causing the original outage has been resolved.
+
+### The failback process (GRS/RA-GRS)
+
+> [!WARNING]
+> After your account is reconfigured for geo-redundancy, it may take a significant amount of time before the data in the new primary region is fully copied to the new secondary.
+>
+> **To avoid a major data loss**, check the value of the [**Last Sync Time**](last-sync-time-get.md) property before failing back. Compare the last sync time to the last times that data was written to the new primary to evaluate potential data loss.
+
+Once the issue causing the original outage has been resolved, you can initiate another failover to fail back to the original primary region, resulting in the following:
+
+1. The current primary region becomes read only.
1. During a customer-initiated failback, Azure does not wait for your data to finish replicating to the secondary region. Therefore, it is important to check the value of the [**Last Sync Time**](last-sync-time-get.md) property before failing back.
+1. The DNS entries for the storage service endpoints are changed such that those for the secondary region become the new primary endpoints for your storage account.
++
+After the failback is complete, the original primary region becomes the current one again (1) and the copy of the storage account in the original secondary is deleted (2). The storage account is configured as locally redundant in the primary region and is no longer geo-redundant. Users can resume writing data to the storage account (3) as shown in this image:
++
+To resume replication to the original secondary region, configure the account for geo-redundancy again.
+
+> [!IMPORTANT]
+> Keep in mind that converting a locally redundant storage account to use geo-redundancy incurs both cost and time. For more information, see [The time and cost of failing over](storage-disaster-recovery-guidance.md#the-time-and-cost-of-failing-over).
+
+After re-configuring the account as GRS, replication to the original secondary region resumes as shown in this image:
++
+## [GZRS/RA-GZRS](#tab/gzrs-ra-gzrs)
+
+### Normal operation (GZRS/RA-GZRS)
+
+Under normal circumstances, a client writes data to a storage account in the primary region via storage service endpoints (1). The data is then copied asynchronously from the primary region to the secondary region (2). The following image shows the normal state of a storage account configured as GZRS when the primary endpoints are available:
++
+### The storage service endpoints become unavailable in the primary region (GZRS/RA-GZRS)
+
+If the primary storage service endpoints become unavailable for any reason (1), the client is no longer able to write to the storage account. Depending on the underlying cause of the outage, replication to the secondary region may no longer be functioning (2), [so some data loss should be expected](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies). The following image shows the scenario where the primary endpoints have become unavailable, but no recovery has occurred yet:
++
+### The failover process (GZRS/RA-GZRS)
+
+To restore write access to your data, you can [initiate a failover](storage-initiate-account-failover.md). The storage service endpoint URIs for blobs, tables, queues, and files remain the same, but their DNS entries are changed to point to the secondary region (1) as shown in this image:
++
+The failover typically takes about an hour.
+
+After the failover is complete, the original secondary becomes the new primary (1) and the copy of the storage account in the original primary is deleted (2). The storage account is configured as LRS in the new primary region and is no longer geo-redundant. Users can resume writing data to the storage account (3) as shown in this image:
++
+To resume replication to a new secondary region, reconfigure the account for geo-redundancy.
+
+Since the account was originally configured as GZRS, reconfiguring geo-redundancy after failover causes the original ZRS redundancy within the new secondary region (the original primary) to be retained. However, the redundancy configuration within the current primary always determines the effective geo-redundancy of a storage account. Since the current primary in this case is LRS, the effective geo-redundancy at this point is GRS, not GZRS.
+
+> [!IMPORTANT]
+> Keep in mind that converting a locally redundant storage account to use geo-redundancy incurs both cost and time. For more information, see [The time and cost of failing over](storage-disaster-recovery-guidance.md#the-time-and-cost-of-failing-over).
+
+After re-configuring the account as GRS, Azure begins copying your data asynchronously to the new secondary region (1) as shown in this image:
++
+Read access to the new secondary region will not become available again until the issue causing the original outage has been resolved.
+
+### The failback process (GZRS/RA-GZRS)
+
+> [!WARNING]
+> After your account is reconfigured for geo-redundancy, it may take a significant amount of time before the data in the new primary region is fully copied to the new secondary.
+>
+> **To avoid a major data loss**, check the value of the [**Last Sync Time**](last-sync-time-get.md) property before failing back. Compare the last sync time to the last times that data was written to the new primary to evaluate potential data loss.
+
+Once the issue causing the original outage has been resolved, you can initiate another failover to fail back to the original primary region, resulting in the following:
+
+1. The current primary region becomes read only.
1. During a customer-initiated failback, Azure does not wait for your data to finish replicating to the secondary region. Therefore, it is important to check the value of the [**Last Sync Time**](last-sync-time-get.md) property before failing back.
+1. The DNS entries for the storage service endpoints are changed such that those for the secondary region become the new primary endpoints for your storage account.
++
+After the failback is complete, the original primary region becomes the current one again (1) and the copy of the storage account in the original secondary is deleted (2). The storage account is configured as ZRS in the primary region and is no longer geo-redundant. Users can resume writing data to the storage account (3) as shown in this image:
++
+To resume replication to the original secondary region, configure the account for geo-redundancy again.
+
+> [!IMPORTANT]
+> Keep in mind that converting a ZRS storage account to use geo-redundancy incurs both cost and time. For more information, see [The time and cost of failing over](storage-disaster-recovery-guidance.md#the-time-and-cost-of-failing-over).
+
+After re-configuring the account as GZRS, replication to the original secondary region resumes as shown in this image:
++++
+## See also
+
+- [Disaster recovery planning and failover](storage-disaster-recovery-guidance.md)
+- [Azure Storage redundancy](storage-redundancy.md)
+- [Initiate an account failover](storage-initiate-account-failover.md)
storage Storage Initiate Account Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-initiate-account-failover.md
Previously updated : 07/21/2023 Last updated : 09/15/2023
If the primary endpoint for your geo-redundant storage account becomes unavailab
This article shows how to initiate an account failover for your storage account using the Azure portal, PowerShell, or Azure CLI. To learn more about account failover, see [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md).

> [!WARNING]
-> An account failover typically results in some data loss. To understand the implications of an account failover and to prepare for data loss, review [Understand the account failover process](storage-disaster-recovery-guidance.md#understand-the-account-failover-process).
+> An account failover typically results in some data loss. To understand the implications of an account failover and to prepare for data loss, review [Data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies).
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
Before you can perform an account failover on your storage account, make sure th
## Initiate the failover
+You can initiate an account failover from the Azure portal, PowerShell, or the Azure CLI.
++ ## [Portal](#tab/azure-portal) To initiate an account failover from the Azure portal, follow these steps:
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
Previously updated : 08/04/2023 Last updated : 09/06/2023
Azure Storage always stores multiple copies of your data so that it's protected
When deciding which redundancy option is best for your scenario, consider the tradeoffs between lower costs and higher availability. The factors that help determine which redundancy option you should choose include:

-- How your data is replicated in the primary region.
+- How your data is replicated within the primary region.
- Whether your data is replicated to a second region that is geographically distant to the primary region, to protect against regional disasters (geo-replication).
- Whether your application requires read access to the replicated data in the secondary region if the primary region becomes unavailable for any reason (geo-replication with read access).
For a list of regions that support geo-zone-redundant storage (GZRS), see [Azure
## Read access to data in the secondary region
-Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages. With an account configured for GRS or GZRS, data in the secondary region is not directly accessible to users or applications, unless a failover occurs. The failover process updates the DNS entry provided by Azure Storage so that the secondary endpoint becomes the new primary endpoint for your storage account. During the failover process, your data is inaccessible. After the failover is complete, you can read and write data to the new primary region. For more information about failover and disaster recovery, see [How an account failover works](storage-disaster-recovery-guidance.md#how-an-account-failover-works).
+Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages. With an account configured for GRS or GZRS, data in the secondary region is not directly accessible to users or applications, unless a failover occurs. The failover process updates the DNS entry provided by Azure Storage so that the secondary endpoint becomes the new primary endpoint for your storage account. During the failover process, your data is inaccessible. After the failover is complete, you can read and write data to the new primary region. For more information, see [How customer-managed storage account failover works](storage-failover-customer-managed-unplanned.md).
If your applications require high availability, then you can configure your storage account for read access to the secondary region. When you enable read access to the secondary region, then your data is always available to be read from the secondary, including in a situation where the primary region becomes unavailable. Read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS) configurations permit read access to the secondary region.
When read access to the secondary is enabled, your application can be read from
#### Plan for data loss
-Because data is replicated asynchronously from the primary to the secondary region, the secondary region is typically behind the primary region in terms of write operations. If a disaster were to strike the primary region, it's likely that some data would be lost. For more information about how to plan for potential data loss, see [Anticipate data loss](storage-disaster-recovery-guidance.md#anticipate-data-loss).
+Because data is replicated asynchronously from the primary to the secondary region, the secondary region is typically behind the primary region in terms of write operations. If a disaster were to strike the primary region, it's likely that some data would be lost and that files within a directory or container would not be consistent. For more information about how to plan for potential data loss, see [Data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies).
## Summary of redundancy options
For pricing information for each redundancy option, see [Azure Storage pricing](
> [!NOTE]
> Block blob storage accounts support locally redundant storage (LRS) and zone redundant storage (ZRS) in certain regions.
-### Support for customer-managed account failover
-
-All geo-redundant offerings support [Microsoft-managed failover](storage-disaster-recovery-guidance.md#microsoft-managed-failover) in the event of a disaster in the primary region. In addition, some account types support customer-managed account failover, as shown in the following table:
-
-| Type of failover | GRS/RA-GRS | GZRS/RA-GZRS |
-||||
-| **Customer-managed failover** | General-purpose v2 accounts</br> General-purpose v1 accounts</br> Legacy Blob Storage accounts | General-purpose v2 accounts |
-| **Microsoft-managed failover** | All account types | General-purpose v2 accounts |
-
-> [!IMPORTANT]
->
-> **Classic storage accounts**
->
-> Customer-managed account failover is only supported for storage accounts deployed using the Azure Resource Manager (ARM) deployment model. The Azure Service Manager (ASM) deployment model, also known as *classic*, is not supported. To make classic storage accounts eligible for customer-managed account failover, they must first be [migrated to the ARM model](classic-account-migration-overview.md). Your storage account must be accessible to perform the upgrade, so the primary region cannot currently be in a failed state.
->
-> In the event of a disaster that affects the primary region, Microsoft will manage the failover for classic storage accounts. For more information, see [Microsoft-managed failover](storage-disaster-recovery-guidance.md#microsoft-managed-failover).
->
-> **Azure Data Lake Storage Gen2**
->
-> Customer-managed account failover for accounts that have a hierarchical namespace (Azure Data Lake Storage Gen2) is currently in PREVIEW and only supported in the following regions:
->
-> - (Asia Pacific) Central India
-> - (Europe) Switzerland North
-> - (Europe) Switzerland West
-> - (North America) Canada Central
->
-> To opt in to the preview, see [Set up preview features in Azure subscription](../../azure-resource-manager/management/preview-features.md) and specify `AllowHNSAccountFailover` as the feature name.
->
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-For more information about disaster recovery and customer-managed failover, see [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md).
- ## Data integrity

Azure Storage regularly verifies the integrity of data stored using cyclic redundancy checks (CRCs). If data corruption is detected, it's repaired using redundant data. Azure Storage also calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.
storage Storage Use Azcopy Optimize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-optimize.md
This command runs a performance benchmark by uploading test data to a specified
If you prefer to run this test by downloading data, set the `mode` parameter to `download`. For detailed reference docs, see [azcopy benchmark](storage-ref-azcopy-bench.md).
-## Optimize for large numbers of small files
+## Optimize for large numbers of files
-Throughput can decrease when transferring small files, especially when transferring large numbers of them. To maximize performance, reduce the size of each job. For download and upload operations, increase concurrency, decrease log activity, and turn off features that incur high performance costs.
+Throughput can decrease when transferring large numbers of files. Each copy operation translates to one or more transactions that must be executed in the storage service. When you are transferring a large number of files, consider the number of transactions that need to be executed and any potential impact those transactions can have if other activities are occurring in the storage account at the same time.
+
+To maximize performance, you can reduce the size of each job by limiting the number of files that are copied in a single job. For download and upload operations, increase concurrency as needed, decrease log activity, and turn off features that incur high performance costs.
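The job-sizing advice above can be sketched as a simple batching step: split the file listing into capped batches and run one copy job per batch. The helper below is a hypothetical illustration, not part of AzCopy itself:

```python
from typing import Iterable, Iterator, List

def batch_files(paths: Iterable[str], max_per_job: int) -> Iterator[List[str]]:
    """Yield successive batches of at most max_per_job paths, so each
    copy job stays small. Each batch would become one azcopy invocation
    (for example, via an include-path list)."""
    batch: List[str] = []
    for p in paths:
        batch.append(p)
        if len(batch) == max_per_job:
            yield batch
            batch = []
    if batch:
        yield batch

# 10 files with a cap of 4 per job -> batches of 4, 4, and 2 files
jobs = list(batch_files([f"file{i}.txt" for i in range(10)], max_per_job=4))
```

Smaller jobs also make retries cheaper: a failed job re-plans only its own batch of files.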
#### Reduce the size of each job
storage Storage Use Azcopy S3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-s3.md
Gather your AWS access key and secret access key, and then set these environment
| Operating system | Command |
|--|--|
| **Windows** | `set AWS_ACCESS_KEY_ID=<access-key>`<br>`set AWS_SECRET_ACCESS_KEY=<secret-access-key>` |
-| **Linux** | `export AWS_ACCESS_KEY_ID=<access-key>`<br>`export AWS_SECRET_ACCESS_KEY=<secret-access-key>` |
+| **Linux** | `export AWS_ACCESS_KEY_ID=<access-key>`<br>`export AWS_SECRET_ACCESS_KEY=<secret-access-key>`|
| **macOS** | `export AWS_ACCESS_KEY_ID=<access-key>`<br>`export AWS_SECRET_ACCESS_KEY=<secret-access-key>`|
+These credentials are used to generate pre-signed URLs for copying objects.
+ ## Copy objects, directories, and buckets AzCopy uses the [Put Block From URL](/rest/api/storageservices/put-block-from-url) API, so data is copied directly between AWS S3 and storage servers. These copy operations don't use the network bandwidth of your computer.
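To sketch how the pieces fit together, the snippet below sets the AWS credential environment variables that AzCopy reads and assembles the `azcopy copy` invocation for a whole bucket. The bucket, account, and key values are placeholders, and the command is only constructed here, not executed:

```python
import os

def s3_to_azure_copy_cmd(bucket: str, container_url: str) -> list:
    """Build the azcopy invocation that copies an entire S3 bucket
    to an Azure Blob container."""
    return [
        "azcopy", "copy",
        f"https://s3.amazonaws.com/{bucket}",
        container_url,
        "--recursive",
    ]

# AzCopy reads AWS credentials from these environment variables.
os.environ["AWS_ACCESS_KEY_ID"] = "<access-key>"
os.environ["AWS_SECRET_ACCESS_KEY"] = "<secret-access-key>"

cmd = s3_to_azure_copy_cmd(
    "mybucket", "https://mystorageaccount.blob.core.windows.net/mycontainer"
)
```

Passing `cmd` to a process runner (for example `subprocess.run`) would start the server-to-server copy described above.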
storage File Sync Resource Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-resource-move.md
description: Learn how to move sync resources across resource groups, subscripti
Previously updated : 03/15/2023 Last updated : 09/21/2023
When storage accounts are moved to either a new subscription or are moved within
:::image type="content" source="media/storage-sync-resource-move/storage-sync-resource-move-afs-rp-registered-small.png" alt-text="An image showing the Azure portal, subscription management, registered resource providers." lightbox="media/storage-sync-resource-move/storage-sync-resource-move-afs-rp-registered.png"::: :::column-end::: :::column:::
- The Azure File Sync service principal must exist in your Azure AD tenant before you can authorize sync access to a storage account. </br></br> When you create a new Azure subscription today, the Azure File Sync resource provider *Microsoft.StorageSync* is automatically registered with your subscription. Resource provider registration will make a *service principal* for sync available in the Azure Active Directory tenant that governs the subscription. A service principal is similar to a user account in your Azure AD. You can use the Azure File Sync service principal to authorize access to resources via role-based access control (RBAC). The only resource sync needs access to is your storage accounts containing the file shares that are supposed to sync. *Microsoft.StorageSync* must be assigned to the built-in role **Reader and Data access** on the storage account. </br></br> This assignment is done automatically through the user context of the logged on user when you add a file share to a sync group, or in other words, you create a cloud endpoint. When a storage account moves to a new subscription or Azure AD tenant, this role assignment is lost and [must be manually reestablished](#establish-sync-access-to-a-storage-account).
+ The Azure File Sync service principal must exist in your Azure AD tenant before you can authorize sync access to a storage account. </br></br> When you create a new Azure subscription today, the Azure File Sync resource provider *Microsoft.StorageSync* is automatically registered with your subscription. Resource provider registration will make a *service principal* for sync available in the Azure Active Directory tenant that governs the subscription. A service principal is similar to a user account in your Azure AD. You can use the Azure File Sync service principal to authorize access to resources via role-based access control (RBAC). The only resources sync needs access to are your storage accounts containing the file shares that are supposed to sync. *Microsoft.StorageSync* must be assigned to the built-in role **Reader and Data access** on the storage account. </br></br> This assignment is done automatically through the user context of the logged on user when you add a file share to a sync group, or in other words, you create a cloud endpoint. When a storage account moves to a new subscription or Azure AD tenant, this role assignment is lost and [must be manually reestablished](#establish-sync-access-to-a-storage-account).
:::column-end::: :::row-end:::
Assigning a different region to a resource is different from a [region fail-over
## Region fail-over
-[Azure Files offers geo-redundancy options](../files/files-redundancy.md#geo-redundant-storage) for storage accounts. These redundancy options can pose problems for storage accounts used with Azure File Sync. The main reason is that replication between geographically distant regions isn't performed by Azure File Sync, but by a storage replication technology built-in to the storage subsystem in Azure. It can't have an understanding of application state and Azure File Sync is an application with files syncing to and from Azure file shares at any given moment. If you opt for any of these geographically dispersed storage redundancy options, you won't lose all of your data in a large-scale disaster. However, you need to [anticipate data loss](../common/storage-disaster-recovery-guidance.md#anticipate-data-loss).
+[Azure Files offers geo-redundancy options](../files/files-redundancy.md#geo-redundant-storage) for storage accounts. These redundancy options can pose problems for storage accounts used with Azure File Sync. The main reason is that replication between geographically distant regions isn't performed by Azure File Sync, but by a storage replication technology built in to the storage subsystem in Azure. It can't have an understanding of application state, and Azure File Sync is an application with files syncing to and from Azure file shares at any given moment. If you opt for any of these geographically dispersed storage redundancy options, you won't lose all of your data in a large-scale disaster. However, you need to account for potential [data loss and inconsistencies](../common/storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies).
> [!CAUTION] > Failover is never an appropriate substitute to provisioning your resources in the correct Azure region. If your resources are in the "wrong" region, you need to consider stopping sync and setting sync up again to new Azure file shares that are deployed in your desired region.
synapse-analytics System Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/system-integration.md
description: List of industry system integrators building customer solutions wit
Previously updated : 06/14/2023 Last updated : 09/21/2023
This article highlights Microsoft system integration partner companies building solutions with Azure Synapse.

## System Integration partners

| Partner | Description | Website/Product link |
-| - | -- | -- |
-| :::image type="content" source="./media/system-integration/accenture-logo.png" alt-text="The logo of Accenture."::: |**Accenture**<br>Bringing together 45,000+ dedicated professionals, the Accenture Microsoft Business Group, powered by Avanade, helps enterprises to thrive in the era of digital disruption.|[Accenture](https://www.accenture.com/us-en/services/microsoft-index)<br>|
-| :::image type="content" source="./media/system-integration/adatis-logo.png" alt-text="The logo of Adatis."::: |**Adatis**<br>Adatis offers services that specialize in advanced data analytics, from data strategy and consultancy, to world class delivery and managed services. |[Adatis](https://adatis.co.uk/)<br> |
-| :::image type="content" source="./media/system-integration/blue-granite-logo.png" alt-text="The logo of Blue Granite."::: |**Blue Granite**<br>The BlueGranite Catalyst for Analytics is an engagement approach that features their "think big, but start small" philosophy. Starting with collaborative envisioning and strategy sessions, Blue Granite work with clients to discover, create, and realize the value of new modern data and analytics solutions, using the latest technologies on the Microsoft platform.|[Blue Granite](https://www.blue-granite.com/)<br>|
-| :::image type="content" source="./media/system-integration/capax-global-logo.png" alt-text="The logo of Capax Global."::: |**Capax Global**<br>We improve your business by making better use of information you already have. Building custom solutions that align to your business goals, and setting you up for long-term success. We combine well-established patterns and practices with technology while using our team's wide range of industry and commercial software development experience. We share a passion for technology, innovation, and client satisfaction. Our pride for what we do drives the success of our projects and is fundamental to why people partner with us.|[Capax Global](https://www.capaxglobal.com/)<br>|
-| :::image type="content" source="./media/system-integration/coeo-logo.png" alt-text="The logo of Coeo."::: |**Coeo**<br>Coeo's team includes cloud consultants with deep expertise in Azure databases, and BI consultants dedicated to providing flexible and scalable analytic solutions. Coeo can help you move to a hybrid or full Azure solution.|[Coeo](https://www.coeo.com/analytics/)<br>|
-| :::image type="content" source="./media/system-integration/cognizant-logo.png" alt-text="The logo of Cognizant."::: |**Cognizant**<br>As a Microsoft strategic partner, Cognizant has the consulting skills and experience to help customers make the journey to the cloud. For each client project, Cognizant uses its strong partnership with Microsoft to maximize customer benefits from the Azure architecture.|[Cognizant](https://www.cognizant.com/about-cognizant/partners/microsoft)<br>|
-| :::image type="content" source="./media/system-integration/neal-analytics-logo.png" alt-text="The logo of Neal Analytics."::: |**Neal Analytics**<br>Neal Analytics helps companies navigate their digital transformation journey in converting data into valuable assets and a competitive advantage. With our machine learning and data engineering expertise, we use data to drive margin increases and profitable analytics projects. Comprised of consultants specializing in Data Science, Business Intelligence, Azure AI services, practical AI, Data Management, and IoT, Neal Analytics is trusted to solve unique business problems and optimize operations across industries.|[Neal Analytics](https://fractal.ai/)<br>|
-| :::image type="content" source="./media/system-integration/pragmatic-works-logo.png" alt-text="The logo of Pragmatic Works."::: |**Pragmatic Works**<br>Pragmatic Works can help you capitalize on the value of your data by empowering more users and applications on the same dataset. We kickstart, accelerate, and maintain your cloud environment with a range of solutions that fit your business needs.|[Pragmatic Works](https://www.pragmaticworks.com/)<br>|
+| --- | --- | --- |
+| :::image type="content" source="./media/system-integration/accenture-logo.png" alt-text="The logo of Accenture."::: | **Accenture**<br />Bringing together 45,000+ dedicated professionals, the Accenture Microsoft Business Group, powered by Avanade, helps enterprises to thrive in the era of digital disruption. | [Accenture](https://www.accenture.com/us-en/services/microsoft-index)<br />|
+| :::image type="content" source="./media/system-integration/adatis-logo.png" alt-text="The logo of Adatis."::: | **Adatis**<br />Adatis offers services that specialize in advanced data analytics, from data strategy and consultancy, to world class delivery and managed services. | [Adatis](https://adatis.co.uk/)<br />|
+| :::image type="content" source="./media/system-integration/blue-granite-logo.png" alt-text="The logo of Blue Granite."::: | **Blue Granite**<br />The BlueGranite Catalyst for Analytics is an engagement approach that features their "think big, but start small" philosophy. Starting with collaborative envisioning and strategy sessions, Blue Granite works with clients to discover, create, and realize the value of new modern data and analytics solutions, using the latest technologies on the Microsoft platform. | [Blue Granite](https://powerbi.microsoft.com/en-us/partners/bluegranite/)<br />|
+| :::image type="content" source="./media/system-integration/capax-global-logo.png" alt-text="The logo of Capax Global."::: | **Capax Global**<br />We improve your business by making better use of information you already have. Building custom solutions that align to your business goals, and setting you up for long-term success. We combine well-established patterns and practices with technology while using our team's wide range of industry and commercial software development experience. We share a passion for technology, innovation, and client satisfaction. Our pride for what we do drives the success of our projects and is fundamental to why people partner with us. | [Capax Global](https://www.capaxglobal.com/)<br />|
+| :::image type="content" source="./media/system-integration/coeo-logo.png" alt-text="The logo of Coeo."::: | **Coeo**<br />Coeo's team includes cloud consultants with deep expertise in Azure databases, and BI consultants dedicated to providing flexible and scalable analytic solutions. Coeo can help you move to a hybrid or full Azure solution. | [Coeo](https://www.coeo.com/analytics/)<br />|
+| :::image type="content" source="./media/system-integration/cognizant-logo.png" alt-text="The logo of Cognizant."::: | **Cognizant**<br />As a Microsoft strategic partner, Cognizant has the consulting skills and experience to help customers make the journey to the cloud. For each client project, Cognizant uses its strong partnership with Microsoft to maximize customer benefits from the Azure architecture. | [Cognizant](https://www.cognizant.com/about-cognizant/partners/microsoft)<br />|
+| :::image type="content" source="./media/system-integration/neal-analytics-logo.png" alt-text="The logo of Neal Analytics."::: | **Neal Analytics**<br />Neal Analytics helps companies navigate their digital transformation journey in converting data into valuable assets and a competitive advantage. With our machine learning and data engineering expertise, we use data to drive margin increases and profitable analytics projects. Comprised of consultants specializing in Data Science, Business Intelligence, Azure AI services, practical AI, Data Management, and IoT, Neal Analytics is trusted to solve unique business problems and optimize operations across industries. | [Neal Analytics](https://fractal.ai/)<br />|
+| :::image type="content" source="./media/system-integration/pragmatic-works-logo.png" alt-text="The logo of Pragmatic Works."::: | **Pragmatic Works**<br />Pragmatic Works can help you capitalize on the value of your data by empowering more users and applications on the same dataset. We kickstart, accelerate, and maintain your cloud environment with a range of solutions that fit your business needs. | [Pragmatic Works](https://www.pragmaticworks.com/)<br />|
## Next steps
update-center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/overview.md
> - [Automation Update Management](../automation/update-management/overview.md) relies on the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) (also called MMA agent), which is on a deprecation path and won't be supported after **August 31, 2024**. > - Update Manager is a native service in Azure and doesn't rely on the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) or the [Azure Monitor agent](../azure-monitor/agents/agents-overview.md). > - Follow [guidance](guidance-migration-automation-update-management-azure-update-manager.md) to migrate machines and schedules from Automation Update Management to Azure Update Manager.
-> - For customers using Automation Update Management, we recommend continuing to use the Log Analytics agent and *not* migrating to the Azure Monitor agent until migration guidance is provided for update management or else Automation Update Management won't work.
+> - If you are using Automation Update Management, we recommend that you continue to use the Log Analytics agent and *not* migrate to the Azure Monitor agent until machines and schedules are migrated to Azure Update Manager.
> - The Log Analytics agent won't be deprecated before all Automation Update Management customers are moved to Update Manager.
> - Update Manager doesn't store any customer data.
virtual-desktop Set Up Golden Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-golden-image.md
This article will walk you through how to use the Azure portal to create a custom image to use for your Azure Virtual Desktop session hosts. This custom image, which we'll call a "golden image," contains all apps and configuration settings you want to apply to your deployment. There are other approaches to customizing your session hosts, such as using device management tools like [Microsoft Intune](/mem/intune/fundamentals/azure-virtual-desktop-multi-session) or automating your image build using tools like [Azure Image Builder](../virtual-machines/windows/image-builder-virtual-desktop.md) with [Azure DevOps](/azure/devops/pipelines/get-started/key-pipelines-concepts?view=azure-devops&preserve-view=true). Which strategy works best depends on the complexity and size of your planned Azure Virtual Desktop environment and your current application deployment processes. ## Create an image from an Azure VM
-When creating a new VM for your golden image, make sure to choose an OS that's in the list of [supported virtual machine OS images](prerequisites.md#operating-systems-and-licenses). We recommend using a Windows 10 multi-session (with or without Microsoft 365) or Windows Server image for pooled host pools. We recommend using Windows 10 Enterprise images for personal host pools. You can use either Generation 1 or Generation 2 VMs; Gen 2 VMs support features that aren't supported for Gen 1 machines. Learn more about Generation 1 and Generation 2 VMs at [Support for generation 2 VMs on Azure](../virtual-machines/generation-2.md).
+When creating a new VM for your golden image, make sure to choose an OS that's in the list of [supported virtual machine OS images](prerequisites.md#operating-systems-and-licenses). We recommend using a Windows 10 or 11 multi-session (with or without Microsoft 365) or Windows Server image for pooled host pools. We recommend using Windows 10 or 11 Enterprise images for personal host pools. You can use either Generation 1 or Generation 2 VMs; Gen 2 VMs support features that aren't supported for Gen 1 machines. Learn more about Generation 1 and Generation 2 VMs at [Support for generation 2 VMs on Azure](../virtual-machines/generation-2.md).
> [!IMPORTANT]
> The VM used for taking the image must be deployed without the "Login with Azure AD" flag. During the deployment of session hosts in Azure Virtual Desktop, if you choose to join the VMs to Azure Active Directory, you can also sign in with Azure AD credentials.

### Take your first snapshot
Here are some extra things you should keep in mind when creating a golden image:
- Make sure to remove the VM from the domain before running sysprep.
- Delete the base VM once you've captured the image from it.
- After you've captured your image, don't use the same VM you captured again. Instead, create a new base VM from the last snapshot you created. You'll need to update and patch this new VM on a regular basis.
-- Don't create a new base VM from an existing custom image.
+- Don't create a new base VM from an existing custom image. It is better to start with a brand-new source VM.
## Next steps If you want to add a language pack to your image, see [Language packs](language-packs.md).
virtual-machines How To Enable Write Accelerator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/how-to-enable-write-accelerator.md
There are limits of Azure Premium Storage VHDs per VM that can be supported by W
| VM SKU | Number of Write Accelerator disks | Write Accelerator Disk IOPS per VM | | | | |
-| M416ms_v2, M416s_v2| 16 | 20000 |
+| M416ms_v2, M416s_8_v2, M416s_v2| 16 | 20000 |
| M208ms_v2, M208s_v2| 8 | 10000 | | M192ids_v2, M192idms_v2, M192is_v2, M192ims_v2, | 16 | 20000 | | M128ms, M128s, M128ds_v2, M128dms_v2, M128s_v2, M128ms_v2 | 16 | 20000 |
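The per-SKU limits in the table above can be captured as a lookup for pre-deployment validation. A minimal sketch (the function name is illustrative, not an Azure SDK API):

```python
# Write Accelerator limits from the table above:
# SKU -> (max Write Accelerator disks, Write Accelerator disk IOPS per VM)
WA_LIMITS = {
    "M416ms_v2": (16, 20000), "M416s_8_v2": (16, 20000), "M416s_v2": (16, 20000),
    "M208ms_v2": (8, 10000), "M208s_v2": (8, 10000),
    "M192ids_v2": (16, 20000), "M192idms_v2": (16, 20000),
    "M192is_v2": (16, 20000), "M192ims_v2": (16, 20000),
    "M128ms": (16, 20000), "M128s": (16, 20000), "M128ds_v2": (16, 20000),
    "M128dms_v2": (16, 20000), "M128s_v2": (16, 20000), "M128ms_v2": (16, 20000),
}

def can_enable_write_accelerator(sku: str, disk_count: int) -> bool:
    """Check a requested Write Accelerator disk count against the
    per-SKU cap; unknown SKUs are treated as unsupported."""
    max_disks, _iops = WA_LIMITS.get(sku, (0, 0))
    return 0 < disk_count <= max_disks
```

For example, requesting 9 Write Accelerator disks on an M208s_v2 would fail validation, since that SKU caps out at 8.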
virtual-machines Infrastructure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/infrastructure-automation.md
Previously updated : 02/25/2023 Last updated : 09/21/2023
To create and manage Azure virtual machines (VMs) in a consistent manner at scale, some form of automation is typically desired. There are many tools and solutions that allow you to automate the complete Azure infrastructure deployment and management lifecycle. This article introduces some of the infrastructure automation tools that you can use in Azure. These tools commonly fit in to one of the following approaches: - Automate the configuration of VMs
- - Tools include [Ansible](#ansible), [Chef](#chef), [Puppet](#puppet), and [Azure Resource Manager template](#azure-resource-manager-template).
+ - Tools include [Ansible](#ansible), [Chef](#chef), [Puppet](#puppet), [Bicep](#bicep), and [Azure Resource Manager template](#azure-resource-manager-template).
- Tools specific to VM customization include [cloud-init](#cloud-init) for Linux VMs, [PowerShell Desired State Configuration (DSC)](#powershell-dsc), and the [Azure Custom Script Extension](#azure-custom-script-extension) for all Azure VMs. - Automate infrastructure management
To create and manage Azure virtual machines (VMs) in a consistent manner at scal
- Automate application deployment and delivery - Examples include [Azure DevOps Services](#azure-devops-services) and [Jenkins](#jenkins).
-## Ansible
-[Ansible](https://www.ansible.com/) is an automation engine for configuration management, VM creation, or application deployment. Ansible uses an agent-less model, typically with SSH keys, to authenticate and manage target machines. Configuration tasks are defined in playbooks, with several Ansible modules available to carry out specific tasks. For more information, see [How Ansible works](https://www.ansible.com/how-ansible-works).
+## Terraform
+[Terraform](https://www.terraform.io) is an automation tool that allows you to define and create an entire Azure infrastructure with a single template format language: the HashiCorp Configuration Language (HCL). With Terraform, you define templates that automate the process to create network, storage, and VM resources for a given application solution. You can use your existing Terraform templates for other platforms with Azure to ensure consistency and simplify the infrastructure deployment without needing to convert to an Azure Resource Manager template.
Learn how to:

-- [Install and configure Ansible on Linux for use with Azure](/azure/developer/ansible/install-on-linux-vm).
-- [Create a Linux virtual machine](/azure/developer/ansible/vm-configure).
-- [Manage a Linux virtual machine](/azure/developer/ansible/vm-manage).
+- [Install and configure Terraform with Azure](/azure/developer/terraform/getting-started-cloud-shell).
+- [Create an Azure infrastructure with Terraform](/azure/developer/terraform/create-linux-virtual-machine-with-infrastructure).
-## Chef
-[Chef](https://www.chef.io/) is an automation platform that helps define how your infrastructure is configured, deployed, and managed. Some components include Chef Habitat for application lifecycle automation rather than the infrastructure, and Chef InSpec that helps automate compliance with security and policy requirements. Chef Clients are installed on target machines, with one or more central Chef Servers that store and manage the configurations. For more information, see [An Overview of Chef](https://docs.chef.io/chef_overview.html).
+## Azure Automation
+[Azure Automation](https://azure.microsoft.com/services/automation/) uses runbooks to process a set of tasks on the VMs you target. Azure Automation is used to manage existing VMs rather than to create an infrastructure. Azure Automation can run across both Linux and Windows VMs, and on-premises virtual or physical machines with a hybrid runbook worker. Runbooks can be stored in a source control repository, such as GitHub. These runbooks can then run manually or on a defined schedule.
+
+Azure Automation also provides a Desired State Configuration (DSC) service that allows you to create definitions for how a given set of VMs should be configured. DSC then ensures that the required configuration is applied and the VM stays consistent. Azure Automation DSC runs on both Windows and Linux machines.
Learn how to:

-- [Deploy Chef Automate from the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/chef-software.chef-automate?tab=Overview).
-- [Install Chef on Windows and create Azure VMs](/azure/developer/chef/windows-vm-configure).
+- [Create a PowerShell runbook](../automation/learn/powershell-runbook-managed-identity.md).
+- [Use Hybrid Runbook Worker to manage on-premises resources](../automation/automation-hybrid-runbook-worker.md).
+- [Use Azure Automation DSC](../automation/automation-dsc-getting-started.md).
-## Puppet
-[Puppet](https://www.puppet.com) is an enterprise-ready automation platform that handles the application delivery and deployment process. Agents are installed on target machines to allow Puppet Master to run manifests that define the desired configuration of the Azure infrastructure and VMs. Puppet can integrate with other solutions such as Jenkins and GitHub for an improved devops workflow. For more information, see [How Puppet works](https://puppet.com/products/how-puppet-works).
+## Azure DevOps Services
+[Azure DevOps Services](https://www.visualstudio.com/team-services/) is a suite of tools that help you share and track code, use automated builds, and create a complete continuous integration and development (CI/CD) pipeline. Azure DevOps Services integrates with Visual Studio and other editors to simplify usage. Azure DevOps Services can also create and configure Azure VMs and then deploy code to them.
+
+Learn more about:
+
+- [Azure DevOps Services](/azure/devops/user-guide/index).
+## Azure Resource Manager template
+[Azure Resource Manager](../azure-resource-manager/templates/overview.md) is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure subscription. You use management features, like access control, locks, and tags, to secure and organize your resources after deployment.
Learn how to:
-- [Deploy Puppet](https://puppet.com/docs/puppet/5.5/install_windows.html).
+- [Deploy Spot VMs using a Resource Manager template](./linux/spot-template.md).
+- [Create a Windows virtual machine from a Resource Manager template](./windows/ps-template.md).
+- [Download the template for a VM](/previous-versions/azure/virtual-machines/windows/download-template).
+- [Create an Azure Image Builder template](./linux/image-builder-json.md).
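+To illustrate the declarative model, a minimal Resource Manager template might look like the following sketch (the storage account resource and parameter names are illustrative placeholders, not taken from the linked articles):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "name": "[parameters('storageName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

A template like this can be deployed with `az deployment group create --resource-group <group> --template-file <file>`.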
+## Bicep
+[Bicep](/azure/azure-resource-manager/bicep/) is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. In a Bicep file, you define the infrastructure you want to deploy to Azure, and then use that file throughout the development lifecycle to repeatedly deploy your infrastructure. Your resources are deployed in a consistent manner.
+Get started with the [Quickstart](../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md).
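+As a sketch of the declarative syntax, the same storage account from the Resource Manager example can be expressed more concisely in Bicep (names below are illustrative, not from the quickstart):

```bicep
param location string = resourceGroup().location
param storageName string

resource stg 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: storageName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```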
## Cloud-init
[Cloud-init](https://cloudinit.readthedocs.io) is a widely used approach to customize a Linux VM as it boots for the first time. You can use cloud-init to install packages and write files, or to configure users and security. Because cloud-init is called during the initial boot process, there are no extra steps or required agents to apply your configuration. For more information on how to properly format your `#cloud-config` files, see the [cloud-init documentation site](https://cloudinit.readthedocs.io/en/latest/topics/format.html#cloud-config-data). `#cloud-config` files are text files encoded in base64.
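For example, a minimal `#cloud-config` sketch that updates packages and installs a web server at first boot could look like this (the package choice is an illustrative assumption):

```yaml
#cloud-config
package_upgrade: true
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx
```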
Learn how to:
- [Create a Windows VM with Azure PowerShell and use the Custom Script Extension](/previous-versions/azure/virtual-machines/scripts/virtual-machines-windows-powershell-sample-create-vm-iis).
-## Packer
-[Packer](https://www.packer.io) automates the build process when you create a custom VM image in Azure. You use Packer to define the OS and run post-configuration scripts that customize the VM for your specific needs. Once configured, the VM is then captured as a Managed Disk image. Packer automates the process to create the source VM, network and storage resources, run configuration scripts, and then create the VM image.
+
+## Ansible
+[Ansible](https://www.ansible.com/) is an automation engine for configuration management, VM creation, or application deployment. Ansible uses an agent-less model, typically with SSH keys, to authenticate and manage target machines. Configuration tasks are defined in playbooks, with several Ansible modules available to carry out specific tasks. For more information, see [How Ansible works](https://www.ansible.com/how-ansible-works).
Learn how to:
-- [Use Packer to create a Linux VM image in Azure](./linux/build-image-with-packer.md).
-- [Use Packer to create a Windows VM image in Azure](./windows/build-image-with-packer.md).
+- [Install and configure Ansible on Linux for use with Azure](/azure/developer/ansible/install-on-linux-vm).
+- [Create a Linux virtual machine](/azure/developer/ansible/vm-configure).
+- [Manage a Linux virtual machine](/azure/developer/ansible/vm-manage).
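+As a sketch, a playbook like the following shows the agent-less, declarative style (the host group and package are hypothetical examples):

```yaml
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running
      ansible.builtin.service:
        name: nginx
        state: started
```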
-## Terraform
-[Terraform](https://www.terraform.io) is an automation tool that allows you to define and create an entire Azure infrastructure with a single template format language - the HashiCorp Configuration Language (HCL). With Terraform, you define templates that automate the process to create network, storage, and VM resources for a given application solution. You can use your existing Terraform templates for other platforms with Azure to ensure consistency and simplify the infrastructure deployment without needing to convert to an Azure Resource Manager template.
+## Chef
+[Chef](https://www.chef.io/) is an automation platform that helps define how your infrastructure is configured, deployed, and managed. Some components include Chef Habitat for application lifecycle automation rather than the infrastructure, and Chef InSpec that helps automate compliance with security and policy requirements. Chef Clients are installed on target machines, with one or more central Chef Servers that store and manage the configurations. For more information, see [An Overview of Chef](https://docs.chef.io/chef_overview.html).
Learn how to:
-- [Install and configure Terraform with Azure](/azure/developer/terraform/getting-started-cloud-shell).
-- [Create an Azure infrastructure with Terraform](/azure/developer/terraform/create-linux-virtual-machine-with-infrastructure).
-
+- [Deploy Chef Automate from the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/chef-software.chef-automate?tab=Overview).
+- [Install Chef on Windows and create Azure VMs](/azure/developer/chef/windows-vm-configure).
-## Azure Automation
-[Azure Automation](https://azure.microsoft.com/services/automation/) uses runbooks to process a set of tasks on the VMs you target. Azure Automation is used to manage existing VMs rather than to create an infrastructure. Azure Automation can run across both Linux and Windows VMs, and on-premises virtual or physical machines with a hybrid runbook worker. Runbooks can be stored in a source control repository, such as GitHub. These runbooks can then run manually or on a defined schedule.
-Azure Automation also provides a Desired State Configuration (DSC) service that allows you to create definitions for how a given set of VMs should be configured. DSC then ensures that the required configuration is applied and the VM stays consistent. Azure Automation DSC runs on both Windows and Linux machines.
+## Puppet
+[Puppet](https://www.puppet.com) is an enterprise-ready automation platform that handles the application delivery and deployment process. Agents are installed on target machines to allow Puppet Master to run manifests that define the desired configuration of the Azure infrastructure and VMs. Puppet can integrate with other solutions such as Jenkins and GitHub for an improved devops workflow. For more information, see [How Puppet works](https://puppet.com/products/how-puppet-works).
Learn how to:
-- [Create a PowerShell runbook](../automation/learn/powershell-runbook-managed-identity.md).
-- [Use Hybrid Runbook Worker to manage on-premises resources](../automation/automation-hybrid-runbook-worker.md).
-- [Use Azure Automation DSC](../automation/automation-dsc-getting-started.md).
+- [Deploy Puppet](https://puppet.com/docs/puppet/5.5/install_windows.html).
-## Azure DevOps Services
-[Azure DevOps Services](https://www.visualstudio.com/team-services/) is a suite of tools that help you share and track code, use automated builds, and create a complete continuous integration and development (CI/CD) pipeline. Azure DevOps Services integrates with Visual Studio and other editors to simplify usage. Azure DevOps Services can also create and configure Azure VMs and then deploy code to them.
-Learn more about:
+## Packer
+[Packer](https://www.packer.io) automates the build process when you create a custom VM image in Azure. You use Packer to define the OS and run post-configuration scripts that customize the VM for your specific needs. Once configured, the VM is then captured as a Managed Disk image. Packer automates the process to create the source VM, network and storage resources, run configuration scripts, and then create the VM image.
-- [Azure DevOps Services](/azure/devops/user-guide/index).
+Learn how to:
+
+- [Use Packer to create a Linux VM image in Azure](./linux/build-image-with-packer.md).
+- [Use Packer to create a Windows VM image in Azure](./windows/build-image-with-packer.md).
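+A minimal Packer template for the `azure-arm` builder might look like the following sketch (resource names, the image reference, and the omitted authentication settings are placeholders; this isn't a complete, working template):

```json
{
  "builders": [{
    "type": "azure-arm",
    "managed_image_resource_group_name": "myResourceGroup",
    "managed_image_name": "myPackerImage",
    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "18.04-LTS",
    "location": "East US",
    "vm_size": "Standard_DS2_v2"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update",
      "sudo apt-get install -y nginx"
    ]
  }]
}
```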
## Jenkins
Learn how to:
- [Create a development infrastructure on a Linux VM in Azure with Jenkins, GitHub, and Docker](/azure/developer/jenkins/pipeline-with-github-and-docker).
-## Azure Resource Manager template
-[Azure Resource Manager](../azure-resource-manager/templates/overview.md) is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure subscription. You use management features, like access control, locks, and tags, to secure and organize your resources after deployment.
-
-Learn how to:
-- [Deploy Spot VMs using a Resource Manager template](./linux/spot-template.md).
-- [Create a Windows virtual machine from a Resource Manager template](./windows/ps-template.md).
-- [Download the template for a VM](/previous-versions/azure/virtual-machines/windows/download-template).
-- [Create an Azure Image Builder template](./linux/image-builder-json.md).

## Next steps
There are many different options to use infrastructure automation tools in Azure. You have the freedom to use the solution that best fits your needs and environment. To get started and try some of the tools built-in to Azure, see how to automate the customization of a [Linux](./linux/tutorial-automate-vm-deployment.md) or [Windows](./windows/tutorial-automate-vm-deployment.md) VM.
virtual-machines Create Upload Centos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-centos.md
This article assumes that you've already installed a CentOS (or similar derivati
* For more tips on preparing Linux for Azure, see [General Linux Installation Notes](create-upload-generic.md#general-linux-installation-notes).
* The VHDX format isn't supported in Azure, only **fixed VHD**. You can convert the disk to VHD format using Hyper-V Manager or the `Convert-VHD` cmdlet. If you're using VirtualBox, this means selecting **Fixed size**, as opposed to the default dynamically allocated, when creating the disk.
* The vfat kernel module must be enabled in the kernel.
-* When installing the Linux system it's **recommended** that you use standard partitions rather than LVM (often the default for many installations). This avoids LVM name conflicts with cloned VMs, particularly if an OS disk ever needs to be attached to another identical VM for troubleshooting. [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) or [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) may be used on data disks.
+* When installing the Linux system, we **recommend** that you use standard partitions rather than LVM (often the default for many installations). This avoids LVM name conflicts with cloned VMs, particularly if an OS disk ever needs to be attached to another identical VM for troubleshooting. [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) or [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) may be used on data disks.
* **Kernel support for mounting UDF file systems is necessary.** At first boot on Azure the provisioning configuration is passed to the Linux VM by using UDF-formatted media that is attached to the guest. The Azure Linux agent or cloud-init must mount the UDF file system to read its configuration and provision the VM.
-* Linux kernel versions below 2.6.37 don't support NUMA on Hyper-V with larger VM sizes. This issue primarily impacts older distributions using the upstream Centos 2.6.32 kernel, and was fixed in Centos 6.6 (kernel-2.6.32-504). Systems running custom kernels older than 2.6.37, or RHEL-based kernels older than 2.6.32-504 must set the boot parameter `numa=off` on the kernel command-line in grub.conf. For more information, see Red Hat [KB 436883](https://access.redhat.com/solutions/436883).
+* Linux kernel versions below 2.6.37 don't support NUMA on Hyper-V with larger VM sizes. This issue primarily impacts older distributions using the upstream Centos 2.6.32 kernel and was fixed in Centos 6.6 (kernel-2.6.32-504). Systems running custom kernels older than 2.6.37 or RHEL-based kernels older than 2.6.32-504 must set the boot parameter `numa=off` on the kernel command-line in grub.conf. For more information, see Red Hat [KB 436883](https://access.redhat.com/solutions/436883).
* Don't configure a swap partition on the OS disk.
* All VHDs on Azure must have a virtual size aligned to 1 MB. When converting from a raw disk to VHD, you must ensure that the raw disk size is a multiple of 1 MB before conversion. See [Linux Installation Notes](create-upload-generic.md#general-linux-installation-notes) for more information.

> [!NOTE]
-> **(_Cloud-init >= 21.2 removes the udf requirement._)** however without the udf module enabled the cdrom will not mount during provisioning preventing custom data from being applied. A workaround for this would be to apply custom data using user data however, unlike custom data user data isn't encrypted. https://cloudinit.readthedocs.io/en/latest/topics/format.html
+> **_Cloud-init >= 21.2 removes the udf requirement._** However, without the udf module enabled, the cdrom won't mount during provisioning, which prevents custom data from being applied. A workaround is to apply custom data by using user data instead. Unlike custom data, however, user data isn't encrypted. For more information, see https://cloudinit.readthedocs.io/en/latest/topics/format.html.
## CentOS 6.x
This article assumes that you've already installed a CentOS (or similar derivati
sudo yum clean all
```
- Unless you're creating an image for an older version of CentOS, it's recommended to update all the packages to the latest:
+ Unless you're creating an image for an older version of CentOS, we recommend updating all the packages to the latest versions:
```bash
sudo yum -y update
This article assumes that you've already installed a CentOS (or similar derivati
This will also ensure all console messages are sent to the first serial port, which can assist Azure support with debugging issues.
- In addition to the above, it's recommended to *remove* the following parameters:
+ In addition to the above, we recommend *removing* the following parameters:
```config
rhgb quiet crashkernel=auto
This article assumes that you've already installed a CentOS (or similar derivati
15. Don't create swap space on the OS disk.
- The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached to the VM after provisioning on Azure. The local resource disk is a *temporary* disk, and might be emptied when the VM is deprovisioned. After installing the Azure Linux Agent (see previous step), modify the following parameters in `/etc/waagent.conf` appropriately:
+ The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached to the VM after provisioning on Azure. The local resource disk is a *temporary* disk and might be emptied when the VM is deprovisioned. After installing the Azure Linux Agent (see previous step), modify the following parameters in `/etc/waagent.conf` appropriately:
```config
ResourceDisk.Format=y
This article assumes that you've already installed a CentOS (or similar derivati
**Changes in CentOS 7 (and similar derivatives)**
-Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however there are several important differences worth noting:
+Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6; however, there are several significant differences worth noting:
-* The NetworkManager package no longer conflicts with the Azure Linux agent. This package is installed by default and we recommend that it'sn't removed.
+* The NetworkManager package no longer conflicts with the Azure Linux agent. This package is installed by default, and we recommend that you don't remove it.
* GRUB2 is now used as the default bootloader, so the procedure for editing kernel parameters has changed (see below).
* XFS is now the default file system. The ext4 file system can still be used if desired.
* Since CentOS 8 Stream and newer no longer include `network.service` by default, you need to install it manually:
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
sudo yum clean all
```
- Unless you're creating an image for an older version of CentOS, it's recommended to update all the packages to the latest:
+ Unless you're creating an image for an older version of CentOS, we recommend updating all the packages to the latest versions:
+ ```bash
+ sudo yum -y update
+ ```
- A reboot maybe required after running this command.
+ A reboot may be required after running this command.
8. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this, open `/etc/default/grub` in a text editor and edit the `GRUB_CMDLINE_LINUX` parameter, for example:
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
GRUB_CMDLINE_LINUX="rootdelay=300 console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"
```
- This will also ensure all console messages are sent to the first serial port, which can assist Azure support with debugging issues. It also turns off the new CentOS 7 naming conventions for NICs. In addition to the above, it's recommended to *remove* the following parameters:
+ This will also ensure all console messages are sent to the first serial port, which can assist Azure support with debugging issues. It also turns off the new CentOS 7 naming conventions for NICs. In addition to the above, we recommend *removing* the following parameters:
```config
rhgb quiet crashkernel=auto
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
> [!NOTE]
> If uploading an UEFI enabled VM, the command to update grub is `grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg`. Also, the vfat kernel module must be enabled in the kernel, otherwise provisioning will fail.
>
-> Make sure the **'udf'** module is enable. Blocklisting or removing it will cause a provisioning failure. **(_Cloud-init >= 21.2 removes the udf requirement. Read top of document for more detail)**
+> Make sure the **'udf'** module is enabled. Removing or disabling it will cause a provisioning or boot failure. **(_Cloud-init >= 21.2 removes the udf requirement. Read the top of the document for more detail._)**
10. If building the image from **VMware, VirtualBox or KVM:** Ensure the Hyper-V drivers are included in the initramfs:
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
apply_network_config: False
EOF
- if [[ -f /mnt/resource/swapfile ]]; then
+ if [[ -f /mnt/swapfile ]]; then
echo Removing swapfile - RHEL uses a swapfile by default
- swapoff /mnt/resource/swapfile
- rm /mnt/resource/swapfile -f
+ swapoff /mnt/swapfile
+ rm /mnt/swapfile -f
fi
echo "Add console log file"
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
sudo sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf
```
- If you want mount, format and create swap you can either:
+ If you want to mount, format, and create swap, you can either:
* Pass this in as a cloud-init config every time you create a VM
* Use a cloud-init directive baked into the image that will do this every time the VM is created:
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
  - device: ephemeral0.2
    filesystem: swap
mounts:
- - ["ephemeral0.1", "/mnt/resource"]
+ - ["ephemeral0.1", "/mnt"]
- ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"]
EOF
```
virtual-machines Create Upload Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-generic.md
This article focuses on general guidance for running your Linux distribution on
6. Kernel support for mounting UDF file systems is necessary. At first boot on Azure the provisioning configuration is passed to the Linux VM by using UDF-formatted media that is attached to the guest. The Azure Linux agent must mount the UDF file system to read its configuration and provision the VM.
-7. Linux kernel versions earlier than 2.6.37 don't support NUMA on Hyper-V with larger VM sizes. This issue primarily impacts older distributions using the upstream Red Hat 2.6.32 kernel, and was fixed in Red Hat Enterprise Linux (RHEL) 6.6 (kernel-2.6.32-504). Systems running custom kernels older than 2.6.37, or RHEL-based kernels older than 2.6.32-504 must set the boot parameter `numa=off` on the kernel command line in grub.conf. For more information, see [Red Hat KB 436883](https://access.redhat.com/solutions/436883).
+7. Linux kernel versions earlier than 2.6.37 don't support NUMA on Hyper-V with larger VM sizes. This issue primarily impacts older distributions using the upstream Red Hat 2.6.32 kernel and was fixed in Red Hat Enterprise Linux (RHEL) 6.6 (kernel-2.6.32-504). Systems running custom kernels older than 2.6.37, or RHEL-based kernels older than 2.6.32-504 must set the boot parameter `numa=off` on the kernel command line in grub.conf. For more information, see [Red Hat KB 436883](https://access.redhat.com/solutions/436883).
8. Don't configure a swap partition on the OS disk. The Linux agent can be configured to create a swap file on the temporary resource disk, as described in the following steps.
This article focuses on general guidance for running your Linux distribution on
11. Use the most up-to-date distribution version, packages, and software.
-12. Remove users and system accounts, public keys, sensitive data, unnecessary software and application.
+12. Remove users and system accounts, public keys, sensitive data, and unnecessary software and applications.
> [!NOTE]
In this case, resize the VM using either the Hyper-V Manager console or the [Res
## Linux Kernel Requirements
-The Linux Integration Services (LIS) drivers for Hyper-V and Azure are contributed directly to the upstream Linux kernel. Many distributions that include a recent Linux kernel version (such as 3.x) have these drivers available already, or otherwise provide backported versions of these drivers with their kernels. These drivers are constantly being updated in the upstream kernel with new fixes and features, so when possible we recommend running an [endorsed distribution](endorsed-distros.md) that includes these fixes and updates.
+The Linux Integration Services (LIS) drivers for Hyper-V and Azure are contributed directly to the upstream Linux kernel. Many distributions that include a recent Linux kernel version (such as 3.x) have these drivers available already, or otherwise provide backported versions of these drivers with their kernels. These drivers are constantly being updated in the upstream kernel with new fixes and features, so when possible, we recommend running an [endorsed distribution](endorsed-distros.md) that includes these fixes and updates.
-If you're running a variant of Red Hat Enterprise Linux versions 6.0 to 6.3, then you'll need to install the [latest LIS drivers for Hyper-V](https://go.microsoft.com/fwlink/p/?LinkID=254263&clcid=0x409). Beginning with RHEL 6.4+ (and derivatives) the LIS drivers are already included with the kernel and so no additional installation packages are needed.
+If you're running a variant of Red Hat Enterprise Linux versions 6.0 to 6.3, then you'll need to install the [latest LIS drivers for Hyper-V](https://go.microsoft.com/fwlink/p/?LinkID=254263&clcid=0x409). Beginning with RHEL 6.4+ (and derivatives), the LIS drivers are already included with the kernel so no additional installation packages are needed.
-If a custom kernel is required, we recommend a recent kernel version (such as 3.8+). For distributions or vendors who maintain their own kernel, you'll need to regularly backport the LIS drivers from the upstream kernel to your custom kernel. Even if you're already running a relatively recent kernel version, we highly recommend keeping track of any upstream fixes in the LIS drivers and backport them as needed. The locations of the LIS driver source files are specified in the [MAINTAINERS](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/MAINTAINERS) file in the Linux kernel source tree:
+If a custom kernel is required, we recommend a recent kernel version (such as 3.8+). For distributions or vendors who maintain their own kernel, you'll need to regularly backport the LIS drivers from the upstream kernel to your custom kernel. Even if you're already running a relatively recent kernel version, we highly recommend keeping track of any upstream fixes in the LIS drivers and backporting them as needed. The locations of the LIS driver source files are specified in the [MAINTAINERS](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/MAINTAINERS) file in the Linux kernel source tree:
```
F: arch/x86/include/asm/mshyperv.h
F: arch/x86/include/uapi/asm/hyperv.h
The [Azure Linux Agent](../extensions/agent-linux.md) `waagent` provisions a Lin
```config
rhgb quiet crashkernel=auto
```
- Graphical and quiet boot isn't useful in a cloud environment, where we want all logs sent to the serial port. The `crashkernel` option may be left configured if needed, but note that this parameter reduces the amount of available memory in the VM by at least 128 MB, which may be problematic for smaller VM sizes.
+ Graphical and quiet boot isn't useful in a cloud environment, where we want all logs sent to the serial port. The `crashkernel` option may be left configured if needed but note that this parameter reduces the amount of available memory in the VM by at least 128 MB, which may be problematic for smaller VM sizes.
2. After you are done editing /etc/default/grub, run the following command to rebuild the grub configuration:

   ```bash
The [Azure Linux Agent](../extensions/agent-linux.md) `waagent` provisions a Lin
sudo mkinitramfs -o initrd.img-<kernel-version> <kernel-version> --with=hv_vmbus,hv_netvsc,hv_storvsc
sudo update-grub
```
-4. Ensure that the SSH server is installed, and configured to start at boot time. This configuration is usually the default.
+4. Ensure that the SSH server is installed and configured to start at boot time. This configuration is usually the default.
5. Install the Azure Linux Agent. The Azure Linux Agent is required for provisioning a Linux image on Azure. Many distributions provide the agent as an RPM or .deb package (the package is typically called WALinuxAgent or walinuxagent). The agent can also be installed manually by following the steps in the [Linux Agent Guide](../extensions/agent-linux.md).

   > [!NOTE]
- > Make sure 'udf' and 'vfat' modules are enable. `Blocklisting` or removing the udf module will cause a provisioning failure. `Blocklisting` or removing vfat module will cause both provisioning and boot failures. **(_Cloud-init >= 21.2 removes the udf requirement. Please read top of document for more detail)**
+ > Make sure the 'udf' and 'vfat' modules are enabled. Removing or disabling them will cause provisioning or boot failures. **(_Cloud-init >= 21.2 removes the udf requirement. Please read the top of the document for more detail._)**
   Install the Azure Linux Agent, cloud-init, and other necessary utilities by running the following command:
The [Azure Linux Agent](../extensions/agent-linux.md) `waagent` provisions a Lin
6. Swap: Do not create swap space on the OS disk.
- The Azure Linux Agent or Cloud-init can be used to configure swap space using the local resource disk. This resource disk is attached to the VM after provisioning on Azure. The local resource disk is a temporary disk, and might be emptied when the VM is deprovisioned. The following blocks show how to configure this swap.
+ The Azure Linux Agent or Cloud-init can be used to configure swap space using the local resource disk. This resource disk is attached to the VM after provisioning on Azure. The local resource disk is a temporary disk and might be emptied when the VM is deprovisioned. The following blocks show how to configure this swap.
Azure Linux Agent
Modify the following parameters in /etc/waagent.conf
The [Azure Linux Agent](../extensions/agent-linux.md) `waagent` provisions a Lin
8. Deprovision.

   > [!CAUTION]
- > If you are migrating a specific virtual machine and do not wish to create a generalized image, skip the deprovision step. Running the command waagent -force -deprovision+user will render the source machine unusable, this step is intended only to create a generalized image.
+ > If you are migrating a specific virtual machine and do not wish to create a generalized image, skip the deprovision step. Running the command waagent -force -deprovision+user will render the source machine unusable. This step is intended only to create a generalized image.
Run the following commands to deprovision the virtual machine.
virtual-machines Create Upload Ubuntu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-ubuntu.md
This article assumes that you've already installed an Ubuntu Linux operating sys
* Please see also [General Linux Installation Notes](create-upload-generic.md#general-linux-installation-notes) for more tips on preparing Linux for Azure.
* The VHDX format isn't supported in Azure, only **fixed VHD**. You can convert the disk to VHD format using Hyper-V Manager or the `Convert-VHD` cmdlet.
-* When installing the Linux system it's recommended that you use standard partitions rather than LVM (often the default for many installations). This will avoid LVM name conflicts with cloned VMs, particularly if an OS disk ever needs to be attached to another VM for troubleshooting. [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) or [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) may be used on data disks if preferred.
+* When installing the Linux system, it's recommended that you use standard partitions rather than LVM (often the default for many installations). This will avoid LVM name conflicts with cloned VMs, particularly if an OS disk ever needs to be attached to another VM for troubleshooting. [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) or [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) may be used on data disks if preferred.
* Don't configure a swap partition or swapfile on the OS disk. The cloud-init provisioning agent can be configured to create a swap file or a swap partition on the temporary resource disk. More information about this can be found in the steps below.
* All VHDs on Azure must have a virtual size aligned to 1 MB. When converting from a raw disk to VHD, you must ensure that the raw disk size is a multiple of 1 MB before conversion. See [Linux Installation Notes](create-upload-generic.md#general-linux-installation-notes) for more information.
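The 1 MB alignment requirement can be checked or enforced with simple integer arithmetic before converting the raw disk. A minimal sketch (the `qemu-img` commands in the comments are one common way to apply the rounded size; the file names are placeholders):

```bash
#!/usr/bin/env bash
# Round a byte count up to the next 1 MB (1,048,576-byte) boundary,
# as Azure requires for the virtual size of a VHD.
round_up_mb() {
  local MB=$((1024 * 1024))
  echo $(( ($1 + MB - 1) / MB * MB ))
}

round_up_mb 30000000    # a 30,000,000-byte raw disk must grow to 30,408,704 bytes (29 MB)

# The rounded size can then be applied before conversion, for example:
#   qemu-img resize -f raw disk.raw "$(round_up_mb "$size")"
#   qemu-img convert -f raw -o subformat=fixed,force_size -O vpc disk.raw disk.vhd
```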
This article assumes that you've already installed an Ubuntu Linux operating sys
```

> [!Note]
- > The `walinuxagent` package may remove the `NetworkManager` and `NetworkManager-gnome` packages, if they are installed.
+ > The `walinuxagent` package may remove the `NetworkManager` and `NetworkManager-gnome` packages, if they are installed.
8. Remove cloud-init default configs and leftover `netplan` artifacts that may conflict with cloud-init provisioning on Azure:
This article assumes that you've already installed an Ubuntu Linux operating sys
> The `sudo waagent -force -deprovision+user` command generalizes the image by attempting to clean the system and make it suitable for re-provisioning. The `+user` option deletes the last provisioned user account and associated data. > [!WARNING]
- > Deprovisioning using the command above does not guarantee that the image is cleared of all sensitive information and is suitable for redistribution.
+ > Deprovisioning using the command above doesn't guarantee the image is cleared of all sensitive information and is suitable for redistribution.
```bash
sudo waagent -force -deprovision+user
virtual-machines Oracle Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/oracle-create-upload-vhd.md
This article assumes that you've already installed an Oracle Linux operating sys
* Oracle's UEK2 isn't supported on Hyper-V and Azure as it doesn't include the required drivers.
* The VHDX format is not supported in Azure, only **fixed VHD**. You can convert the disk to VHD format using Hyper-V Manager or the convert-vhd cmdlet.
* **Kernel support for mounting UDF file systems is required.** At first boot on Azure, the provisioning configuration is passed to the Linux VM via UDF-formatted media that is attached to the guest. The Azure Linux agent must be able to mount the UDF file system to read its configuration and provision the VM.
-* When installing the Linux system, it's recommended that you use standard partitions rather than LVM (often the default for many installations). These standard partitions avoid LVM name conflicts with cloned VMs, particularly if an OS disk ever needs to be attached to another VM for troubleshooting. [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) or [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) may be used on data disks if preferred.
-* Linux kernel versions earlier than 2.6.37 don't support NUMA on Hyper-V with larger VM sizes. This issue primarily impacts older distributions using the upstream Red Hat 2.6.32 kernel, and was fixed in Oracle Linux 6.6 and later
+* When installing the Linux system, we recommend that you use standard partitions rather than LVM (often the default for many installations). These standard partitions avoid LVM name conflicts with cloned VMs, particularly if an OS disk ever needs to be attached to another VM for troubleshooting. [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) or [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) may be used on data disks if preferred.
+* Linux kernel versions earlier than 2.6.37 don't support NUMA on Hyper-V with larger VM sizes. This issue primarily impacts older distributions using the upstream Red Hat 2.6.32 kernel and was fixed in Oracle Linux 6.6 and later.
* Don't configure a swap partition on the OS disk.
* All VHDs on Azure must have a virtual size aligned to 1 MB. When converting from a raw disk to VHD, you must ensure that the raw disk size is a multiple of 1 MB before conversion. See [Linux Installation Notes](create-upload-generic.md#general-linux-installation-notes) for more information.
* Make sure that the `Addons` repository is enabled. Edit the file `/etc/yum.repos.d/public-yum-ol6.repo` (Oracle Linux 6) or `/etc/yum.repos.d/public-yum-ol7.repo` (Oracle Linux 7), and change the line `enabled=0` to `enabled=1` under **[ol6_addons]** or **[ol7_addons]** in this file.
You must complete specific configuration steps in the operating system for the v
This setting ensures all console messages are sent to the first serial port, which can assist Azure support with debugging issues.
- In addition to the above, it's recommended to *remove* the following parameters:
+    In addition to the above, we recommend that you *remove* the following parameters:
```config-grub
rhgb quiet crashkernel=auto
```
- Graphical and quiet boot is not useful in a cloud environment where we want all the logs to be sent to the serial port.
+ Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port.
The `crashkernel` option may be left configured if desired, but note that this parameter reduces the amount of available memory in the VM by 128 MB or more, which may be problematic on the smaller VM sizes.
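Removing those parameters can be scripted. The sketch below edits a throwaway copy rather than the live bootloader configuration (on GRUB2 systems the real file is `/etc/default/grub`, and a grub config rebuild would follow):

```shell
# Strip rhgb, quiet, and crashkernel=auto from a kernel command line.
grub_copy=$(mktemp)
echo 'GRUB_CMDLINE_LINUX="console=ttyS0 earlyprintk=ttyS0 rhgb quiet crashkernel=auto"' > "$grub_copy"
sed -i -e 's/ rhgb//' -e 's/ quiet//' -e 's/ crashkernel=auto//' "$grub_copy"
cat "$grub_copy"
```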
You must complete specific configuration steps in the operating system for the v
12. Don't create swap space on the OS disk.
- The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached to the VM after provisioning on Azure. The local resource disk is a *temporary* disk, and might be emptied when the VM is deprovisioned. After installing the Azure Linux Agent (see previous step), modify the following parameters in /etc/waagent.conf appropriately:
+ The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached to the VM after provisioning on Azure. The local resource disk is a *temporary* disk and might be emptied when the VM is deprovisioned. After installing the Azure Linux Agent (see previous step), modify the following parameters in /etc/waagent.conf appropriately:
```config-conf
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
- ResourceDisk.MountPoint=/mnt/resource
+ ResourceDisk.MountPoint=/mnt
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2048   ## NOTE: set this to whatever you need it to be.
```
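These parameters can also be set non-interactively with `sed`. A sketch against a scratch copy of the file (the real path is `/etc/waagent.conf`; the 2048 MB size is just an example value):

```shell
# Enable agent-managed swap on the resource disk in a waagent.conf-style file.
conf=$(mktemp)
printf 'ResourceDisk.EnableSwap=n\nResourceDisk.SwapSizeMB=0\n' > "$conf"
sed -i -e 's/^ResourceDisk.EnableSwap=.*/ResourceDisk.EnableSwap=y/' \
       -e 's/^ResourceDisk.SwapSizeMB=.*/ResourceDisk.SwapSizeMB=2048/' "$conf"
cat "$conf"
```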
You must complete specific configuration steps in the operating system for the v
## Oracle Linux 7.0 and later

**Changes in Oracle Linux 7**
-Preparing an Oracle Linux 7 virtual machine for Azure is similar to Oracle Linux 6, however there are several important differences worth noting:
+Preparing an Oracle Linux 7 virtual machine for Azure is similar to Oracle Linux 6; however, there are several significant differences worth noting:
* Azure supports Oracle Linux with either the Unbreakable Enterprise Kernel (UEK) or the Red Hat Compatible Kernel. Oracle Linux with UEK is recommended.
-* The NetworkManager package no longer conflicts with the Azure Linux agent. This package is installed by default and we recommend that it's not removed.
+* The NetworkManager package no longer conflicts with the Azure Linux agent. This package is installed by default, and we recommend that it's not removed.
* GRUB2 is now used as the default bootloader, so the procedure for editing kernel parameters has changed (see below).
* XFS is now the default file system. The ext4 file system can still be used if desired.
Preparing an Oracle Linux 7 virtual machine for Azure is similar to Oracle Linux
rhgb quiet crashkernel=auto
```
- Graphical and quiet boot is not useful in a cloud environment where we want all the logs to be sent to the serial port.
+ Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port.
The `crashkernel` option may be left configured if desired, but note that this parameter will reduce the amount of available memory in the VM by 128 MB or more, which may be problematic on the smaller VM sizes.
Preparing an Oracle Linux 7 virtual machine for Azure is similar to Oracle Linux
if [[ -f /mnt/resource/swapfile ]]; then
   echo Removing swapfile - Oracle Linux uses a swapfile by default
- swapoff /mnt/resource/swapfile
- rm /mnt/resource/swapfile -f
+ swapoff /mnt/swapfile
+ rm /mnt/swapfile -f
fi echo "Add console log file"
Preparing an Oracle Linux 7 virtual machine for Azure is similar to Oracle Linux
15. Swap configuration. Don't create swap space on the operating system disk.
- Previously, the Azure Linux Agent was used automatically to configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However this is now handled by cloud-init, you **must not** use the Linux Agent to format the resource disk create the swap file, modify the following parameters in `/etc/waagent.conf` appropriately:
+    Previously, the Azure Linux Agent was used to automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However, this is now handled by cloud-init, so you **must not** use the Linux Agent to format the resource disk or create the swap file. Instead, modify the following parameters in `/etc/waagent.conf` appropriately:
```bash
sudo sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf
sudo sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf
```
- If you want mount, format and create swap you can either:
+    If you want to mount, format, and create swap, you can either:
* Pass this in as a cloud-init config every time you create a VM.
* Use a cloud-init directive baked into the image that will do this every time the VM is created:
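For the second option, a baked-in cloud-init directive might look like the following sketch. The `ephemeral0.1`/`ephemeral0.2` names follow cloud-init's convention for partitions of the Azure resource disk; the 66/33 split and partition type 82 (Linux swap) are placeholder assumptions to adjust as needed:

```yaml
#cloud-config
# Sketch: split the temporary resource disk into a filesystem and swap.
disk_setup:
  ephemeral0:
    table_type: mbr
    layout: [66, [33, 82]]      # 66% filesystem, 33% swap (partition type 82)
    overwrite: true
fs_setup:
  - device: ephemeral0.1
    filesystem: ext4
  - device: ephemeral0.2
    filesystem: swap
mounts:
  - ["ephemeral0.1", "/mnt"]
  - ["ephemeral0.2", "none", "swap", "sw", "0", "0"]
```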
virtual-machines Redhat Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/redhat-create-upload-vhd.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-In this article, you'll learn how to prepare a Red Hat Enterprise Linux (RHEL) virtual machine for use in Azure. The versions of RHEL that are covered in this article are 6.X, 7.X and 8.X. The hypervisors for preparation that are covered in this article are Hyper-V, kernel-based virtual machine (KVM), and VMware. For more information about eligibility requirements for participating in Red Hat's Cloud Access program, see [Red Hat's Cloud Access website](https://www.redhat.com/en/technologies/cloud-computing/cloud-access) and [Running RHEL on Azure](https://access.redhat.com/ecosystem/ccsp/microsoft-azure). For ways to automate building RHEL images, see [Azure Image Builder](../image-builder-overview.md).
+In this article, you'll learn how to prepare a Red Hat Enterprise Linux (RHEL) virtual machine for use in Azure. The versions of RHEL that are covered in this article are 6.X, 7.X, and 8.X. The hypervisors for preparation that are covered in this article are Hyper-V, kernel-based virtual machine (KVM), and VMware. For more information about eligibility requirements for participating in Red Hat's Cloud Access program, see [Red Hat's Cloud Access website](https://www.redhat.com/en/technologies/cloud-computing/cloud-access) and [Running RHEL on Azure](https://access.redhat.com/ecosystem/ccsp/microsoft-azure). For ways to automate building RHEL images, see [Azure Image Builder](../image-builder-overview.md).
> [!NOTE]
-> Be aware of versions that are End Of Life (EOL) and no longer supported by Redhat. Uploaded images that are, at or beyond EOL will be supported on a reasonable business effort basis. Link to Redhat's [Product Lifecycle](https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204)
+> Be aware of versions that are End Of Life (EOL) and no longer supported by Red Hat. Uploaded images that are at or beyond EOL will be supported on a reasonable business effort basis. See Red Hat's [Product Lifecycle](https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204).
## Hyper-V Manager
This section assumes that you've already obtained an ISO file from the Red Hat w
* Azure supports Gen1 (BIOS boot) & Gen2 (UEFI boot) virtual machines.
* The maximum size that's allowed for the VHD is 1,023 GB.
* The vfat kernel module must be enabled in the kernel.
-* Logical Volume Manager (LVM) is supported and may be used on the OS disk or data disks in Azure virtual machines. However, in general it's recommended to use standard partitions on the OS disk rather than LVM. This practice will avoid LVM name conflicts with cloned virtual machines, particularly if you ever need to attach an operating system disk to another identical virtual machine for troubleshooting. See also [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) and [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) documentation.
+* Logical Volume Manager (LVM) is supported and may be used on the OS disk or data disks in Azure virtual machines. However, in general, we recommend using standard partitions on the OS disk rather than LVM. This practice will avoid LVM name conflicts with cloned virtual machines, particularly if you ever need to attach an operating system disk to another identical virtual machine for troubleshooting. See the [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) and [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) documentation.
* **Kernel support for mounting Universal Disk Format (UDF) file systems is required**. At first boot on Azure, the UDF-formatted media that is attached to the guest passes the provisioning configuration to the Linux virtual machine. The Azure Linux Agent must be able to mount the UDF file system to read its configuration and provision the virtual machine; without this, provisioning fails.
* Don't configure a swap partition on the operating system disk. More information about this can be found in the following steps.
This section assumes that you've already obtained an ISO file from the Red Hat w
> [!NOTE]
-> **(_Cloud-init >= 21.2 removes the udf requirement._)** however without the udf module enabled the cdrom will not mount during provisioning preventing custom data from being applied. A workaround for this would be to apply custom data using user data however, unlike custom data user data is not encrypted. https://cloudinit.readthedocs.io/en/latest/topics/format.html
+> **_Cloud-init >= 21.2 removes the udf requirement_**. However, without the udf module enabled, the cdrom won't mount during provisioning, preventing custom data from being applied. A workaround for this is to apply custom data using user data. However, unlike custom data, user data isn't encrypted. https://cloudinit.readthedocs.io/en/latest/topics/format.html
### RHEL 6 using Hyper-V Manager

> [!IMPORTANT]
-> Starting on 30 November 2020, Red Hat Enterprise Linux 6 will reach end of maintenance phase. The maintenance phase is followed by the Extended Life Phase. As Red Hat Enterprise Linux 6 transitions out of the Full/Maintenance Phases, it is strongly recommended upgrading to Red Hat Enterprise Linux 7 or 8 or 9. If customers must stay on Red Hat Enterprise Linux 6, it's recommended to add the Red Hat Enterprise Linux Extended Life Cycle Support (ELS) Add-On.
+> Starting on 30 November 2020, Red Hat Enterprise Linux 6 will reach end of maintenance phase. The maintenance phase is followed by the Extended Life Phase. As Red Hat Enterprise Linux 6 transitions out of the Full/Maintenance Phases, we strongly recommend upgrading to Red Hat Enterprise Linux 7, 8, or 9. If customers must stay on Red Hat Enterprise Linux 6, we recommend adding the Red Hat Enterprise Linux Extended Life Cycle Support (ELS) Add-On.
1. In Hyper-V Manager, select the virtual machine.
This section assumes that you've already obtained an ISO file from the Red Hat w
```
> [!NOTE]
> When using Accelerated Networking (AN), the synthetic interface that is created must be configured to be unmanaged using a udev rule. This prevents NetworkManager from assigning the same IP to it as the primary interface. <br>
- To apply it:<br>
+
+ To apply it:<br>
+    ```
+    sudo cat <<EOF>> /etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules
+    # Accelerated Networking on Azure exposes a new SRIOV interface to the VM.
EOF
rhgb quiet crashkernel=auto
```
- Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more. This configuration might be problematic on smaller virtual machine sizes.
+ Graphical and quiet boots aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more. This configuration might be problematic on smaller virtual machine sizes.
11. Ensure that the secure shell (SSH) server is installed and configured to start at boot time, which is usually the default. Modify /etc/ssh/sshd_config to include the following line:
EOF
15. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
    > [!NOTE]
- > If you're migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step
+ > If you're migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step.
```bash
sudo waagent -force -deprovision
EOF
sudo subscription-manager register --auto-attach --username=XXX --password=XXX ```
-7. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this modification, open `/etc/default/grub` in a text editor, and edit the `GRUB_CMDLINE_LINUX` parameter. For example:
+7. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this modification, open `/etc/default/grub` in a text editor and edit the `GRUB_CMDLINE_LINUX` parameter. For example:
```config-grub
EOF
rhgb quiet crashkernel=auto
```
- Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
+ Graphical and quiet boots aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
8. After you're done editing `/etc/default/grub`, run the following command to rebuild the grub configuration:
EOF
13. Swap configuration. Don't create swap space on the operating system disk.
- Previously, the Azure Linux Agent was used to automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However this is now handled by cloud-init, you **must not** use the Linux Agent to format the resource disk create the swap file, modify the following parameters in `/etc/waagent.conf` appropriately:
+    Previously, the Azure Linux Agent was used to automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However, this is now handled by cloud-init, so you **must not** use the Linux Agent to format the resource disk or create the swap file. Instead, modify the following parameters in `/etc/waagent.conf` appropriately:
```config
ResourceDisk.Format=n
ResourceDisk.EnableSwap=n
```
- If you want mount, format and create swap you can either:
+    If you want to mount, format, and create swap, you can either:
* Pass this in as a cloud-init config every time you create a VM through customdata. This is the recommended method.
* Use a cloud-init directive baked into the image that will do this every time the VM is created.
EOF
rhgb quiet crashkernel=auto
```
- Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
+ Graphical and quiet boots aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
7. After you are done editing `/etc/default/grub`, run the following command to rebuild the grub configuration:
EOF
sudo sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf
```
> [!NOTE]
- > If you are migrating a specific virtual machine and don't wish to create a generalized image, set `Provisioning.Agent=disabled` on the `/etc/waagent.conf` config.
+ > If you're migrating a specific virtual machine and don't wish to create a generalized image, set `Provisioning.Agent=disabled` on the `/etc/waagent.conf` config.
1. Configure mounts:
EOF
11. Swap configuration. Don't create swap space on the operating system disk.
- Previously, the Azure Linux Agent was used to automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However this is now handled by cloud-init, you **must not** use the Linux Agent to format the resource disk create the swap file, modify the following parameters in `/etc/waagent.conf` appropriately:
+    Previously, the Azure Linux Agent was used to automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However, this is now handled by cloud-init, so you **must not** use the Linux Agent to format the resource disk or create the swap file. Instead, modify the following parameters in `/etc/waagent.conf` appropriately:
```bash
ResourceDisk.Format=n
EOF
export HISTSIZE=0
```
> [!CAUTION]
- > If you are migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step. Running the command `waagent -force -deprovision+user` will render the source machine unusable, this step is intended only to create a generalized image.
+    > If you're migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step. Running the command `waagent -force -deprovision+user` will render the source machine unusable; this step is intended only to create a generalized image.
14. Click **Action** > **Shut Down** in Hyper-V Manager. Your Linux VHD is now ready to be [**uploaded to Azure**](./upload-vhd.md#option-1-upload-a-vhd).
This section shows you how to use KVM to prepare a [RHEL 6](#rhel-6-using-kvm) o
### RHEL 6 using KVM

> [!IMPORTANT]
-> Starting on 30 November 2020, Red Hat Enterprise Linux 6 will reach end of maintenance phase. The maintenance phase is followed by the Extended Life Phase. As Red Hat Enterprise Linux 6 transitions out of the Full/Maintenance Phases, it is strongly recommended upgrading to Red Hat Enterprise Linux 7 or 8 or 9. If customers must stay on Red Hat Enterprise Linux 6, it's recommended to add the Red Hat Enterprise Linux Extended Life Cycle Support (ELS) Add-On.
+> Starting on 30 November 2020, Red Hat Enterprise Linux 6 will reach end of maintenance phase. The maintenance phase is followed by the Extended Life Phase. As Red Hat Enterprise Linux 6 transitions out of the Full/Maintenance Phases, we strongly recommend upgrading to Red Hat Enterprise Linux 7, 8, or 9. If customers must stay on Red Hat Enterprise Linux 6, we recommend adding the Red Hat Enterprise Linux Extended Life Cycle Support (ELS) Add-On.
1. Download the KVM image of RHEL 6 from the Red Hat website.
This section shows you how to use KVM to prepare a [RHEL 6](#rhel-6-using-kvm) o
```
> [!NOTE]
> When using Accelerated Networking (AN), the synthetic interface that is created must be configured to be unmanaged using a udev rule. This prevents NetworkManager from assigning the same IP to it as the primary interface. <br>
- To apply it:<br>
+
+ To apply it:<br>
+    ```
+    sudo cat <<EOF>> /etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules
+    # Accelerated Networking on Azure exposes a new SRIOV interface to the VM.
EOF
rhgb quiet crashkernel=auto
```
- Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
+ Graphical and quiet boots aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
10. Add Hyper-V modules to initramfs:
EOF
### RHEL 7 using KVM
-1. Download the KVM image of RHEL 7 from the Red Hat website. This procedure uses RHEL 7 as the example.
+1. Download the KVM image of RHEL 7 from the Red Hat website. This procedure uses RHEL 7 as an example.
2. Set a root password.
EOF
sudo subscription-manager register --auto-attach --username=XXX --password=XXX ```
-8. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this configuration, open `/etc/default/grub` in a text editor, and edit the `GRUB_CMDLINE_LINUX` parameter. For example:
+8. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this configuration, open `/etc/default/grub` in a text editor and edit the `GRUB_CMDLINE_LINUX` parameter. For example:
```config-grub
GRUB_CMDLINE_LINUX="console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"
EOF
rhgb quiet crashkernel=auto
```
- Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
+ Graphical and quiet boots aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
9. After you are done editing `/etc/default/grub`, run the following command to rebuild the grub configuration:
This section assumes that you have already installed a RHEL virtual machine in V
```
> [!NOTE]
> When using Accelerated Networking (AN), the synthetic interface that is created must be configured to be unmanaged using a udev rule. This prevents NetworkManager from assigning the same IP to it as the primary interface. <br>
- To apply it:<br>
+
+To apply it:<br>
+    ```
+    sudo cat <<EOF>> /etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules
+    # Accelerated Networking on Azure exposes a new SRIOV interface to the VM.
EOF
sudo subscription-manager repos --enable=rhel-6-server-extras-rpms ```
-8. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this, open `/etc/default/grub` in a text editor, and edit the `GRUB_CMDLINE_LINUX` parameter. For example:
+8. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this, open `/etc/default/grub` in a text editor and edit the `GRUB_CMDLINE_LINUX` parameter. For example:
```config-grub
GRUB_CMDLINE_LINUX="console=ttyS0 earlyprintk=ttyS0"
EOF
rhgb quiet crashkernel=auto
```
- Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
+ Graphical and quiet boots aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
9. Add Hyper-V modules to initramfs:
EOF
14. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
    > [!NOTE]
- > If you're migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step
+ > If you're migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step.
```bash
sudo rm -rf /var/lib/waagent/
EOF
sudo subscription-manager register --auto-attach --username=XXX --password=XXX ```
-5. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this modification, open `/etc/default/grub` in a text editor, and edit the `GRUB_CMDLINE_LINUX` parameter. For example:
+5. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this modification, open `/etc/default/grub` in a text editor and edit the `GRUB_CMDLINE_LINUX` parameter. For example:
```config-grub
GRUB_CMDLINE_LINUX="console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"
EOF
rhgb quiet crashkernel=auto
```
- Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
+ Graphical and quiet boots aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
6. After you are done editing `/etc/default/grub`, run the following command to rebuild the grub configuration:
EOF
Follow the steps in 'Prepare a RHEL 7 virtual machine from Hyper-V Manager', step 15, 'Deprovision'
-15. Shut down the virtual machine, and convert the VMDK file to the VHD format.
+15. Shut down the virtual machine and convert the VMDK file to the VHD format.
> [!NOTE] > There is a known bug in qemu-img versions >=2.2.1 that results in an improperly formatted VHD. The issue has been fixed in QEMU 2.6. It is recommended to use either qemu-img 2.2.0 or lower, or update to 2.6 or higher. Reference: https://bugs.launchpad.net/qemu/+bug/1490611.
This section shows you how to prepare a RHEL 7 distro from an ISO using a kickst
### RHEL 7 from a kickstart file
-1. Create a kickstart file that includes the following content, and save the file. For details about kickstart installation, see the [Kickstart Installation Guide](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/installation_guide/chap-kickstart-installations).
+1. Create a kickstart file that includes the following content and save the file. For details about kickstart installation, see the [Kickstart Installation Guide](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/installation_guide/chap-kickstart-installations).
```text
# Kickstart for provisioning a RHEL 7 Azure VM
This section shows you how to prepare a RHEL 7 distro from an ISO using a kickst
- device: ephemeral0.2
  filesystem: swap
mounts:
- - ["ephemeral0.1", "/mnt/resource"]
+ - ["ephemeral0.1", "/mnt"]
- ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.device-timeout=2,x-systemd.requires=cloud-init.service", "0", "0"]
EOF
virtual-machines Suse Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/suse-create-upload-vhd.md
This article assumes that you have already installed a SUSE or openSUSE Leap Lin
> [!NOTE]
-> **(_Cloud-init >= 21.2 removes the udf requirement._)** however without the udf module enabled the cdrom will not mount during provisioning preventing custom data from being applied. A workaround for this would be to apply custom data using user data however, unlike custom data user data is not encrypted. https://cloudinit.readthedocs.io/en/latest/topics/format.html
+> **_Cloud-init >= 21.2 removes the udf requirement_**. However, without the udf module enabled, the cdrom won't mount during provisioning, preventing custom data from being applied. A workaround for this is to apply custom data using user data. However, unlike custom data, user data isn't encrypted. https://cloudinit.readthedocs.io/en/latest/topics/format.html
## Use SUSE Studio
As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
Don't create swap space on the operating system disk.
- Previously, the Azure Linux Agent was used to automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However this step is now handled by cloud-init, you **must not** use the Linux Agent to format the resource disk or create the swap file. Use these commands to modify `/etc/waagent.conf` appropriately:
+
+ Previously, the Azure Linux Agent was used to automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However, this step is now handled by cloud-init, so you **must not** use the Linux Agent to format the resource disk or create the swap file. Use these commands to modify `/etc/waagent.conf` appropriately:
```bash sudo sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf
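The full pair of agent settings usually changed here can be exercised safely on a scratch copy of the file; the `ResourceDisk.EnableSwap` setting is assumed from the Linux agent configuration documentation linked below, so verify it against your agent version:

```shell
# Sketch: turn off agent-managed resource-disk formatting and swap,
# exercised against a scratch copy rather than the real /etc/waagent.conf.
conf=$(mktemp)
printf 'ResourceDisk.Format=y\nResourceDisk.EnableSwap=y\n' > "$conf"
sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' "$conf"
sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' "$conf"
cat "$conf"
```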
As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
For more information on the waagent.conf configuration options, see the [Linux agent configuration](../extensions/agent-linux.md#configuration) documentation.
- If you want to mount, format and create a swap partition you can either:
+ If you want to mount, format, and create a swap partition you can either:
* Pass this configuration in as a cloud-init config every time you create a VM.
* Use a cloud-init directive baked into the image that configures swap space every time the VM is created:
As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
- device: ephemeral0.2 filesystem: swap mounts:
- - ["ephemeral0.1", "/mnt/ressource"]
+ - ["ephemeral0.1", "/mnt"]
- ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"] EOF ``` > [!NOTE]
-> Make sure the **'udf'** module is enabled. Blocklisting or removing it will cause a provisioning failure. **(_Cloud-init >= 21.2 removes the udf requirement. Please read top of document for more detail)**
+> Make sure the **'udf'** module is enabled. Removing or disabling it will cause a provisioning or boot failure. **(_Cloud-init >= 21.2 removes the udf requirement. Please read the top of the document for more detail._)**
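Reassembled as a complete cloud-config, the swap directive above might look like the following sketch. The `disk_setup` and `fs_setup` values are assumed from the common Azure cloud-init swap example rather than taken from this excerpt, so verify them against your image:

```yaml
#cloud-config
# Sketch: ephemeral resource-disk swap via cloud-init (partition layout
# assumed from the standard Azure example; adjust sizes as needed).
disk_setup:
  ephemeral0:
    table_type: mbr
    layout: [66, [33, 82]]
    overwrite: true
fs_setup:
  - device: ephemeral0.1
    filesystem: ext4
  - device: ephemeral0.2
    filesystem: swap
mounts:
  - ["ephemeral0.1", "/mnt"]
  - ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"]
```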
15. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure: > [!NOTE]
-> If you're migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step
+> If you're migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step.
```bash sudo rm -f /var/log/waagent.log
As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
sudo zypper ar -f http://download.opensuse.org/update/15.2 openSUSE_15.2_Updates ```
- You can then verify the repositories have been added by running the command '`zypper lr`' again. If one of the relevant update repositories isn't enabled, enable it with following command:
+ You can then verify the repositories have been added by running the command '`zypper lr`' again. If one of the relevant update repositories isn't enabled, enable it with the following command:
```bash sudo zypper mr -e [NUMBER OF REPOSITORY]
As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
9. Ensure that the SSH server is installed and configured to start at boot time. 10. Don't create swap space on the OS disk.
- The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached to the VM after provisioning on Azure. Note that the local resource disk is a *temporary* disk, and might be emptied when the VM is deprovisioned. After installing the Azure Linux Agent (see previous step), modify the following parameters in the "/etc/waagent.conf" as follows:
+ The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached to the VM after provisioning on Azure. Note that the local resource disk is a *temporary* disk and might be emptied when the VM is deprovisioned. After installing the Azure Linux Agent (see previous step), modify the following parameters in the "/etc/waagent.conf" as follows:
```config-conf ResourceDisk.Format=y
As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
12. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure: > [!NOTE]
-> If you're migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step
+> If you're migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step.
```bash sudo rm -f ~/.bash_history # Remove current user history
web-application-firewall Create Custom Waf Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/create-custom-waf-rules.md
Corresponding JSON:
## Example 7
-It isn't uncommon to see Azure Front Door deployed in front of Application Gateway. In order to make sure the traffic received by Application Gateway comes from the Front Door deployment, the best practice is to check if the `X-Azure-FDID` header contains the expected unique value. For more information on securing access to your application using Azure Front Door, see [How to lock down the access to my backend to only Azure Front Door](../../frontdoor/front-door-faq.yml#how-do-i-lock-down-the-access-to-my-backend-to-only-azure-front-door-)
+It isn't uncommon to see Azure Front Door deployed in front of Application Gateway. In order to make sure the traffic received by Application Gateway comes from the Front Door deployment, the best practice is to check if the `X-Azure-FDID` header contains the expected unique value. For more information on securing access to your application using Azure Front Door, see [How to lock down the access to my backend to only Azure Front Door](../../frontdoor/front-door-faq.yml#what-are-the-steps-to-restrict-the-access-to-my-backend-to-only-azure-front-door-)
Logic: **not** p
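A custom rule implementing this check might look like the following JSON sketch, following the shape of the earlier examples. The rule name, priority, and Front Door ID placeholder are hypothetical, and field spellings (including the schema's `negationConditon`) should be verified against the current Application Gateway WAF custom rule schema:

```json
{
  "name": "blockNonFrontDoorTraffic",
  "priority": 10,
  "ruleType": "MatchRule",
  "action": "Block",
  "matchConditions": [
    {
      "matchVariables": [
        {
          "variableName": "RequestHeaders",
          "selector": "X-Azure-FDID"
        }
      ],
      "operator": "Equal",
      "negationConditon": true,
      "matchValues": [
        "<your-front-door-id>"
      ],
      "transforms": []
    }
  ]
}
```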
web-application-firewall Application Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/shared/application-ddos-protection.md
Application Gateway WAF SKUs can be used to mitigate many L7 DDoS attacks:
## Other considerations
-* Lock down access to public IPs on origin and restrict inbound traffic to only allow traffic from Azure Front Door or Application Gateway to origin. Refer to the [guidance on Azure Front Door](../../frontdoor/front-door-faq.yml#how-do-i-lock-down-the-access-to-my-backend-to-only-azure-front-door-). Application Gateways are deployed in a virtual network, ensure there isn't any publicly exposed IPs.
+* Lock down access to public IPs on origin and restrict inbound traffic to only allow traffic from Azure Front Door or Application Gateway to origin. Refer to the [guidance on Azure Front Door](../../frontdoor/front-door-faq.yml#what-are-the-steps-to-restrict-the-access-to-my-backend-to-only-azure-front-door-). Application Gateways are deployed in a virtual network; ensure there aren't any publicly exposed IPs.
* Switch the WAF policy to prevention mode. Deploying the policy in detection mode only logs traffic and doesn't block it. After verifying and testing your WAF policy with production traffic and fine-tuning it to reduce any false positives, switch the policy to prevention mode (block/defend mode).